virtual-insanity

260310 gh (104 items)

evergreen aggregate 2026-03-10

260310 GitHub roundup

[10/10] overstory (849 stars) Multi-agent orchestration for AI coding agents

overstory (849 stars): Multi-agent orchestration for AI coding agents — pluggable runtime adapters for Claude Code, Pi, and more. Tags: agent_orchestration, hypothesis, LLM learning review.

Score: 10/10 (keywords: claude code, claude, multi-agent, hypothesis)


[9/10] openclaw-mission-control (1849 stars) AI Agent Orchestration Dashboard

openclaw-mission-control (1849 stars): AI Agent Orchestration Dashboard — manage AI agents, assign tasks, and coordinate multi-agent collaboration via OpenClaw Gateway. Tags: agent_orchestration, hypothesis, LLM learning review.

Score: 9/10 (keywords: openclaw, multi-agent, hypothesis)


[9/10] mission-control (1252 stars) AI Agent Orchestration Dashboard

mission-control (1252 stars): AI Agent Orchestration Dashboard — manage AI agents, assign tasks, and coordinate multi-agent collaboration via OpenClaw Gateway. Tags: agent_orchestration, hypothesis, LLM learning review.

Score: 9/10 (keywords: openclaw, multi-agent, hypothesis)


[6/10] agent-framework (7750 stars) A framework for building, orchestrating and deploying AI agents

agent-framework (7750 stars): A framework for building, orchestrating, and deploying AI agents and multi-agent workflows, with support for Python and .NET. Tags: agent_orchestration, hypothesis, LLM learning review.

Score: 6/10 (keywords: multi-agent, hypothesis)


[6/10] goclaw (573 stars) Multi-agent AI gateway with teams, delegation & orchestration

goclaw (573 stars): Multi-agent AI gateway with teams, delegation & orchestration. Single Go binary, 11+ LLM providers, 5 channels. Tags: agent_orchestration, hypothesis, LLM learning review.

Score: 6/10 (keywords: multi-agent, hypothesis)


[10/10] Test insight: ETF sector rebalancing

Original

Test insight: ETF sector rebalancing

This memo is a test fixture for insight generation. Topic: ETF portfolio rebalancing and anomalous conviction-score swings. Suggestion: the AI-sector weight among the 9 most recent ETFs has risen sharply, so rebalancing is needed. Related keywords: etf, conviction, sector, rebalancing, insight, portfolio.

Source: https://example.com

Score: 10/10 (keywords: etf, conviction, sector, rebalancing, portfolio)


[9/10] openclaw/openclaw#20421 docs/ux: thinking token accumulation silently inflates context toward long-context billing tier

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 20421 · url: https://github.com/openclaw/openclaw/issues/20421 · tags: [github, openclaw]

[20421] docs/ux: thinking token accumulation silently inflates context toward long-context billing tier

URL: https://github.com/openclaw/openclaw/issues/20421

Issue body

Observation

On Claude Max plan, a long session with extended thinking enabled hit 171k tokens (86% of 200k window). When manual compaction was triggered, the compaction request itself pushed the total over 200k — triggering Anthropic's 1M context billing tier, which requires the Extra Usage budget. With that budget exhausted, compaction returned 429: Extra usage is required for long context requests.

Root cause

Thinking blocks accumulate in session history — pi-ai includes them in the full message context for each request. With medium thinking (~8,192 budget tokens/turn), a multi-hour session accumulates substantial invisible token overhead on top of visible text.

Compaction is overflow-triggered only (trigger: 'overflow' | 'manual') — there is no proactive threshold. This is reasonable design, but combined with thinking token accumulation, users on plans with billing thresholds can be surprised.

Is this a bug?

Arguably not — this is expected behavior for a very long, thinking-heavy session. The 171k context utilization is accurate. Manual compaction earlier in the session would have avoided the problem.

Possible improvements (not bugs, just UX)

  1. Warn users when context approaches the model's context window (e.g. surface a warning at 75%) so they know to compact manually
  2. Document that thinking tokens accumulate in context history and contribute meaningfully to context growth
  3. Consider whether the context utilization display should more prominently reflect thinking token overhead

Note on compaction design

Reactive compaction is reasonable — proactive compaction would cause unnecessary summarization in sessions that never hit limits. But a visible warning at high utilization would help users avoid this edge case.
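The warning suggested in improvement 1 can be sketched as a simple threshold check. This is a hypothetical helper, not OpenClaw code; the 75% cutoff and all names are illustrative:

```typescript
// Hypothetical helper: warn when a session's context nears the model window.
// The 75% threshold and the names here are illustrative, not OpenClaw code.
const WARN_RATIO = 0.75;

function contextWarning(usedTokens: number, windowTokens: number): string | null {
  const ratio = usedTokens / windowTokens;
  if (ratio < WARN_RATIO) return null;
  return `Context at ${Math.round(ratio * 100)}% of the ${windowTokens}-token window; consider compacting manually.`;
}
```

At the 171k/200k utilization reported above this would have surfaced a warning well before the compaction request itself tipped the session over 200k.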

Source: https://github.com/openclaw/openclaw/issues/20421

Score: 9/10 (keywords: openclaw, anthropic, claude)


[9/10] openclaw/openclaw#20324 Read auto-paging in 2026.2.17 causes significant token usage increase for cron/worker sessions

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 20324 · url: https://github.com/openclaw/openclaw/issues/20324 · tags: [github, openclaw]

[20324] Read auto-paging in 2026.2.17 causes significant token usage increase for cron/worker sessions

URL: https://github.com/openclaw/openclaw/issues/20324

Issue body

Summary

The read tool auto-paging introduced in 2026.2.17 (PR #19508) causes a significant and unexpected increase in Anthropic API token usage, particularly for cron jobs and worker sessions that read multiple files per run.

Observed Behavior

After upgrading from 2026.2.15 to 2026.2.17, weekly Anthropic usage jumped from 44% to 57% on an x20 plan (~$65 equivalent) within a single morning. The spike correlated with morning cron jobs firing (9am cluster).

Root Cause

src/agents/pi-tools.read.ts now auto-pages through up to 8 chunks (MAX_ADAPTIVE_READ_PAGES) and up to 512KB (MAX_ADAPTIVE_READ_MAX_BYTES) per read call when no explicit limit is provided. The budget scales with context window:

contextWindowTokens * 4 chars/token * 0.2 = adaptive read budget

For Opus (200K context): 200,000 * 4 * 0.2 = 160,000 chars (~160KB) per read call, up from the previous 50KB cap.
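As a sanity check, the budget formula above can be written out directly. The function name is hypothetical; the constants come from the issue text:

```typescript
// Reconstruction of the adaptive read budget formula from the issue
// (4 chars/token heuristic, 0.2 share of the context window).
const CHARS_PER_TOKEN = 4;
const ADAPTIVE_SHARE = 0.2;

function adaptiveReadBudgetChars(contextWindowTokens: number): number {
  return contextWindowTokens * CHARS_PER_TOKEN * ADAPTIVE_SHARE;
}

// Opus, 200K window: 200,000 * 4 * 0.2 = 160,000 chars (~160KB) per read
// call, versus the previous 50KB cap.
```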

Impact

In a setup with ~10 cron jobs reading 5-6 files each (workspace config, worker prompts, state files, memory), the input token volume roughly tripled overnight. This is invisible to the user — no change in behavior, just silently larger reads behind the scenes.

Reproduction

  1. Set up several cron jobs (agentTurn) that read workspace files (AGENTS.md, MEMORY.md, etc.)
  2. Compare token usage on 2026.2.15 vs 2026.2.17
  3. Observe ~3x increase in input tokens from read tool results

Suggested Fix

Consider one or more of:

  • Lower the default adaptive share (0.2 seems too aggressive for most use cases)
  • Disable auto-paging for cron/isolated sessions where reads are typically targeted
  • Make auto-paging opt-in rather than default behavior
  • Add a per-agent or global config to control read budget (agents.defaults.readMaxBytes)

Workaround

Rolled back to 2026.2.15. Users can also add explicit limit parameters to read calls to bypass auto-paging.

Environment

  • OpenClaw 2026.2.17 (rolled back to 2026.2.15)
  • Model: claude-opus-4-6 (200K con...

Source: https://github.com/openclaw/openclaw/issues/20324

Score: 9/10 (keywords: openclaw, anthropic, claude)


[9/10] openclaw/openclaw#20449 Model announcement mismatch: assistant reports Codex while run is actually Opus

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 20449 · url: https://github.com/openclaw/openclaw/issues/20449 · tags: [github, openclaw]

[20449] Model announcement mismatch: assistant reports Codex while run is actually Opus

URL: https://github.com/openclaw/openclaw/issues/20449

Issue body

Summary

In some sessions, the assistant announces the wrong active model in user-facing text.

Example observed:

  • Assistant says the session is running on Codex
  • Gateway/embedded run logs show the active model is anthropic/claude-opus-4-6

Why this matters

This breaks trust and causes confusion when users intentionally route work by model.

Suspected root cause

The announcement appears to rely on stale or mismatched runtime metadata (e.g., injected runtime context) rather than authoritative run/session model state at reply time.

Expected behavior

Any model-status statement should reflect the actual active model for the current run/session.

Repro notes (real-world)

  • Session reset/new session path
  • Assistant greeting includes model note per prompt rule
  • Reported model and actual execution model diverge

Request

  • Source model announcement from authoritative runtime state (same source used by run execution/session status)
  • Add a guard/test to prevent divergence between announced and actual model
  • Optionally suppress automatic model announcements unless explicitly requested by user
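The first two requests could be approximated by a guard like this. It is a sketch with illustrative names, not OpenClaw's actual runtime API:

```typescript
// Hypothetical guard: source the user-facing model announcement from the
// authoritative run state and correct divergence. Names are illustrative.
interface RunState {
  activeModel: string; // e.g. "anthropic/claude-opus-4-6"
}

function announceModel(run: RunState, announcedByPrompt: string): string {
  // Never trust injected runtime metadata over the actual run state.
  if (announcedByPrompt !== run.activeModel) {
    // Divergence detected: report the authoritative model instead.
    return `Model: ${run.activeModel}`;
  }
  return `Model: ${announcedByPrompt}`;
}
```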

Source: https://github.com/openclaw/openclaw/issues/20449

Score: 9/10 (keywords: openclaw, anthropic, claude)


[9/10] openclaw/openclaw#20349 [email protected] broken on Android/Termux: koffi native module has no android-arm64 binary

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 20349 · url: https://github.com/openclaw/openclaw/issues/20349 · tags: [github, openclaw]

[20349] [email protected] broken on Android/Termux: koffi native module has no android-arm64 binary

URL: https://github.com/openclaw/openclaw/issues/20349

Issue body

Description

OpenClaw 2026.2.17 fails to start on Android (Termux) due to the koffi native FFI module lacking a prebuilt binary for android-arm64. The gateway crashes immediately on startup with:

[openclaw] Failed to start CLI: Error: Cannot find the native Koffi module; did you bundle it correctly?
    at init (node_modules/koffi/index.js:502:15)
    at Object.<anonymous> (node_modules/koffi/index.js:636:12)

This is a regression — version 2026.2.15 works perfectly on the same device.

Root Cause

@mariozechner/pi-tui was bumped from 0.52.12 to 0.53.0 in OpenClaw 2026.2.17. Version 0.53.0 added koffi ^2.9.0 as a hard dependency:

[email protected]
  └── @mariozechner/[email protected]
        └── koffi@^2.9.0              ← crashes on android-arm64

Previous versions of pi-tui (0.52.10, 0.52.12) did not depend on koffi.

Version Compatibility Matrix

OpenClaw   | pi-tui  | koffi  | Android Status
2026.2.13  | 0.52.10 | (none) | ✅ Works
2026.2.14  | 0.52.12 | (none) | ✅ Works
2026.2.15  | 0.52.12 | (none) | ✅ Works
2026.2.17  | 0.53.0  | ^2.9.0 | ❌ Broken

Koffi Platform Support

Koffi ships prebuilt binaries for:

  • Linux (glibc): x86, x86_64, ARM32 LE, ARM64, RISC-V 64, LoongArch64
  • Linux (musl): x86_64
  • macOS: x86_64, ARM64
  • Windows: x86, x86_64, ARM64
  • FreeBSD: x86, x86_64
  • OpenBSD: x86_64

Android (android-arm64) is not supported. npm installs the package without error (no os/cpu restrictions in koffi's package.json), but the native .node addon fails to load at runtime.

Additional Issue: Config Breaking Changes

When running 2026.2.17 on a config that worked with 2026.2.13, the gateway also logs:

Invalid config at ~/.openclaw/openclaw.json:
- plugins.entries.telegram: plugin not found: telegram
- plugins.slots.memory: plugin not found: memory-core

And:

Error: Unknown model: anthropic/claude-sonnet-4-6

These suggest plugin/model...

Solutions / comments

@sergiomarquezdev

Investigation & Working Workaround (Verified in Production)

I investigated the root cause in the pi-tui source code and built a workaround that's now running in production on Android/Termux.

Root Cause Analysis

Looking at pi-tui's terminal.ts, koffi is imported at the top level:

import koffi from "koffi";

But it's only used in one method — enableWindowsVTInput():

private enableWindowsVTInput(): void {
    if (process.platform !== "win32") return;  // ← exits immediately on Linux/Android/macOS
    try {
        const k32 = koffi.load("kernel32.dll");
        // ... Windows console mode setup
    } catch {
        // koffi not available — already handled
    }
}

So koffi is:

  1. Never called on non-Windows platforms (guarded by process.platform !== "win32")
  2. Already wrapped in try/catch for when it's unavailable
  3. Only crashes because the **t...
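Under that analysis, one plausible workaround shape is to defer the native load to the single guarded call site. This is an illustrative sketch, not the actual pi-tui patch:

```typescript
// Illustrative sketch (not the actual pi-tui patch): resolve the native
// module lazily and tolerate a missing prebuilt binary, instead of
// crashing at import time.
function tryLoadNative<T>(loader: () => T): T | null {
  try {
    return loader();
  } catch {
    return null; // e.g. koffi ships no android-arm64 .node addon
  }
}

// The only call site is win32-guarded, so the load can move inside it:
function enableWindowsVTInput(loadKoffi: () => unknown): void {
  if (process.platform !== "win32") return; // Linux/Android/macOS: never loads
  const koffi = tryLoadNative(loadKoffi);
  if (!koffi) return; // binary missing; skip console-mode setup
  // ... kernel32.dll console mode setup would go here
}
```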

Source: https://github.com/openclaw/openclaw/issues/20349

Score: 9/10 (keywords: openclaw, anthropic, claude)


[8/10] openclaw/openclaw#20079 Anthropic 1M context: model ID '-1m' suffix not stripped before API call → 404

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 20079 · url: https://github.com/openclaw/openclaw/issues/20079 · tags: [github, openclaw]

[20079] Anthropic 1M context: model ID '-1m' suffix not stripped before API call → 404

URL: https://github.com/openclaw/openclaw/issues/20079

Source: https://github.com/openclaw/openclaw/issues/20079

Score: 8/10 (keywords: openclaw, anthropic)


[8/10] openclaw/openclaw#19965 Proposal: Round timestamps to 5-minute intervals to enable Anthropic prompt caching (fixes #19534)

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 19965 · url: https://github.com/openclaw/openclaw/issues/19965 · tags: [github, openclaw]

[19965] Proposal: Round timestamps to 5-minute intervals to enable Anthropic prompt caching (fixes #19534)

URL: https://github.com/openclaw/openclaw/issues/19965

Issue body

Background

Issue #19534 reports that Anthropic prompt caching is not working - Cache Read is always 0 despite cacheControl being enabled.

Root Cause Analysis

After investigation, I found that injectTimestamp() in src/gateway/server-methods/agent-timestamp.ts adds a unique timestamp to every user message:

// Current behavior: unique timestamp every request
const now = opts?.now ?? new Date();
return `[${dow} ${formatted}] ${message}`;
// Example: [Wed 2026-02-18 12:30:45 UTC] hello

Since these stamped messages are included in the LLM prompt, each request has different content, breaking Anthropic's prompt cache which requires exact byte-for-byte matches.

Proposed Solution

Round timestamps to 5-minute intervals:

const fiveMinutes = 5 * 60 * 1000;
const rounded = new Date(Math.floor(now.getTime() / fiveMinutes) * fiveMinutes);
// Now: [Wed 2026-02-18 12:30 UTC] hello
// After 5 mins: [Wed 2026-02-18 12:35 UTC] hello

This way, requests within the same 5-minute window share identical message content and benefit from prompt caching.
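The proposal can be sketched end to end as follows. The stamp format here is simplified for illustration; the real formatting lives in agent-timestamp.ts:

```typescript
// Self-contained sketch of the 5-minute rounding proposal. The stamp
// format is simplified; the real formatting lives in agent-timestamp.ts.
const FIVE_MINUTES = 5 * 60 * 1000;

function roundToBucket(now: Date): Date {
  return new Date(Math.floor(now.getTime() / FIVE_MINUTES) * FIVE_MINUTES);
}

function stamp(message: string, now: Date): string {
  const bucket = roundToBucket(now)
    .toISOString()
    .slice(0, 16) // minute precision: "2026-02-18T12:30"
    .replace("T", " ");
  return `[${bucket} UTC] ${message}`;
}
```

Two requests 90 seconds apart now carry byte-identical prefixes, which is exactly what Anthropic's exact-match prompt cache requires.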

Impact

  • Before: Cache Read = 0, Cache Write = ~170k tokens every request (~$0.50/msg)
  • After: Cache Read = ~170k tokens, Cache Write = ~100-200 tokens per window (~$0.05/msg)
  • Cost reduction: ~90%

Trade-offs

  1. Precision: Timestamp precision reduced from seconds to 5 minutes
  2. Staleness: Messages could show timestamp up to 5 minutes old
  3. User benefit: Most users care more about cost savings than second-level precision

Questions for Discussion

  1. Is 5-minute interval the right choice? (Could be 1min, 10min, etc.)
  2. Should this be configurable via agents.defaults.timestampInterval?
  3. Are there other dynamic content in messages that also break caching?

/cc @HenryLoenwind @arosstale - would appreciate your thoughts on this approach

Source: https://github.com/openclaw/openclaw/issues/19965

Score: 8/10 (keywords: openclaw, anthropic)


[8/10] openclaw/openclaw#21189 Bug: Multi-Bot Telegram Routing - Incoming Messages Not Delivered to Non-Default Agents

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 21189 · url: https://github.com/openclaw/openclaw/issues/21189 · tags: [github, openclaw]

[21189] Bug: Multi-Bot Telegram Routing - Incoming Messages Not Delivered to Non-Default Agents

URL: https://github.com/openclaw/openclaw/issues/21189

Issue body


Summary

In a multi-agent OpenClaw setup with separate Telegram bots per agent, incoming Telegram messages are not routed to the correct agent session. Messages are polled and consumed (offset advances) but never trigger agent turns for non-default agents.

Environment

  • OpenClaw Version: 2026.2.12 (f9e444d)
  • Node.js: v22.x
  • OS: Linux
  • Setup: Multi-agent with two Telegram bots

Steps to Reproduce

  1. Configure multi-agent setup with two Telegram bots:
{
  "bindings": [
    {
      "agentId": "main",
      "match": {
        "channel": "telegram",
        "accountId": "default"
      }
    },
    {
      "agentId": "neo",
      "match": {
        "channel": "telegram",
        "accountId": "neo"
      }
    }
  ],
  "channels": {
    "telegram": {
      "enabled": true,
      "botToken": "YOUR_BOT_TOKEN_HERE",
      "accounts": {
        "default": {
          "name": "Agent One",
          "dmPolicy": "allowlist",
          "allowFrom": [123456789]
        },
        "neo": {
          "name": "Agent Two",
          "botToken": "SECOND_BOT_TOKEN_HERE",
          "dmPolicy": "allowlist",
          "allowFrom": [123456789]
        }
      }
    }
  }
}
  2. Restart gateway: openclaw gateway restart
  3. Send a message to the non-default agent's bot
  4. Observe behavior

Expected Behavior

  • Message should be delivered to the correct agent based on accountId binding
  • Agent should receive the message and trigger a turn
  • Agent should respond via Telegram

Actual Behavior

  • Message is polled successfully (update offset advances)
  • Message is consumed from Telegram API
  • openclaw channels status --probe shows in:recent (bot received something)
  • BUT: Message never reaches agents session
  • BUT: No agent turn is triggered
  • BUT: audit failed in channel status
  • Session shows totalTokens: 0 after messages sent

D...
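For reference, the routing the bindings config above expects can be sketched as a simple matcher. Types and names are illustrative, not OpenClaw internals:

```typescript
// Illustrative sketch (not OpenClaw internals) of the routing the bindings
// config above expects: channel + accountId selects the agent.
interface Binding {
  agentId: string;
  match: { channel: string; accountId: string };
}

function resolveAgent(bindings: Binding[], channel: string, accountId: string): string | null {
  const hit = bindings.find(
    (b) => b.match.channel === channel && b.match.accountId === accountId,
  );
  return hit ? hit.agentId : null;
}
```

With the config above, a message arriving on the second bot should resolve to neo; the report says it is polled and consumed but never reaches that agent's session.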

Source: https://github.com/openclaw/openclaw/issues/21189

Score: 8/10 (keywords: openclaw, multi-agent)


[8/10] openclaw/openclaw#20250 Raw LLM API errors leak to end users via WhatsApp/messaging channels

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 20250 · url: https://github.com/openclaw/openclaw/issues/20250 · tags: [github, openclaw]

[20250] Raw LLM API errors leak to end users via WhatsApp/messaging channels

URL: https://github.com/openclaw/openclaw/issues/20250

Solutions / comments

@nikolasdehor

Confirming this bug from production experience.

We hit a related but distinct variant of this during initial setup of our self-hosted gateway (OpenClaw v2026.2.15). The error messages leaked to our WhatsApp contacts were:

  • FailoverError: HTTP 401 authentication_error — raw auth failure from the Anthropic provider, delivered verbatim as a WhatsApp message to the end user
  • All models failed (2): [model-a] HTTP 401 ..., [model-b] HTTP 401 ... — the entire failover chain result, including all provider-specific error strings, sent as a chat message

The temporary fix we applied was switching the primary model to one that had a valid token, which stopped the auth errors from occurring, so the leak stopped. But that's not a real fix — the gateway should have never forwarded those strings to the messaging channel in the first place.

Question for the author / maintainers: Does PR #20251 cover the auth error path specifically? The fix I see guards api_error, server_error, and `inter...

@aldoeliacim

@nikolasdehor Thanks for the detailed feedback on the PR! Just pushed an expanded fix (c49276d) that covers the additional error paths you identified:

Addressed in this commit:

  • ✅ 401 authentication_error / permission_error — now suppressed with a generic "AI service temporarily unavailable" message, no raw provider details leaked
  • ✅ FailoverError: ... wrappers — stripped before reaching users; the inner error is sanitized recursively
  • ✅ All models failed (N): ... — fully suppressed (the chain contains provider/model names)
  • ✅ Transient API errors trigger failover: classifyFailoverReason now treats api_error/server_error/internal_error JSON payloads as retryable ("timeout" reason), so they fail over to the next model instead of surfacing immediately
  • ✅ agent-runner-execution.ts catch block: ALL errors are now sanitized via sanitizeUserFacingText(..., { errorContext: true }) before reaching end users, not just transient HTTP errors
  • ✅ *...
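The suppression idea can be sketched as below. This is only a sketch with a hypothetical function name; the real sanitizeUserFacingText in the fix takes an options argument and is more thorough:

```typescript
// Sketch of the suppression idea from the fix above. The pattern list and
// the function name are illustrative, not the shipped implementation.
const SENSITIVE_PATTERNS = [
  /FailoverError:/,
  /All models failed \(\d+\):/,
  /HTTP 401/,
  /authentication_error|permission_error/,
];

function sanitizeErrorForUser(text: string): string {
  return SENSITIVE_PATTERNS.some((re) => re.test(text))
    ? "AI service temporarily unavailable."
    : text;
}
```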

Source: https://github.com/openclaw/openclaw/issues/20250

Score: 8/10 (keywords: openclaw, anthropic)


[8/10] openclaw/openclaw#13989 Webchat UI renders inbound metadata as user message content

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 13989 · url: https://github.com/openclaw/openclaw/issues/13989 · tags: [github, openclaw]

[13989] Webchat UI renders inbound metadata as user message content

URL: https://github.com/openclaw/openclaw/issues/13989

Solutions / comments

@w0nk1

Additional observation: This also affects Telegram (not just Webchat/TUI)

We're seeing the same metadata leak in Telegram DMs. In our case, the agent (not the UI) echoes the Conversation info (untrusted metadata) block as part of its assistant reply — it gets sent back to the user as visible Telegram message content.

Reproduction:

  • Multi-agent setup (two agents bound to different Telegram users)
  • Over time, the model learns to echo the metadata block as "normal" prefix behavior
  • Checked session logs: 96 out of ~100 assistant messages contained the leaked metadata block

Root cause (our assessment): The metadata is injected as a user-message prefix. The model sees it as part of the conversation and starts mirroring it in responses. Once it happens a few times, in-context learning reinforces the pattern.

Workaround: Added explicit instructions in the agent's SOUL.md/AGENTS.md to never output metadata blocks. Combined with a session reset, this appears to fix i...

@tauceti82

Just upgraded from 2026.2.15 to 2026.2.17, and now seeing this metadata block on every message in the webchat UI:

Conversation info (untrusted metadata): { "message_id": "...", "sender": "openclaw-control-ui" }

This wasn't present in 2026.2.15, so it appears to be a regression in the latest release.

PR #15998 looks like it addresses this for TUI – does webchat need a separate fix, or should that PR handle both?

@ArabianCowboy

Thanks for the previous fix — this appears to be a regression in the latest patch.

Regression report

Current version

  • OpenClaw: 2026.2.17
  • Surface: webchat (openclaw-control-ui)
  • Deployment: VPS (gateway reachable via local/webchat UI)

What I see

Every inbound user message is rendered with the metadata envelope at the top, e.g.:

Conversation info (untrusted metadata):
{
  "message_id": "....",
  "sender": "openclaw-control-ui"
}

[Wed ...] <actual user message>

Expected behavior

  • Metadata should remain internal context for the agent.
  • UI should display only the user-authored message text.

Why this looks like regression

This behavior had been fixed previously, but is now present again on 2026.2.17.

Notes

  • Reproducible on every message in my session.
  • Looks identical to prior issues about envelope leakage in UI rendering.

@orionbusinessbot-max

+1 same issue no error or bugs in the logs

@nybe

Update on 2026.2.17: Still seeing this in the latest release. The metadata block is rendering in the webchat UI for messages from Telegram/control-ui. Example from today (Feb 18, 2026):

Conversation info (untrusted metadata):
{
  "message_id": "ed08-3b-bf-c3-7ede9e",
  "sender": "openclaw-control-ui"
}
[actual message content here]

The 2026.2.17 changelog mentions stripping conversation_label, but the full Conversation info (untrusted metadata): { ... } block is still leaking through to the UI. The filter needs to catch the entire wrapper, not just the content.
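Catching the entire wrapper rather than individual fields could be as simple as anchoring a regex on the whole envelope. This is illustrative, not the shipped filter, and it assumes the metadata JSON contains no nested braces:

```typescript
// Illustrative filter (not the shipped fix): strip the whole
// "Conversation info (untrusted metadata): { ... }" envelope, not just
// individual fields. Assumes the metadata JSON has no nested braces.
const ENVELOPE = /^Conversation info \(untrusted metadata\):\s*\{[\s\S]*?\}\s*/;

function stripMetadataEnvelope(message: string): string {
  return message.replace(ENVELOPE, "");
}
```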

Source: https://github.com/openclaw/openclaw/issues/13989

Score: 8/10 (keywords: openclaw, multi-agent)


[8/10] openclaw/openclaw#21241 Memory system ignores memorySearch.remote config, hardcodes OpenAI API calls

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 21241 · url: https://github.com/openclaw/openclaw/issues/21241 · tags: [github, openclaw]

[21241] Memory system ignores memorySearch.remote config, hardcodes OpenAI API calls

URL: https://github.com/openclaw/openclaw/issues/21241

Issue body

Bug Report: Memory System Configuration Ignored

Summary

The memory system is configured to use a Gemini proxy via settings, but the implementation still hardcodes OpenAI embeddings API calls, causing 401 authentication errors when OPENAI_API_KEY is not provided.

Environment

  • Platform: openclaw-cloud production deployment
  • OpenClaw Version: Latest (update available npm 2026.2.19-2)
  • Affected Instances: All customer bots (prime-heron, sharp-otter confirmed)

Configuration

// openclaw.json - Memory system configured for Gemini proxy
"memorySearch": {
  "enabled": true,
  "remote": {
    "baseUrl": "http://api-router:9090/gemini",
    "apiKey": "GEMINI_PROXY_KEY",
    "batch": {
      "enabled": false
    }
  }
}
# gateway.env - No OpenAI key (as expected with proxy config)
ANTHROPIC_OAUTH_TOKEN=irt_...
GEMINI_PROXY_KEY=irt_...
GEMINI_API_KEY=irt_...
# OPENAI_API_KEY missing (should not be needed with proxy)

Error Logs

2026-02-19T18:24:13.962Z [memory] sync failed (session-start): Error: openai embeddings failed: 401 {"error":"Missing or invalid internal token"}
2026-02-19T18:25:00.580Z [memory] sync failed (watch): Error: openai embeddings failed: 401 {"error":"Missing or invalid internal token"}

Impact

  • Memory search completely broken - cannot find previous conversations
  • Memory sync failing - losing conversation history
  • Context degradation - bots cannot reference past interactions
  • Affects all deployed customer bots

Root Cause

The memory plugin implementation appears to ignore the memorySearch.remote configuration and directly calls OpenAI embeddings API instead of using the configured proxy endpoint.

Expected Behavior

When memorySearch.remote.baseUrl is configured, the memory system should:

  1. Use the specified proxy endpoint (http://api-router:9090/gemini)
  2. Use the specified API key (GEMINI_PROXY_KEY)
  3. NOT attempt direct OpenAI API calls

Actual Behavior

Memory syste...

Source: https://github.com/openclaw/openclaw/issues/21241

Score: 8/10 (keywords: openclaw, anthropic)


[8/10] openclaw/openclaw#21261 [Bug]: Custom Provider Anthropic-Compatible 404 Error Due to Mismatched /v1 Behavior in Setup and Usage

Original

date: 2026-02-20 · source: github · repo: openclaw/openclaw · issue: 21261 · url: https://github.com/openclaw/openclaw/issues/21261 · tags: [github, openclaw]

[21261] [Bug]: Custom Provider Anthropic-Compatible 404 Error Due to Mismatched /v1 Behavior in Setup and Usage

URL: https://github.com/openclaw/openclaw/issues/21261

Source: https://github.com/openclaw/openclaw/issues/21261

Score: 8/10 (keywords: openclaw, anthropic)


[8/10] openclaw/openclaw#20463 Multi-agent: 'Session file path must be within sessions directory' for non-main agents

Original

date: 2026-02-19 · source: github · repo: openclaw/openclaw · issue: 20463 · url: https://github.com/openclaw/openclaw/issues/20463 · tags: [github, openclaw]

[20463] Multi-agent: 'Session file path must be within sessions directory' for non-main agents

URL: https://github.com/openclaw/openclaw/issues/20463

Issue body

Environment

  • OpenClaw version: 2026.2.12 (f9e444d)
  • OS: macOS 26.3 (arm64, M1 Max)
  • Node: v22.13.1

Description

When configuring a second agent (perry) with its own Discord bot account, the agent's bot connects successfully ([perry] starting provider (@Perry)) but every message handler fails with:

[discord] handler failed: Error: Session file path must be within sessions directory

The main agent (BenderEcho) works fine. Only non-main agents hit this error.

Steps to Reproduce

  1. Add a second agent: openclaw agents add perry --workspace ~/.openclaw/agents/perry/workspace --bind discord:perry --non-interactive

  2. Configure Discord multi-account in openclaw.json:

{
  "channels": {
    "discord": {
      "accounts": {
        "default": { "token": "...", "guilds": { ... } },
        "perry": { "token": "...", "guilds": { ... } }
      }
    }
  },
  "bindings": [
    { "agentId": "main", "match": { "channel": "discord", "accountId": "default" } },
    { "agentId": "perry", "match": { "channel": "discord", "accountId": "perry" } }
  ]
}

  3. Copy auth profiles to perry's agentDir

  4. Restart gateway
  5. @mention Perry's bot in Discord

Expected

Perry's agent handles the message and responds.

Actual

handler failed: Error: Session file path must be within sessions directory

Every time. Perry's bot connects ([perry] starting provider (@Perry)) but cannot process any messages.

Investigation

  • The validation in paths-*.js does:

const resolvedBase = path.resolve(sessionsDir);
const resolvedCandidate = path.resolve(resolvedBase, trimmed);
const relative = path.relative(resolvedBase, resolvedCandidate);
if (relative.startsWith('..') || path.isAbsolute(relative)) throw ...
  • Perry's sessions.json stores absolute session file paths (e.g. /Users/x/.openclaw/agents/perry/sessions/uuid.jsonl)
  • The paths are valid and within perry's sessio...
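The quoted validation can be reproduced with Node's path module. It accepts an absolute candidate that lies inside the base directory, which supports the reporter's point that the failure plausibly comes from validating against the wrong sessionsDir rather than from the check itself (a hypothesis; the issue text above is truncated):

```typescript
import * as path from "node:path";

// Reconstruction of the quoted containment check. An absolute path inside
// the base passes; the same path against a different agent's base fails.
function isWithinSessions(sessionsDir: string, candidate: string): boolean {
  const resolvedBase = path.resolve(sessionsDir);
  const resolvedCandidate = path.resolve(resolvedBase, candidate);
  const relative = path.relative(resolvedBase, resolvedCandidate);
  return !relative.startsWith("..") && !path.isAbsolute(relative);
}
```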

Source: https://github.com/openclaw/openclaw/issues/20463

Score: 8/10 (keywords: openclaw, multi-agent)


[7/10] openclaw/openclaw#20015 Access Qwen3.5 & Latest Models Same-Day via Higress | OpenClaw same-day integration of Qwen3.5 and other new models (Higress integration)

Original

date: 2026-02-18 · source: github · repo: openclaw/openclaw · issue: 20015 · url: https://github.com/openclaw/openclaw/issues/20015 · tags: [github, openclaw]

[20015] Access Qwen3.5 & Latest Models Same-Day via Higress | OpenClaw same-day integration of Qwen3.5 and other new models (Higress integration)

URL: https://github.com/openclaw/openclaw/issues/20015

Issue body

Problem

OpenClaw currently has a hardcoded model list for each provider. When new models are released (like Qwen3.5, GLM-5, MiniMax M2.5), users must wait for an official release to use them.

For example:

  • Setting model: qwen/qwen3.5-plus results in Error: Unknown model
  • Setting model: zai/glm-5 or model: minimax/minimax-m25 also fails
  • The default models are hardcoded

With models releasing at a rapid pace (Qwen3.5 just launched with industry-leading price-performance; MiniMax shipped M2, M2.1, and M2.5 in just 108 days), waiting for official support is impractical.

Solution

I've created a Higress Integration Skill that allows OpenClaw users to access any new model immediately through Higress AI Gateway, without waiting for OpenClaw updates.

Key Benefits:

  • Instant model support: any OpenAI-compatible API can be integrated immediately
  • Hot reload: add/update models without restarting the OpenClaw gateway
  • Conversation-based config: just talk to OpenClaw to add models
  • Fastest model switching: OpenClaw can switch to new models like Qwen3.5 in minutes, not days

Why Qwen3.5?

  • Best price-performance ratio: Qwen3.5 offers GPT-4 level performance at a fraction of the cost
  • Strong Chinese language support: optimized for Chinese contexts and workflows
  • Excellent vision capabilities: industry-leading image and video understanding
  • Latest capabilities: launched Feb 2026 with cutting-edge reasoning and coding abilities
  • Higress native integration: seamlessly routed through the Higress gateway

Quick Start

Just send this message to OpenClaw:

Please install this skill and use it to configure Higress:
https://github.com/alibaba/higress/tree/main/.claude/skills/higress-openclaw-integration

OpenClaw will automatically:

  1. Install the Higress Integration Skill
  2. Deploy Higress AI Gateway
  3. Configure your specified model providers
  4. Enable the Higress plugin

After configuration, you can use Qwen3.5, GLM-5, or MiniMax ...

Source: https://github.com/openclaw/openclaw/issues/20015

Score: 7/10 (keywords: openclaw, claude)


[7/10] openclaw/openclaw#21999 perf: 150k+ token system prompt investigation — upstream analysis, cost impact, and optimization plan

Original

date: 2026-02-20 · source: github · repo: openclaw/openclaw · issue: 21999 · url: https://github.com/openclaw/openclaw/issues/21999 · tags: [github, openclaw]

[21999] perf: 150k+ token system prompt investigation — upstream analysis, cost impact, and optimization plan

URL: https://github.com/openclaw/openclaw/issues/21999

Issue body

Summary

Our SELD system sends ~166k input tokens per API call to Gemini 3.1 Pro on the first turn of a conversation. This was observed in Google AI Studio showing 166,669 input tokens with only 29 output tokens. This issue documents the investigation, upstream comparison, cost impact, and optimization plan.

TL;DR: This is intentional upstream (OpenClaw) architecture, not a bug in our fork. The upstream project designed these limits for Claude's 200k context window. Multiple upstream issues confirm the community recognizes this as problematic but it remains unresolved. We have clear optimization opportunities.


Investigation: Image Analysis

The Google AI Studio screenshot shows:

  • Model: models/gemini-3.1-pro-preview
  • Input tokens: 166,669
  • Output tokens: 29
  • Total tokens: 166,698
  • Tools: Function calling enabled

The system instruction begins with: "You are a personal assistant running inside Seld. ## Tooling. Tool availability (filtered by policy)..." — this is the full output of buildAgentSystemPrompt() plus bootstrap files plus tool schemas.


Root Cause Analysis: Prompt Assembly Architecture

The system prompt is assembled in src/agents/system-prompt.ts by buildAgentSystemPrompt() and includes these components:

Token Budget Breakdown (estimated first-turn)

| Component | Max Chars | Est. Tokens | Source |
|---|---|---|---|
| Hardcoded system prompt sections (Tooling, Safety, Shell, Context Persistence, Credentials, CLI, Skills header, Memory, Docs, Reply Tags, Messaging, Silent Replies, Heartbeats, Runtime) | ~12,000 | ~3,000 | system-prompt.ts |
| Skills prompt (56 bundled skills formatted as XML blocks) | 30,000 max | ~7,500 | skills/workspace.ts DEFAULT_MAX_SKILLS_PROMPT_CHARS |
| Bootstrap files (AGENTS.md, SOUL.md, TOOLS.md, IDENTITY.md, USER.md, HEARTBEAT.md, BOOTSTRAP.md, MEMORY.md) | 150,000 max total (20,000 per file) | ~37,500 | `boots... |
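The token estimates above all follow the common ~4 characters/token heuristic; as a quick check:

```typescript
// The table's token estimates follow the common ~4 characters/token heuristic.
const estTokens = (chars: number): number => Math.round(chars / 4);

console.log(estTokens(12_000), estTokens(30_000), estTokens(150_000)); // 3000 7500 37500
```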

Solution/Comments

@olivier-motium

Opened on wrong repo by mistake. Moved to Motium-AI/seld-core.

Source: https://github.com/openclaw/openclaw/issues/21999

Score: 7/10 — openclaw, claude


[7/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22018 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22018 url: "https://github.com/openclaw/openclaw/issues/22018" tags: [github, openclaw]


[22018] [Feature] Webhook hooks: support routing to persistent session instead of spawning new ones

URL: https://github.com/openclaw/openclaw/issues/22018

Issue body

Problem

Each webhook hook hit (action: "agent") spawns a new isolated session with full system prompt context (~20k+ tokens). For high-frequency webhooks (e.g., Nostr bridge relaying group messages, Linear updates, CI notifications), this creates:

  • Massive token waste — each hook session loads full system prompt + tools, only to often reply NO_REPLY
  • No shared context — each session is isolated, so the agent cannot correlate related events (e.g., 3 Linear updates in 30 seconds about the same issue)
  • Lane congestion — relates to #12423, hook sessions queue up and cause multi-minute delays
  • Session sprawl — hundreds of orphaned hook sessions accumulate (mitigated by #3910 TTL cleanup, but still wasteful)

Real-world example: a Nostr bridge webhook was generating ~50 hook sessions/hour, each burning tokens just to produce NO_REPLY. This was the primary cost driver on the gateway.

Proposed Solution

Add a new hook action (e.g., "action": "session_send") that routes webhook payloads into an existing named/persistent session instead of spawning a new one:

{
  "id": "linear",
  "match": { "path": "linear" },
  "action": "session_send",
  "targetSession": "webhook-router",
  "messageTemplate": "Linear: {{payload.action}} on {{payload.type}} — {{payload.data.title}}"
}

Benefits:

  • Single context that accumulates hook history and can batch/deduplicate
  • Agent decides what is worth forwarding vs ignoring, with full conversational context
  • Dramatically fewer tokens for high-frequency hooks
  • No lane congestion from parallel isolated sessions
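As an illustration of how the proposed messageTemplate could be expanded (a hypothetical sketch; the issue does not specify the actual templating engine):

```typescript
// Hypothetical expansion of {{payload.*}} placeholders in a messageTemplate;
// the real implementation may differ.
function renderTemplate(template: string, payload: Record<string, unknown>): string {
  return template.replace(/\{\{payload\.([\w.]+)\}\}/g, (_match, path: string) => {
    // Walk the dotted path into the webhook payload.
    let cur: unknown = payload;
    for (const key of path.split(".")) {
      cur = cur !== null && typeof cur === "object" ? (cur as Record<string, unknown>)[key] : undefined;
    }
    return cur === undefined ? "" : String(cur);
  });
}

// Using the example mapping from the proposal:
renderTemplate("Linear: {{payload.action}} on {{payload.type}} — {{payload.data.title}}",
  { action: "update", type: "Issue", data: { title: "Fix login" } });
```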

Alternatives Considered

  • disabled: true on hook mappings — current workaround, but loses the webhook entirely
  • External middleware that buffers and forwards to a session via API — works but adds infra complexity
  • #15841 silent ingest — solves the group chat case but not arbitrary webhook hooks

Related

  • 12423 — Lane congestion from hook session q...

Solution/Comments

@k0sti

Opened prematurely — closing.

Source: https://github.com/openclaw/openclaw/issues/22018

Score: 7/10 — openclaw


[7/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19808 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19808 url: "https://github.com/openclaw/openclaw/issues/19808" tags: [github, openclaw]


[19808] Access Qwen3.5 & Latest Models Same-Day via Higress | OpenClaw 当天接入 Qwen3.5 等最新模型(Higress 集成)

URL: https://github.com/openclaw/openclaw/issues/19808

Issue body

Problem

OpenClaw currently has a hardcoded model list for each provider. When new models are released (like Qwen3.5, GLM-5, MiniMax M2.5), users must wait for an official release to use them.

For example:

  • Setting model: qwen/qwen3.5-plus results in Error: Unknown model
  • Setting model: zai/glm-5 or model: minimax/minimax-m25 also fails
  • The default models are hardcoded

With models releasing at a rapid pace (Qwen3.5 just launched with industry-leading price-performance, and MiniMax shipped M2, M2.1, and M2.5 in just 108 days), waiting for official support is impractical.

Solution

I've created a Higress Integration Skill that allows OpenClaw users to access any new model immediately through Higress AI Gateway, without waiting for OpenClaw updates.

Key Benefits:

  • Instant model support: Any OpenAI-compatible API can be integrated immediately
  • Hot reload: Add/update models without restarting the OpenClaw gateway
  • Conversation-based config: Just talk to OpenClaw to add models
  • Fastest model switching: OpenClaw can switch to new models like Qwen3.5 in minutes, not days

Why Qwen3.5?

  • Best price-performance ratio: Qwen3.5 offers GPT-4 level performance at a fraction of the cost
  • Strong Chinese language support: Optimized for Chinese contexts and workflows
  • Excellent vision capabilities: Industry-leading image and video understanding
  • Latest capabilities: Launched Feb 2026 with cutting-edge reasoning and coding abilities
  • Higress native integration: Seamlessly routed through the Higress gateway

Quick Start

Just send this message to OpenClaw:

Please install this skill and use it to configure Higress:
https://github.com/alibaba/higress/tree/main/.claude/skills/higress-openclaw-integration

OpenClaw will automatically:

  1. Install the Higress Integration Skill
  2. Deploy Higress AI Gateway
  3. Configure your specified model providers
  4. Enable the Higress plugin

After configuration, you can use Qwen3.5, GLM-5, or MiniMax ...

Source: https://github.com/openclaw/openclaw/issues/19808

Score: 7/10 — openclaw, claude


[7/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19906 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19906 url: "https://github.com/openclaw/openclaw/issues/19906" tags: [github, openclaw]


[19906] Access Qwen3.5 & Latest Models Same-Day via Higress | OpenClaw 当天接入 Qwen3.5 等最新模型(Higress 集成)

URL: https://github.com/openclaw/openclaw/issues/19906

Issue body

Problem

OpenClaw currently has a hardcoded model list for each provider. When new models are released (like Qwen3.5, GLM-5, MiniMax M2.5), users must wait for an official release to use them.

For example:

  • Setting model: qwen/qwen3.5-plus results in Error: Unknown model
  • Setting model: zai/glm-5 or model: minimax/minimax-m25 also fails
  • The default models are hardcoded

With models releasing at a rapid pace (Qwen3.5 just launched with industry-leading price-performance, and MiniMax shipped M2, M2.1, and M2.5 in just 108 days), waiting for official support is impractical.

Solution

I've created a Higress Integration Skill that allows OpenClaw users to access any new model immediately through Higress AI Gateway, without waiting for OpenClaw updates.

Key Benefits:

  • Instant model support: Any OpenAI-compatible API can be integrated immediately
  • Hot reload: Add/update models without restarting the OpenClaw gateway
  • Conversation-based config: Just talk to OpenClaw to add models
  • Fastest model switching: OpenClaw can switch to new models like Qwen3.5 in minutes, not days

Why Qwen3.5?

  • Best price-performance ratio: Qwen3.5 offers GPT-4 level performance at a fraction of the cost
  • Strong Chinese language support: Optimized for Chinese contexts and workflows
  • Excellent vision capabilities: Industry-leading image and video understanding
  • Latest capabilities: Launched Feb 2026 with cutting-edge reasoning and coding abilities
  • Higress native integration: Seamlessly routed through the Higress gateway

Quick Start

Just send this message to OpenClaw:

Please install this skill and use it to configure Higress:
https://github.com/alibaba/higress/tree/main/.claude/skills/higress-openclaw-integration

OpenClaw will automatically:

  1. Install the Higress Integration Skill
  2. Deploy Higress AI Gateway
  3. Configure your specified model providers
  4. Enable the Higress plugin

After configuration, you can use Qwen3.5, GLM-5, or MiniMax ...

Source: https://github.com/openclaw/openclaw/issues/19906

Score: 7/10 — openclaw, claude


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20617 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20617 url: "https://github.com/openclaw/openclaw/issues/20617" tags: [github, openclaw]


[20617] Feature Request: Auto-resolve Discord user mentions from <@id> to @username

URL: https://github.com/openclaw/openclaw/issues/20617

Source: https://github.com/openclaw/openclaw/issues/20617

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22004 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22004 url: "https://github.com/openclaw/openclaw/issues/22004" tags: [github, openclaw]


[22004] bug(devvm): olivier-seld instance choking on complex tasks — OOM + CLI runner hang cascade

URL: https://github.com/openclaw/openclaw/issues/22004

Issue body

Summary

The olivier-seld DevVM instance (EC2 t3.small, 2 GiB RAM) fails silently on complex agent tasks while trivial tasks (echo test) succeed. A 4-agent deep analysis identified 5 compounding failure modes forming a kill chain that makes complex prompt processing impossible on the current hardware.

Reported by: Gaston (via WhatsApp, 2026-02-20)
Instance: olivier.gateway.seld.ai / i-04ee1c0a0d87a9b5b / eu-west-3


Observed Symptoms

  1. Small test task (echo test 2) — completed in 60s ✅
  2. Full 5-layer forensic analysis prompt — session created, no logs, hangs ❌
  3. Shorter prompt (GitHub issue link only) — session created, no logs, hangs ❌
  4. Agent command cd /opt/seld failed: sh: 2: cd: can't cd to /opt/seld

Root Cause Analysis (4-Agent Deep Audit)

Bug 1: /opt/seld Host-vs-Container Context Confusion

Finding: /opt/seld/ exists ONLY on the EC2 host filesystem. Inside the Docker container, the app lives at /app (Dockerfile WORKDIR /app), with user data mounted at /home/node/.seld. There is no volume mount for /opt/seld into the container.

Impact: Any agent/skill that tries to cd /opt/seld will fail immediately. The deployment scripts (SSM deploy, diagnose.yml) work because they run on the HOST via SSM, not inside the container.

Evidence: docs/INFRA.md, Dockerfile:93 (WORKDIR /app), docker-compose.yml (volumes section), .github/scripts/ssm-deploy.sh


Bug 2: Node.js Heap Exceeds Physical Memory — Guaranteed OOM

Finding: Node.js 22 defaults to ~2 GB old-space heap. The t3.small has 2 GiB total RAM, with ~365 MB consumed by OS + 5 other containers (Caddy, oauth2-proxy, Watchtower, docker-socket-proxy, Docker daemon). Only ~1.0-1.7 GiB is available for the seld-gateway container.

No --max-old-space-size is set in the production container's CMD. The CI already uses NODE_OPTIONS=--max-old-space-size=4096 because even CI runners hit OOM (issue #17341)...
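One mitigation is capping Node's old-space below the container's memory share; a sketch (the 1024 MB value is illustrative for a 2 GiB t3.small, not a project-recommended setting):

```shell
# Cap Node's old-space below the container's memory share. The 1024 MB value
# is an assumption for this hardware, not a project setting.
export NODE_OPTIONS="--max-old-space-size=1024"

# Verify the ceiling took effect (prints the V8 heap limit in MB):
node -e 'console.log(Math.round(require("v8").getHeapStatistics().heap_size_limit / 1024 / 1024))'
```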

Solution/Comments

@olivier-motium

Opened on wrong repo — moved to Motium-AI/seld-core.

Source: https://github.com/openclaw/openclaw/issues/22004

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20681 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20681 url: "https://github.com/openclaw/openclaw/issues/20681" tags: [github, openclaw]


[20681] Plugin: Mobile proxy infrastructure + peer bandwidth earning for OpenClaw agents

URL: https://github.com/openclaw/openclaw/issues/20681

Solution/Comments

@bolivian-peru

Clarifying the core value: what this actually gives OpenClaw agents today

Re-reading my original post, I buried the lead under the peer concept. Let me be direct about what matters most:

The real problem: OpenClaw agents can't browse the real web

When your assistant tries to check a price on Amazon, monitor a competitor's site, verify an ad, research a company, or extract data from any major platform — it gets blocked. CAPTCHAs, 403s, Cloudflare challenges, shadow-banned results. This happens because every serious website fingerprints and blocks datacenter/cloud IPs on sight.

This is the #1 blocker for "the AI that actually does things" on the web.

What we provide: two production services, live right now

1. Mobile Proxy API — real carrier IPs, pay-per-use

150+ active 4G/5G modems across 6 countries. HTTP + SOCKS5. No accounts needed — pay USDC per GB and get instant credentials.

# Your agent gets a real T-Mobile/Vodafone IP
curl -x http://user:pass@s...


Source: https://github.com/openclaw/openclaw/issues/20681

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20018 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20018)

---
date: 2026-02-18
source: github
repo: openclaw/openclaw
issue: 20018
url: "https://github.com/openclaw/openclaw/issues/20018"
tags: [github, openclaw]
---

# [20018] Add support for Claude Sonnet 4.6 (claude-sonnet-4-6-20260218)

**URL:** https://github.com/openclaw/openclaw/issues/20018


Source: https://github.com/openclaw/openclaw/issues/20018

Score: 6/10 — openclaw, claude

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22202 url: "h

[Original](https://github.com/openclaw/openclaw/issues/22202)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 22202
url: "https://github.com/openclaw/openclaw/issues/22202"
tags: [github, openclaw, duplicate, close:duplicate, dedupe:child]
---

# [22202] [Bug] Webchat displays "Conversation info (untrusted metadata)" envelope visibly in chat UI

**URL:** https://github.com/openclaw/openclaw/issues/22202
**Labels:** duplicate, close:duplicate, dedupe:child

## Issue body

## Summary

In the webchat interface, the internal message metadata envelope — which is intended to be invisible context for the agent — is rendered visibly in the chat window for the user.

## Symptom

Every user message in webchat is prefixed with a visible block like:

```
Conversation info (untrusted metadata): { "message_id": "abc123...", "sender": "gateway-client" } [Fri 2026-02-20 16:33 EST]
```


This is a UI rendering issue — the metadata block should be stripped from the visible message display and only passed to the agent as hidden context.
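A minimal sketch of the client-side strip the reporter is asking for, assuming the envelope shape shown above (the real format may differ):

```typescript
// Hypothetical client-side strip of the metadata envelope before rendering;
// assumes the envelope shape shown above (the real format may differ).
function stripEnvelope(message: string): string {
  // Drop a leading "Conversation info (untrusted metadata): {...} [timestamp]" block.
  return message.replace(/^Conversation info \(untrusted metadata\):[\s\S]*?\]\s*/, "");
}
```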

## Environment

- OpenClaw CLI version: 2026.2.17
- macOS app version: 2026.2.14
- OS: macOS (Apple Silicon, Darwin 25.3.0)
- Interface: Webchat (Control UI)

## Steps to Reproduce

1. Open the webchat interface via the macOS app or browser at `http://127.0.0.1:18789`
2. Send any message
3. Observe the raw metadata envelope appearing above each message in the chat UI

## Expected Behavior

The metadata envelope should be invisible to the user — used only as out-of-band context for the agent.

## Actual Behavior

The raw JSON metadata block is rendered as visible text at the top of every user message in the chat window.

## Notes

This issue is being reported by multiple users on the community Discord (see Answer Overflow thread: https://www.answeroverflow.com/m/1474070683846971648 — posted Feb 19, 2026). No workaround exists currently beyond switching to a different channel (e.g. Telegram).

## Solution/Comments

### @vincentkoc

Thanks for the report. Closing this as a duplicate of #22142.

This is the canonical fix for inbound metadata leaking into user-visible chat history.
If this is a different failure mode, please say so and we’ll reopen it.


Source: https://github.com/openclaw/openclaw/issues/22202

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 7291 url: "ht

[Original](https://github.com/openclaw/openclaw/issues/7291)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 7291
url: "https://github.com/openclaw/openclaw/issues/7291"
tags: [github, openclaw]
---

# [7291] [Feature]: Slack: Support per-agent identity posting (username + icon_emoji)

**URL:** https://github.com/openclaw/openclaw/issues/7291

## Solution/Comments

### @benstallwood3

why is this not planned?

### @benstallwood3

`deliverReplies()` calls `sendMessageSlack()` without passing identity:

```typescript
await sendMessageSlack(params.target, trimmed, {
    token: params.token,
    threadTs,
    accountId: params.accountId
    // identity is MISSING here!
});
```

All we need is identity 🙏. It's a bug, it's implemented.


Source: https://github.com/openclaw/openclaw/issues/7291

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 17700 url: "h

[Original](https://github.com/openclaw/openclaw/issues/17700)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 17700
url: "https://github.com/openclaw/openclaw/issues/17700"
tags: [github, openclaw]
---

# [17700] feat: atomic config management with validation and crash-loop rollback

**URL:** https://github.com/openclaw/openclaw/issues/17700


Source: https://github.com/openclaw/openclaw/issues/17700

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20809 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20809)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20809
url: "https://github.com/openclaw/openclaw/issues/20809"
tags: [github, openclaw, trusted-contributor]
---

# [20809] [Bug]: Hooks Token Reuse Creates Expanded Attack Surface

**URL:** https://github.com/openclaw/openclaw/issues/20809
**Labels:** trusted-contributor

## Issue body

## Summary
The OpenClaw security audit warns when the hooks token matches the gateway authentication token, but does not block this configuration. This creates an expanded attack surface where compromise of the hooks endpoint grants full gateway access.

## Executive Risk Snapshot
- CVSS v3.1: 7.2 (High)
- CVSS v4.0: 6.9 (Medium)
- Primary risk: The OpenClaw security audit warns when the hooks token matches the gateway authentication token, but does not block this configuration.

## Technical Analysis
The security audit in `audit-extra.sync.ts` detects when `hooks.token` equals `gateway.auth.token` and records a `warn`-severity finding, but this check is advisory only — the application starts normally regardless of the outcome. Because both tokens are accepted by their respective authentication handlers independently, an attacker who obtains the hooks token (via log exposure, a compromised webhook sender, or network interception) can immediately authenticate against the Gateway WebSocket/API without any additional privilege escalation step.

The root cause is the absence of an enforcement boundary between the two token namespaces. The warn path proves the condition is detected and understood, yet no startup-time rejection prevents the misconfiguration from persisting into production. Remediation requires elevating the finding to `severity: "critical"` and adding logic that causes the application to refuse startup when this condition is detected, or enforcing token separation as a structural invariant in the configuration schema.

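A minimal sketch of that startup-time rejection, with hypothetical types and names (the real audit plumbing in `audit-extra.sync.ts` differs):

```typescript
// Sketch of the proposed enforcement: elevate the finding to critical and
// refuse startup. Types and names here are hypothetical, not the real API.
type Finding = { checkId: string; severity: "info" | "warn" | "critical"; title: string };

function auditTokenReuse(hooksToken?: string, gatewayToken?: string): Finding[] {
  const findings: Finding[] = [];
  if (hooksToken && gatewayToken && hooksToken === gatewayToken) {
    findings.push({
      checkId: "hooks.token_reuse_gateway_token",
      severity: "critical", // elevated from "warn" per the remediation
      title: "Hooks token reuses the Gateway token",
    });
  }
  return findings;
}

function assertStartupSafe(findings: Finding[]): void {
  const fatal = findings.filter((f) => f.severity === "critical");
  if (fatal.length > 0) {
    // Structural invariant: refuse to boot rather than warn and continue.
    throw new Error(`refusing startup: ${fatal.map((f) => f.checkId).join(", ")}`);
  }
}
```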
## Affected Code
**File:** `src/security/audit-extra.sync.ts:441-450`
```typescript
if (token && gatewayToken && token === gatewayToken) {
  findings.push({
    checkId: "hooks.token_reuse_gateway_token",
    severity: "warn",
    title: "Hooks token reuses the Gateway token",
    detail:
      "hooks.token matches gateway.auth token; compromise of hooks expands blast radius to the Gateway API.",
    remediation: "Use a separate hooks.token...
```


Source: https://github.com/openclaw/openclaw/issues/20809

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 15392 url: "h

[Original](https://github.com/openclaw/openclaw/issues/15392)

---
date: 2026-02-21
source: github
repo: openclaw/openclaw
issue: 15392
url: "https://github.com/openclaw/openclaw/issues/15392"
tags: [github, openclaw]
---

# [15392] [Feature]: NVIDIA PersonaPlex Integration (Real Time Voice Conversations)

**URL:** https://github.com/openclaw/openclaw/issues/15392


Source: https://github.com/openclaw/openclaw/issues/15392

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20158 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20158)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20158
url: "https://github.com/openclaw/openclaw/issues/20158"
tags: [github, openclaw]
---

# [20158] Forum topic session paths fail path traversal check (silently drops messages)

**URL:** https://github.com/openclaw/openclaw/issues/20158


Source: https://github.com/openclaw/openclaw/issues/20158

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20616 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20616)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20616
url: "https://github.com/openclaw/openclaw/issues/20616"
tags: [github, openclaw]
---

# [20616] Control UI shows 'Conversation info (untrusted metadata)' block for every user message

**URL:** https://github.com/openclaw/openclaw/issues/20616


Source: https://github.com/openclaw/openclaw/issues/20616

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22302 url: "h

[Original](https://github.com/openclaw/openclaw/issues/22302)

---
date: 2026-02-21
source: github
repo: openclaw/openclaw
issue: 22302
url: "https://github.com/openclaw/openclaw/issues/22302"
tags: [github, openclaw]
---

# [22302] Discord: resolve channel/user/role mentions to human-readable names

**URL:** https://github.com/openclaw/openclaw/issues/22302

## Solution/Comments

### @cto7182

Closing — filed prematurely. Apologies for the noise.


Source: https://github.com/openclaw/openclaw/issues/22302

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20288 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20288)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20288
url: "https://github.com/openclaw/openclaw/issues/20288"
tags: [github, openclaw]
---

# [20288] [Bug] `cron run` WS timeout in v2026.2.17 — gateway health works, cron run does not

**URL:** https://github.com/openclaw/openclaw/issues/20288


Source: https://github.com/openclaw/openclaw/issues/20288

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22218 url: "h

[Original](https://github.com/openclaw/openclaw/issues/22218)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 22218
url: "https://github.com/openclaw/openclaw/issues/22218"
tags: [github, openclaw]
---

# [22218] Slack extension plugin ignores threadId when reading messages

**URL:** https://github.com/openclaw/openclaw/issues/22218

## Issue body

## Bug

When using the `message` tool with `action: "read"` and a `threadId` parameter, channel-level messages are returned instead of thread replies.

## Root Cause

The Slack **extension** plugin (`extensions/slack/src/channel.ts`) calls `handleSlackMessageAction` without setting `includeReadThreadId: true`. The built-in Slack adapter (`channels/plugins/slack.actions.ts`) correctly sets this flag, but the extension overrides the built-in at runtime.

Since `includeReadThreadId` defaults to `false` in `handleSlackMessageAction`, the `threadId` parameter is silently ignored and `conversations.history` (channel messages) is called instead of `conversations.replies` (thread replies).
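The flag's effect can be reduced to a few lines (hypothetical signatures; the real `handleSlackMessageAction` differs):

```typescript
// Hypothetical reduction of the dispatch described above; real signatures in
// handleSlackMessageAction differ.
type ReadOpts = { channelId: string; threadId?: string; includeReadThreadId?: boolean };

function slackApiMethod(opts: ReadOpts): "conversations.replies" | "conversations.history" {
  // threadId is honored only when includeReadThreadId is true (defaults to
  // false), so the extension's call silently falls back to channel history.
  return opts.includeReadThreadId === true && opts.threadId !== undefined
    ? "conversations.replies"
    : "conversations.history";
}
```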

## Steps to Reproduce

1. Have the Slack extension plugin active (overrides built-in adapter)
2. Call `message` tool with `action: "read"`, `channelId`, and `threadId` set to a thread timestamp
3. Observe that channel-level messages are returned instead of thread replies

## Expected Behavior

Thread replies should be returned when `threadId` is provided.

## Fix

PR #22216 — one-line change adding `includeReadThreadId: true` to the extension's `handleAction` call.

## Solution/Comments

### @lan17

Recreating with proper bug report template.


Source: https://github.com/openclaw/openclaw/issues/22218

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20714 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20714)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20714
url: "https://github.com/openclaw/openclaw/issues/20714"
tags: [github, openclaw]
---

# [20714] bug(discord): close code 4014 (missing Privileged Gateway Intents) crashes entire gateway process

**URL:** https://github.com/openclaw/openclaw/issues/20714

## Issue body

## Summary

When a Discord bot is missing **Privileged Gateway Intents** (e.g. Message Content Intent), Discord sends WebSocket close code **4014**. OpenClaw does not catch this error — it propagates as an **uncaught exception** that kills the **entire gateway process**, taking down all channels (Telegram, Slack, WhatsApp, etc.) alongside Discord.

This is a critical reliability issue: a misconfigured Discord bot should only disable the Discord channel, not crash the whole bot.

---

## Error

```
[openclaw] Uncaught exception: Error: Fatal Gateway error: 4014
    at GatewayPlugin.handleReconnectionAttempt (.../@buape/carbon/GatewayPlugin.ts:420:7)
    at GatewayPlugin.handleClose (.../@buape/carbon/GatewayPlugin.ts:469:8)
```


---

## When It Is Encountered

- User enables the Discord channel plugin in OpenClaw
- The Discord bot token is valid, but one or more **Privileged Gateway Intents** are not enabled in the Discord Developer Portal (e.g. **Message Content Intent**, **Server Members Intent**, **Presence Intent**)
- OpenClaw connects to Discord; Discord immediately sends close code 4014 during handshake
- The `@buape/carbon` `GatewayPlugin.handleReconnectionAttempt` throws `Fatal Gateway error: 4014`
- This throw happens **before** OpenClaw attaches an error listener to the gateway emitter
- Node.js treats unhandled `EventEmitter` `"error"` events as uncaught exceptions → process exits
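The last step is standard Node behavior; a minimal sketch of the graceful alternative (attach the `"error"` listener before anything can emit):

```typescript
import { EventEmitter } from "node:events";

// An "error" event with no listener is rethrown and, uncaught, kills the
// process. Attaching the listener before anything can emit closes that window.
const gateway = new EventEmitter();
gateway.on("error", (err: Error) => {
  console.error(`discord channel disabled: ${err.message}`); // degrade, don't die
});
gateway.emit("error", new Error("Fatal Gateway error: 4014")); // handled; process survives
```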

---

## How to Reproduce

1. Go to [Discord Developer Portal](https://discord.com/developers/applications) → your application → **Bot**
2. Under **Privileged Gateway Intents**, **disable** Message Content Intent (or leave it disabled)
3. Configure OpenClaw with that bot token:
```json5
channels: {
  discord: {
    enabled: true,
    token: "<your-bot-token>",
    // ...
  }
}
```
4. Start or restart the OpenClaw gateway
5. Result: Gateway process exits with `Uncaught exception: Error: Fatal Gateway error: 4014`
6. Expected: Discord channel fails gracefully with an actionable...

Source: https://github.com/openclaw/openclaw/issues/20714

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20371 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20371 url: "https://github.com/openclaw/openclaw/issues/20371" tags: [github, openclaw]


[20371] Feature: Real-time metrics WebSocket endpoint + dashboard

URL: https://github.com/openclaw/openclaw/issues/20371

Issue body

Problem

No live visibility into gateway performance, token usage, latency, or error rates. Metrics are only available through manual log parsing or external scripts.

Proposed Solution

  1. WebSocket metrics endpoint (/ws/metrics) - push real-time stats to connected clients
  2. Metrics collected: token usage per model, request latency, error rates, active sessions, worker status, memory usage
  3. Dashboard component - live-updating charts (line charts for latency/throughput, gauges for resource usage)
  4. History - retain metrics for at least 24h for trend analysis
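A sketch of the kind of per-interval snapshot such an endpoint could push (field names are assumptions, not the issue's spec):

```typescript
// Hypothetical per-interval metrics snapshot a /ws/metrics endpoint could
// push; field names are assumptions, not the issue's spec.
interface MetricsSnapshot {
  ts: number;
  tokensPerModel: Record<string, number>;
  p50LatencyMs: number;
  errorRate: number;
  activeSessions: number;
}

function collectSnapshot(
  samples: { model: string; tokens: number; latencyMs: number; ok: boolean }[],
): MetricsSnapshot {
  const tokensPerModel: Record<string, number> = {};
  for (const s of samples) tokensPerModel[s.model] = (tokensPerModel[s.model] ?? 0) + s.tokens;
  const lat = samples.map((s) => s.latencyMs).sort((a, b) => a - b);
  return {
    ts: Date.now(),
    tokensPerModel,
    p50LatencyMs: lat[Math.floor(lat.length / 2)] ?? 0,
    errorRate: samples.length ? samples.filter((s) => !s.ok).length / samples.length : 0,
    activeSessions: 0, // would come from the gateway's session registry
  };
}
```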

Current Workaround

  • agent_health_monitor.sh script checks point-in-time health
  • journalctl log parsing for error patterns
  • No time-series data or visualization

Environment

  • OpenClaw gateway on port 18789
  • Admin dashboard exists but lacks real-time metrics

Source: https://github.com/openclaw/openclaw/issues/20371

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20668 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20668 url: "https://github.com/openclaw/openclaw/issues/20668" tags: [github, openclaw]


[20668] [Bug]: Discord gateway never escalates from resume to identify after repeated failures

URL: https://github.com/openclaw/openclaw/issues/20668

Source: https://github.com/openclaw/openclaw/issues/20668

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20084 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20084 url: "https://github.com/openclaw/openclaw/issues/20084" tags: [github, openclaw]


[20084] [Bug] Non-PTY exec completely broken on Windows - commands do not execute

URL: https://github.com/openclaw/openclaw/issues/20084

Source: https://github.com/openclaw/openclaw/issues/20084

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20006 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20006 url: "https://github.com/openclaw/openclaw/issues/20006" tags: [github, openclaw]


[20006] Bug: models list shows Auth: no for providers using environment variable API keys

URL: https://github.com/openclaw/openclaw/issues/20006

Issue body

Issue

The `models list` command prints:

Model                            Input       Ctx   Local  Auth  Tags
openrouter/moonshotai/kimi-k2.5  text+image  256k  no     yes   default,fallback#2,configured
openai/o3-mini                   text        195k  no     yes   fallback#1,configured
openrouter/qwen/qwen3-235b-a22b  text        40k   no     yes   fallback#3

This is shown for OpenRouter models even when:

  • the environment variable is set and valid
  • API calls are working correctly
  • Session status confirms the key is loaded ()

Current Behavior

Expected Behavior

The Auth column should reflect actual authentication state, including:

  • API keys stored in auth-profiles.json (✅ works)
  • OAuth tokens (✅ works)
  • Environment variable API keys (❌ currently shows )

Reproduction

  1. Set in environment
  2. Run the `models list` command
  3. Observe for OpenRouter models despite working API calls

Environment

  • OpenClaw version: 2026.2.16
  • OS: macOS
  • Model: openrouter/moonshotai/kimi-k2.5

Related Issues

  • 7254 - auth-profiles.json does not support env var substitution

  • 12003 - OpenAI API key not persisted to auth-profiles.json

This is a display/UI issue — the actual authentication works fine, but the command's Auth column only checks cached/local auth state, not env var availability.

Source: https://github.com/openclaw/openclaw/issues/20006

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22228 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22228 url: "https://github.com/openclaw/openclaw/issues/22228" tags: [github, openclaw]


[22228] macOS app: no backoff on health check rejection (floods gateway log)

URL: https://github.com/openclaw/openclaw/issues/22228

Issue body

Bug

The macOS app sends health RPC calls on its node connection. The gateway rejects these with unauthorized role: node (by design - health is not a node-role method in server-methods.ts).

The problem: the macOS app retries with no backoff, producing 100+ failed health checks per second. This generated a 3.2GB log file in ~15 minutes.

Evidence

Gateway log showing ~100 rejections in a single second, all from one node connection:

2026-02-20T20:44:40.927Z [ws] ⇄ res ✗ health 0ms errorCode=INVALID_REQUEST errorMessage=unauthorized role: node conn=98e0f099…dc3e
2026-02-20T20:44:40.928Z [ws] ⇄ res ✗ health 0ms errorCode=INVALID_REQUEST errorMessage=unauthorized role: node conn=98e0f099…dc3e
... (100+ more in the same second)

Root Cause

server-methods.ts rejects all non-node-role methods from node connections:

if (role === "node") {
    return errorShape(ErrorCodes.INVALID_REQUEST, `unauthorized role: ${role}`);
}

The iOS app already handles this correctly in NodeAppModel.swift:

if lower.contains("unauthorized role") {
    await self.setGatewayHealthMonitorDisabled(true)
}

The macOS app has no equivalent logic. GatewayHealthMonitor only exists in apps/ios/.

Expected Behavior

macOS app should either:

  1. Disable health monitoring on the node connection when receiving unauthorized role (like iOS does)
  2. Not send health checks from the node connection at all
  3. At minimum, implement exponential backoff on repeated failures
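The backoff behavior requested here can be sketched as follows, assuming hypothetical `sendHealthCheck`/`sleep` callbacks (this is not the app's actual code): double the delay on each failure up to a cap, and stop entirely on "unauthorized role", mirroring the iOS handling.

```typescript
// Pure helper: compute the next retry delay (reset on success, doubled
// and capped on failure).
function nextHealthDelay(currentMs: number, ok: boolean, capMs = 60_000): number {
  return ok ? 1_000 : Math.min(currentMs * 2, capMs);
}

// Sketch of a health-check loop using the helper. Callback names are
// illustrative assumptions.
async function healthLoop(
  sendHealthCheck: () => Promise<void>,
  sleep: (ms: number) => Promise<void>,
): Promise<void> {
  let delayMs = 1_000;
  for (;;) {
    try {
      await sendHealthCheck();
      delayMs = nextHealthDelay(delayMs, true);
    } catch (err) {
      // Gateway rejects health from node-role connections by design,
      // so stop monitoring instead of retrying (like iOS does).
      if (String(err).includes("unauthorized role")) return;
      delayMs = nextHealthDelay(delayMs, false);
    }
    await sleep(delayMs);
  }
}
```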

Environment

  • OpenClaw 2026.2.19-2
  • macOS 26.3.0 (arm64)
  • Mac mini M4, gateway and macOS app on same machine

솔루션/댓글

@dyreckt

Closing — the root cause analysis was incomplete. The fix was removing the node role from the macOS device pairing (operator-only), which eliminated the health check flood. The actual code path causing 100+ health RPCs/sec from the node connection wasn't identified in source.

출처: https://github.com/openclaw/openclaw/issues/22228

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21929 url: "h

원문


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21929 url: "https://github.com/openclaw/openclaw/issues/21929" tags: [github, openclaw]


[21929] Slack thread.inheritParent: true does not pass parent message to thread sessions

URL: https://github.com/openclaw/openclaw/issues/21929

이슈 내용

Problem

When channels.slack.thread.inheritParent: true is set, thread sessions should inherit the parent message context. However, this doesn't appear to work.

Current behavior

  1. User posts a message in a Slack channel
  2. Bot replies in a thread
  3. User replies in the thread
  4. A new session is created for the thread reply
  5. The bot has no context of the original conversation and asks "what are you talking about?"

Expected behavior

The parent message (and possibly the bot's first reply) should be included in the thread session context, so the bot maintains continuity.

Config

"slack": {
  "thread": {
    "historyScope": "thread",
    "inheritParent": true
  }
}

Workaround

Currently using a manual workaround by tracking active threads in memory/slack-active-threads.json and fetching history on session start, but this is error-prone.
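The desired inheritParent behavior can be sketched as a pure step: given the thread history (e.g. as returned by Slack's conversations.replies, parent message first), prepend the parent and optionally the bot's first reply to the new thread session's context. Types and the `buildInheritedContext` name are illustrative assumptions, not OpenClaw internals.

```typescript
// Minimal shape of a Slack message for this sketch.
interface SlackMessage {
  ts: string;
  user?: string;
  bot_id?: string;
  text: string;
}

function buildInheritedContext(
  threadHistory: SlackMessage[],
  includeFirstBotReply = true,
): SlackMessage[] {
  if (threadHistory.length === 0) return [];
  // conversations.replies returns the parent message as the first element
  const parent = threadHistory[0];
  const firstBotReply = threadHistory.slice(1).find((m) => m.bot_id !== undefined);
  return includeFirstBotReply && firstBotReply ? [parent, firstBotReply] : [parent];
}
```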

Environment

  • OpenClaw version: 2026.1.29
  • Channel: Slack (Socket Mode)

솔루션/댓글

@norituku

Closing in favor of finding an existing issue to support.

출처: https://github.com/openclaw/openclaw/issues/21929

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21799 url: "h

원문


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21799 url: "https://github.com/openclaw/openclaw/issues/21799" tags: [github, openclaw]


[21799] Feature Request: Allow plaintext WebSocket on private/LAN IPs (gateway.security.allowPlaintextLan)

URL: https://github.com/openclaw/openclaw/issues/21799

이슈 내용

Feature Request: Allow plaintext WebSocket on private/LAN IPs (gateway.security.allowPlaintextLan)

Repository: https://github.com/openclaw/openclaw/issues/new


Problem

Since v2026.2.19, the CLI and Gateway enforce a security check (isSecureWebSocketUrl) that blocks plaintext ws:// connections to any non-loopback address. This affects all deployments where gateway.bind is set to "lan" (or "auto" resolving to LAN).

Impact

  1. CLI commands fail: openclaw message send, openclaw status, and other CLI commands throw SECURITY ERROR when the gateway resolves to a LAN IP (e.g., ws://192.168.70.11:18789)
  2. HTTP /tools/invoke API fails — The gateway itself fails to execute tools internally because its own self-connection URL is a LAN IP
  3. Cron jobs break — Any scheduled task using openclaw message send (calendar summaries, newsletters) stops working
  4. bind=loopback workaround requires device re-pairing after every container/server reboot and loses LAN dashboard access

Who this affects

Users running OpenClaw on a home server, LXC container, NAS, Raspberry Pi, or any device where:

  • The gateway needs to be accessible from other devices on the LAN (dashboard, webchat)
  • The CLI runs on the same machine as the gateway
  • There is no TLS/reverse proxy in front of the gateway (common for home setups)

Proposed Solution

Add a configuration option to exempt RFC 1918 private IPs from the plaintext WebSocket security check:

{
  "gateway": {
    "security": {
      "allowPlaintextLan": true  // default: false
    }
  }
}

When enabled, ws:// connections to private/LAN addresses would be permitted:

  • 10.0.0.0/8
  • 172.16.0.0/12
  • 192.168.0.0/16
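The proposed check is easy to express: exempt RFC 1918 addresses from the plaintext-WebSocket guard only when the flag is set, while loopback stays always allowed. A sketch (function names are illustrative, not OpenClaw's actual guard):

```typescript
// True if the host is an RFC 1918 private IPv4 address.
function isRfc1918(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return a === 10 || (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168);
}

// Whether a plaintext ws:// connection to this host should be allowed.
function allowPlaintextWs(host: string, allowPlaintextLan: boolean): boolean {
  if (host === "127.0.0.1" || host === "localhost") return true; // loopback always ok
  return allowPlaintextLan && isRfc1918(host);
}
```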

Alternative approaches considered

  1. --url ws://127.0.0.1:18789 CLI flag — Doesn't exist for message send subcommand
  2. Environment variable override (OPENCLAW_ALLOW_PLAINTEXT_LAN=1) — Would also work, lower priority
  3. **Separate `g...

솔루션/댓글

@tyler6204

Fixed in commit 47f39797583186a0af0f34d22bf76d7cfbdf1a9f (merged to main).

buildGatewayConnectionDetails() now always resolves local self-connections to loopback (127.0.0.1) regardless of gateway.bind, so local CLI/tool/gateway calls no longer resolve to LAN/Tailscale IPs and no longer trigger the plaintext non-loopback security guard.
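The idea behind the merged fix can be sketched in a few lines (the function name below is illustrative, not the actual buildGatewayConnectionDetails()): a local self-connection always targets loopback, regardless of the externally advertised bind address.

```typescript
// Local CLI/tool/gateway self-calls use 127.0.0.1 even when gateway.bind
// resolves to a LAN or Tailscale IP, so the plaintext non-loopback guard
// never triggers for them.
function selfConnectionUrl(bindAddress: string, port: number, isLocal: boolean): string {
  const host = isLocal ? "127.0.0.1" : bindAddress;
  return `ws://${host}:${port}`;
}
```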

출처: https://github.com/openclaw/openclaw/issues/21799

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20201 url: "h

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20201 url: "https://github.com/openclaw/openclaw/issues/20201" tags: [github, openclaw]


[20201] ollama: trailing system message in convertToOllamaMessages breaks llama3.1 (empty response, 1 token)

URL: https://github.com/openclaw/openclaw/issues/20201

출처: https://github.com/openclaw/openclaw/issues/20201

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20180 url: "h

원문


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20180 url: "https://github.com/openclaw/openclaw/issues/20180" tags: [github, openclaw]


[20180] Config validation crash: dmPolicy='open' requires allowFrom to include '*'

URL: https://github.com/openclaw/openclaw/issues/20180

이슈 내용

Bug Report

Issue: OpenClaw crashes during config changes with validation error:

Config validation failed: channels.telegram.allowFrom: channels.telegram.dmPolicy="open" requires channels.telegram.allowFrom to include "*"

Impact:

  • OpenClaw process terminates during config updates
  • No graceful degradation or recovery
  • User loses active session and work

Expected Behavior:

  • Config validation should happen before applying changes
  • Failed validation should not crash the process
  • Clear error messaging with recovery instructions

Actual Behavior:

  • Process dies on validation failure
  • No rollback mechanism
  • Abrupt termination

Related: This highlights the need for atomic config changes and safemode recovery (see related feature request).

Environment:

  • Reported by: aronchick via Discord #openclaw-upstream
  • Context: User's OpenClaw instance died during attempted config change

Labels: bug, config, validation, crash

출처: https://github.com/openclaw/openclaw/issues/20180

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20618 url: "h

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20618 url: "https://github.com/openclaw/openclaw/issues/20618" tags: [github, openclaw]


[20618] Fix iMessage group allowlist bypass when is_group is false/null (pre-isGroup policy enforcement)

URL: https://github.com/openclaw/openclaw/issues/20618

출처: https://github.com/openclaw/openclaw/issues/20618

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19107 url: "h

원문


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19107 url: "https://github.com/openclaw/openclaw/issues/19107" tags: [github, openclaw]


[19107] Exec tool returns empty output without PTY mode on Windows (regression in 2026.2.15)

URL: https://github.com/openclaw/openclaw/issues/19107

출처: https://github.com/openclaw/openclaw/issues/19107

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20133 url: "h

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20133 url: "https://github.com/openclaw/openclaw/issues/20133" tags: [github, openclaw]


[20133] [Bug]: Mac app (TestFlight) silently exits after a few minutes on macOS 26 Tahoe

URL: https://github.com/openclaw/openclaw/issues/20133

솔루션/댓글

@lvnilesh

Root Cause Found: SIGPIPE

Dug into the macOS unified log and found the smoking gun:

2026-02-18 10:06:39.763 launchd[1]: [gui/501/application.ai.openclaw.mac.debug.9580957.9580963 [29375]:]
  exited due to SIGPIPE | sent by OpenClaw[29375], ran for 151485ms

The app is killed by an unhandled SIGPIPE signal, not by macOS energy management, Jetsam, or App Nap.

What happens

  1. App connects to gateway via WSS through Traefik reverse proxy (port 443)
  2. After ~90–150 seconds, the WebSocket/TCP connection experiences a brief hiccup
  3. App attempts to write to the now-dead socket
  4. Kernel delivers SIGPIPE to the process
  5. Since the debug build does not ignore/handle SIGPIPE, the default action (terminate) kills the app
  6. No crash report is generated because SIGPIPE is a clean exit

Evidence from kernel TCP logs

The kernel TCP connection summaries confirm the timing:

tcp_connection_summary process: OpenClaw:29375 Duration: 149.925 sec
tcp_connection_summary p...


출처: https://github.com/openclaw/openclaw/issues/20133

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20181 url: "h

[원문](https://github.com/openclaw/openclaw/issues/20181)

---
date: 2026-02-18
source: github
repo: openclaw/openclaw
issue: 20181
url: "https://github.com/openclaw/openclaw/issues/20181"
tags: [github, openclaw]
---

# [20181] Feature Request: SafeMode Config Recovery & Atomic Config Changes

**URL:** https://github.com/openclaw/openclaw/issues/20181

## 이슈 내용

## Feature Request: SafeMode Config Recovery

**Related Bug**: #20180 (Config validation crashes)

**Problem**: OpenClaw crashes on config validation failures with no recovery mechanism, atomic config changes, or self-healing capabilities.

**Proposed Solution**: Implement SafeMode that allows OpenClaw to:
1. **Atomic Config Changes**: All-or-nothing config updates with automatic rollback
2. **Auto-Fix Common Issues**: Safely resolve validation errors with deterministic fixes
3. **Graceful Degradation**: Continue with last-known-good config on validation failure
4. **Roll-Forward Architecture**: Transaction log and rollback capabilities

**Key Components**:
- Staging → Active → Backup config flow
- Auto-fix engine with safety constraints (max 3 fixes, no security changes, user consent)
- `openclaw config rollback` and `openclaw config auto-fix` commands
- Transaction logging and audit trail
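The staging → active → backup flow can be sketched as below, under stated assumptions: `validate` is a placeholder for real config validation, and the function name is illustrative, not part of the proposed PRD. The key property is that validation happens before anything touches disk, and the swap is a single rename.

```typescript
import * as fs from "node:fs";

// Atomic config update: validate first, stage to a temp file, keep a
// backup of the current config, then swap with rename (atomic on the
// same filesystem). Returns false (no changes) on validation failure.
function atomicWriteConfig(
  activePath: string,
  next: string,
  validate: (raw: string) => boolean,
): boolean {
  if (!validate(next)) return false; // reject before touching disk
  const staging = activePath + ".staging";
  const backup = activePath + ".bak";
  fs.writeFileSync(staging, next); // 1. stage
  if (fs.existsSync(activePath)) fs.copyFileSync(activePath, backup); // 2. backup
  fs.renameSync(staging, activePath); // 3. atomic swap
  return true;
}
```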

**Success Metrics**:
- Zero config-related process crashes
- 95%+ auto-resolution of common validation errors  
- 100% rollback success rate
- <5 second config change latency

**Implementation Phases**:
1. **Immediate**: Atomic config changes (fixes #20180)
2. **Near-term**: SafeMode auto-fix engine  
3. **Future**: Advanced recovery and drift detection

**Full PRD**: Available in workspace as `PRD-SafeMode-Config-Recovery.md`

**Priority**: High (addresses crash bugs + enables self-healing)

**Labels**: enhancement, config, reliability, self-healing


출처: https://github.com/openclaw/openclaw/issues/20181

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20715 url: "h

[원문](https://github.com/openclaw/openclaw/issues/20715)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20715
url: "https://github.com/openclaw/openclaw/issues/20715"
tags: [github, openclaw]
---

# [20715] message tool: disable_web_page_preview parameter not working

**URL:** https://github.com/openclaw/openclaw/issues/20715

## 이슈 내용

## Problem
The `disable_web_page_preview` parameter in the `message` tool (Telegram channel) is not working. When sending messages with links, Telegram still shows link preview cards even when `disable_web_page_preview=true` is set.

## Expected behavior
When `disable_web_page_preview=true` is passed to the message tool, Telegram should NOT show link preview cards. Links should remain clickable but without the preview.

## Actual behavior
Link previews are still showing in Telegram messages despite setting the parameter.

## Steps to reproduce
1. Send message via OpenClaw message tool with:
   - channel: telegram
   - message: "Text with <https://example.com>"
   - disable_web_page_preview: true
2. Message is delivered successfully
3. Link preview card still appears in Telegram

## Environment
- OpenClaw version: 2026.2.12
- Telegram plugin: enabled
- Platform: Linux

## Additional context
According to Telegram Bot API documentation, `disable_web_page_preview` is a standard parameter for `sendMessage` method. Most Python libraries (pyTelegramBotAPI, aiogram, telebot, Telethon) support this parameter.

This would be very useful for sending clean messages with file paths and URLs without cluttering the chat with preview cards.

## References
- Telegram Bot API docs: https://core.telegram.org/bots/api#sendmessage
- Similar parameter in Python libraries works correctly
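One possible cause worth noting: Telegram Bot API 7.0 deprecated `disable_web_page_preview` in favor of `link_preview_options.is_disabled`, so a payload carrying only the legacy field may be ignored by newer servers. A hedged sketch of sending both (the `buildSendMessagePayload` name is illustrative):

```typescript
// Build a sendMessage payload that disables previews on both old and
// new Bot API versions. This is a sketch, not OpenClaw's message tool.
function buildSendMessagePayload(chatId: string, text: string, disablePreview: boolean) {
  return {
    chat_id: chatId,
    text,
    // legacy field, honored by pre-7.0 Bot API
    disable_web_page_preview: disablePreview,
    // current field per Bot API 7.0+
    link_preview_options: { is_disabled: disablePreview },
  };
}
```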


출처: https://github.com/openclaw/openclaw/issues/20715

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21284 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21284)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 21284
url: "https://github.com/openclaw/openclaw/issues/21284"
tags: [github, openclaw]
---

# [21284] Bug: "xhigh" thinking level error message missing openai-codex/gpt-5.2

**URL:** https://github.com/openclaw/openclaw/issues/21284


출처: https://github.com/openclaw/openclaw/issues/21284

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21819 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21819)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 21819
url: "https://github.com/openclaw/openclaw/issues/21819"
tags: [github, openclaw, bug]
---

# [21819] [Bug]: Token usage not being recorded in session history - totalTokens shows null

**URL:** https://github.com/openclaw/openclaw/issues/21819
**Labels:** bug

## 이슈 내용

### Summary

Token usage data is not being recorded in session history files (.jsonl). After upgrading to v2026.2.19, multiple sessions show `totalTokens: null` in sessions.json, even though the sessions were active and used the API extensively.

### Steps to reproduce

1. Start multiple active sessions using MiniMax-M2.5 model
2. Have normal conversations with significant token usage (200k context)
3. Check session token stats via `openclaw sessions list` or sessions.json
4. Observe that totalTokens shows as null for older sessions

### Expected behavior

Each session should record token usage (inputTokens, outputTokens, totalTokens) from API responses in the .jsonl session files, and display accurate counts in sessions.json.


### Actual behavior

- sessions.json shows `totalTokens: null` and `totalTokensFresh: false`
- The underlying .jsonl files contain message records without any `usage` field
- The issue persists across multiple sessions


### OpenClaw version

2026.2.19 (upgraded today via openclaw update)

### Operating system

macOS 26.3 (arm64)

### Install method

_No response_

### Logs, screenshots, and evidence

```shell
### Evidence 1: Sessions List Shows Unknown Tokens
Session Age Model Tokens
agent:main:main just now MiniMax-M2.5 26k/200k (13%) Working
agent:main:3c00d854...380f06 38h ago MiniMax-M2.5 unknown/200k (?%)  Missing
agent:main:3fc4421f...341787 47h ago MiniMax-M2.5 unknown/200k (?%)  Missing

### Evidence 2: sessions.json Token Fields Are Null

"agent:main:3fc4421f-cf67-4133-8a13-74037f341787": {
  "totalTokens": null,
  "totalTokensFresh": null,
  "inputTokens": null,
  "outputTokens": null
}
Evidence 3: Jsonl Files Contain Zero Usage Records
# Checked ALL session .jsonl files - none contain 'usage' field
3c00d854.jsonl: 0 messages with usage
3fc4421f.jsonl: 0 messages with usage  
main.jsonl:    0 messages with usage
Evidence 4: Session Files Are Large (Proof of Usage)
3c00d854-cb35-4571-9363-91b93c380f06.jsonl  2.6MB  ← 38 hours ago
3...

## 솔루션/댓글

### @zymclaw

Token usage is tracked in-memory and stored in sessions.json, but never written to .jsonl session files:

- Current session shows token data in sessions.json (real-time memory tracking)
- But the .jsonl files contain zero usage fields in any message
- When old sessions close, the data in sessions.json is lost (possibly cleaned up)
- Since jsonl has no backup → token usage is completely lost
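The missing persistence step described above could look like the sketch below: append a usage record to the session `.jsonl` whenever an API response reports token counts, so the data survives process restarts and session cleanup. The record shape and function name are illustrative assumptions, not OpenClaw's actual schema.

```typescript
import * as fs from "node:fs";

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

// Append one usage record per API response to the session .jsonl so
// token statistics are durable, not just held in memory/sessions.json.
function appendUsageRecord(jsonlPath: string, usage: Usage): void {
  const record = {
    type: "usage",
    ...usage,
    totalTokens: usage.inputTokens + usage.outputTokens,
    ts: new Date().toISOString(),
  };
  fs.appendFileSync(jsonlPath, JSON.stringify(record) + "\n");
}
```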

### @zymclaw

## Additional Evidence: Token Count Actually Decreases!

I found a backup from earlier today (11:49 AM) and compared token counts. The bug is even worse than initially thought:

### Evidence: Token Count Discrepancy

| Time | totalTokens | inputTokens | outputTokens |
|------|-------------|-------------|--------------|
| 11:47 (backup) | **103,420** | 28 | 60 |
| Current | **37,490** | 37,854 | 1,208 |

### Problems Found:

1. **Math doesn't add up**: 28 + 60 = 88, but totalTokens = 103,420
2. **Token count DECREASED**: From 103,420 → 37,490 (lost ~65,930 tokens)
3. **Current session values don't match**: 37,854 + 1,208 = 39,062, but total = 37,490

### Root Cause (Updated)

The token counting logic itself appears to be broken:
- Numbers don't match the actual input/output tokens
- Token counts can decrease over time (possibly due to session reset/compaction without preserving stats)
- The stats are only kept in memory, never persisted to disk

This is a **more severe bug** than just "mi...

### @zymclaw

## Additional Finding: /new Session Causes Complete Token Statistics Loss

### Analysis

Based on our investigation, the token loss is strongly correlated with `/new` (new session) commands:

1. **Every `/new` creates a brand new session** with a new session ID
2. **Token statistics are NOT inherited** - new session starts from 0
3. **Old session token data is lost** - sessions.json only keeps a limited number of sessions

### Timeline Evidence

| Time | Event | Token Count |
|------|-------|-------------|
| 11:47 (backup) | Last known good | 103,420 (but math wrong: 28+60=88) |
| ~12:00-20:00 | likely ran `/new` | Statistics reset |
| 20:46 | Current session | 37,490 |

### Total Usage

I estimate approximately **9.5 million tokens** used today, but there's no way to verify this due to the bug.

### Impact

**This bug severely affects every user who uses `/new` command:**
- Each new session = complete loss of previous token statistics
- Users cannot track their actual API usage
- ...


출처: https://github.com/openclaw/openclaw/issues/21819

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21236 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21236)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 21236
url: "https://github.com/openclaw/openclaw/issues/21236"
tags: [github, openclaw]
---

# [21236] [Bug]: Gateway returns "pairing required" after update to 2026.2.19-2

**URL:** https://github.com/openclaw/openclaw/issues/21236


출처: https://github.com/openclaw/openclaw/issues/21236

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20669 url: "h

[원문](https://github.com/openclaw/openclaw/issues/20669)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20669
url: "https://github.com/openclaw/openclaw/issues/20669"
tags: [github, openclaw]
---

# [20669] [Bug]: Agent exec ignores node binding — always routes to gateway despite correct config

**URL:** https://github.com/openclaw/openclaw/issues/20669


출처: https://github.com/openclaw/openclaw/issues/20669

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20972 url: "h

[원문](https://github.com/openclaw/openclaw/issues/20972)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20972
url: "https://github.com/openclaw/openclaw/issues/20972"
tags: [github, openclaw]
---

# [20972] [Bug]: Slack DM replies fail with missing_recipient_team_id when streaming is enabled

**URL:** https://github.com/openclaw/openclaw/issues/20972

## 이슈 내용

## Description

Slack DM (direct message) replies fail silently when the bot tries to respond. Channel messages work fine. The error in logs is `missing_recipient_team_id`, which comes from Slack's `chat.startStream` / `chat.stopStream` streaming API.

## Steps to Reproduce

1. Configure Slack channel with socket mode
2. Pair a user via DM (pairing works fine)
3. User sends a DM to the bot
4. Bot receives the message (shows `embedded run start` in logs)
5. Bot generates a response but fails to deliver it

## Expected Behavior

Bot replies to DMs the same way it replies in channels.

## Actual Behavior

- **Channels:** Replies work ✅
- **DMs:** Reply silently fails ❌
- **Proactive messages** (via `message` tool with `target: <userId>`): Work ✅

The error `missing_recipient_team_id` appears in logs, originating from Slack's streaming API (`chat.startStream`).

## Workaround

Setting `streaming: false` in the Slack channel config fixes channel replies but does NOT fix DM replies. The only workaround for DMs is sending proactive messages via the `message` tool instead of relying on automatic reply delivery.

## Environment

- OpenClaw version: 2026.2.17
- Slack mode: socket
- OS: macOS (Apple Silicon)
- Node: v24.13.1

## Additional Context

- Adding `assistant:write` scope to the Slack app did not resolve the issue
- `blockStreaming: false` and `streaming: false` config options were tried
- The `chat.startStream` API requires `recipient_team_id` which OpenClaw does not seem to pass for DM conversations
- Regular `chat.postMessage` via curl works fine for both channels and DMs


출처: https://github.com/openclaw/openclaw/issues/20972

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21095 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21095)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 21095
url: "https://github.com/openclaw/openclaw/issues/21095"
tags: [github, openclaw]
---

# [21095] TUI: Add OSC 8 hyperlink support for file paths

**URL:** https://github.com/openclaw/openclaw/issues/21095

## 이슈 내용

## Feature Request

**Problem:** File paths displayed in the TUI (e.g. from tool output, file reads, etc.) are not clickable in terminals that support hyperlinks (iTerm2, WezTerm, Windows Terminal, etc.).

**Proposed Solution:** Emit [OSC 8 hyperlink escape sequences](https://gist.github.com/egmontkob/eb114294efbcd5adb1944c9f3cb5feda) around file paths in TUI output.

The format is straightforward:

\e]8;;file:///absolute/path\e\\displayed text\e]8;;\e\\


This would allow ⌘+click (iTerm2) or similar shortcuts in other terminals to open files directly — ideally in the user's configured editor.

**Context:** When working with workspace files, being able to click paths to open them in VS Code (or any editor) would significantly improve the TUI workflow. The webchat UI already supports clickable links.

**Terminals with OSC 8 support:** iTerm2, WezTerm, Windows Terminal, GNOME Terminal (VTE 0.50+), foot, kitty, and others.
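The sequence above can be emitted with a small helper; terminals without OSC 8 support simply render the display text. A sketch (function name is illustrative, not the TUI's actual code):

```typescript
// Wrap a file path in an OSC 8 hyperlink: OSC 8 ; ; URI ST text OSC 8 ; ; ST,
// where ST (string terminator) is ESC \.
function osc8FileLink(absolutePath: string, display?: string): string {
  const OSC = "\x1b]8;;";
  const ST = "\x1b\\";
  return `${OSC}file://${absolutePath}${ST}${display ?? absolutePath}${OSC}${ST}`;
}
```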


출처: https://github.com/openclaw/openclaw/issues/21095

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20192 url: "h

[원문](https://github.com/openclaw/openclaw/issues/20192)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20192
url: "https://github.com/openclaw/openclaw/issues/20192"
tags: [github, openclaw]
---

# [20192] Bug: Google Chat channel stays stopped (auto-restart loop, no lastError)

**URL:** https://github.com/openclaw/openclaw/issues/20192


출처: https://github.com/openclaw/openclaw/issues/20192

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21283 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21283)

---
date: 2026-02-20
source: github
repo: openclaw/openclaw
issue: 21283
url: "https://github.com/openclaw/openclaw/issues/21283"
tags: [github, openclaw]
---

# [21283] [Bug]: Control UI /agents Tool Access buttons disabled due to race condition (configForm null)

**URL:** https://github.com/openclaw/openclaw/issues/21283


출처: https://github.com/openclaw/openclaw/issues/21283

점수: 6/10 — 점수 6/10: openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21013 url: "h

[원문](https://github.com/openclaw/openclaw/issues/21013)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 21013
url: "https://github.com/openclaw/openclaw/issues/21013"
tags: [github, openclaw, bug]
---

# [21013] [Bug]: docker-setup.sh fails on Vite

**URL:** https://github.com/openclaw/openclaw/issues/21013
**Labels:** bug

## 이슈 내용

### Summary

docker-setup.sh fails while trying to build vite

### Steps to reproduce

1. git clone https://github.com/openclaw/openclaw.git
2. cd openclaw
3. sudo ./docker-setup.sh

### Expected behavior

The Docker setup script should complete successfully.

### Actual behavior

Here's the last bit of output:

✗ Build failed in 534ms error during build: ../src/logging/logger.ts (2:9): "createRequire" is not exported by "__vite-browser-external", imported by "../src/logging/logger.ts". file: /app/src/logging/logger.ts:2:9

1: import fs from "node:fs";
2: import { createRequire } from "node:module";
            ^
3: import path from "node:path";
4: import { Logger as TsLogger } from "tslog";

at getRollupError (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/parseAst.js:402:41)
at error (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/parseAst.js:398:42)
at Module.error (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:17040:16)
at Module.traceVariable (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:17452:29)
at ModuleScope.findVariable (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:15070:39)
at Identifier.bind (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:5447:40)
at CallExpression.bind (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:2829:23)
at CallExpression.bind (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:12179:15)
at VariableDeclarator.bind (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:2829:23)
at VariableDeclaration.bind (file:///app/node_modules/.pnpm/[email protected]/node_modules/rollup/dist/es/shared/node-entry.js:2825:28)

...

출처: https://github.com/openclaw/openclaw/issues/21013

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20285 url: "h

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20285 url: "https://github.com/openclaw/openclaw/issues/20285" tags: [github, openclaw]


[20285] Systematic Code Quality Improvement & Performance Optimization Roadmap

URL: https://github.com/openclaw/openclaw/issues/20285

출처: https://github.com/openclaw/openclaw/issues/20285

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22298 url: "h

원문


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22298 url: "https://github.com/openclaw/openclaw/issues/22298" tags: [github, openclaw]


[22298] [Bug]: Isolated cron jobs with announce delivery fail with pairing required (scope-upgrade)

URL: https://github.com/openclaw/openclaw/issues/22298

출처: https://github.com/openclaw/openclaw/issues/22298

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20082 url: "h

원문


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20082 url: "https://github.com/openclaw/openclaw/issues/20082" tags: [github, openclaw]


[20082] QMD memory search returns empty results after 2026.2.17 upgrade (collection naming change)

URL: https://github.com/openclaw/openclaw/issues/20082

출처: https://github.com/openclaw/openclaw/issues/20082

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 9956 url: "ht

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 9956 url: "https://github.com/openclaw/openclaw/issues/9956" tags: [github, openclaw]


[9956] [Bug]: GPT-OSS-120B function calling fails with JSON parse error in OpenClaw

URL: https://github.com/openclaw/openclaw/issues/9956

출처: https://github.com/openclaw/openclaw/issues/9956

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21287 url: "h

원문


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21287 url: "https://github.com/openclaw/openclaw/issues/21287" tags: [github, openclaw]


[21287] Feature: Model tier aliases for provider-agnostic cron job configuration

URL: https://github.com/openclaw/openclaw/issues/21287

출처: https://github.com/openclaw/openclaw/issues/21287

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21007 url: "h

원문


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21007 url: "https://github.com/openclaw/openclaw/issues/21007" tags: [github, openclaw]


[21007] Mistaken submission — please ignore

URL: https://github.com/openclaw/openclaw/issues/21007

이슈 내용

This issue was created by mistake. Please ignore and close.

출처: https://github.com/openclaw/openclaw/issues/21007

점수: 6/10 — 점수 6/10: openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 19790 url: "h

원문


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 19790 url: "https://github.com/openclaw/openclaw/issues/19790" tags: [github, openclaw]


[19790] [Feature]: Allow disabling Discord intermediate status reactions (tool/thinking/done emoji)

URL: https://github.com/openclaw/openclaw/issues/19790

솔루션/댓글

@dburkes

Still reproducible in 2026.2.19-2. Confirmed: intermediate status reactions (🧠, 🛠️, ✅) still appear and disappear on tool-using responses. Simple text-only responses (no tool calls) don't trigger them. The flicker is most noticeable when multiple tools are used in sequence (e.g., file read + exec). Would love a messages.statusReactions: false config option to opt out.

@Gitjay11

@dburkes I've updated my PR to handle this exactly as you suggested. I have committed the changes so you can check it now

You can now opt out of all intermediate status reactions (🧠, 🛠️, ✅, etc.) to prevent that flickering behavior. When disabled, the bot will only drop the initial acknowledgment reaction (👀) and let it stay for the duration of the response.

You can disable it globally for all compatible channels:

```yaml
messages:
  statusReactions:
    enabled: false
```

Or you can disable it specifically just for your Discord accounts:

```yaml
channels:
  discord:
    statusReactions: false
```

The implementation has also been updated to integrate smoothly with the new centralized StatusReactionsConfig system that was recently merged into the main branch. Let me know if you run into any other issues

Source: https://github.com/openclaw/openclaw/issues/19790

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 19854 url: "h

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 19854 url: "https://github.com/openclaw/openclaw/issues/19854" tags: [github, openclaw]


[19854] nextcloud-talk plugin: EADDRINUSE crash loop due to premature startAccount promise resolution

URL: https://github.com/openclaw/openclaw/issues/19854

Solution/comments

@jzee

Observing the exact same issue; identified the same root cause and the same suggested fix. OpenClaw: 2026.2.17, OS: Debian 6.12.63-1 (2025-12-30) x86_64 GNU/Linux

Reverting to 2026.2.15 with openclaw update --tag v2026.2.15 restores the working system

@KundKMC

We can confirm this issue on our setup: OpenClaw 2026.2.17, Ubuntu 24.04 (Noble), Node 22.22.0, nextcloud-talk channel configured with systemd service.

Exact same crash loop as described — gateway starts, NextCloud Talk webhook binds port, startAccount resolves immediately, framework triggers auto-restart after 5s, second bind attempt hits EADDRINUSE, gateway process crashes, systemd restarts, repeat.

We applied a runtime patch to the compiled gateway-cli-*.js that keeps the startAccount promise pending when the return value includes a stop function (indicating a long-running channel):

// Before:
const trackedPromise = Promise.resolve(task).catch(...)

// After:
const trackedPromise = Promise.resolve(task).then((result) => {
  if (result && typeof result.stop === "function" && abort.signal && !abort.signal.aborted) {
    return new Promise((resolve) =>
      abort.signal.addEventListener("abort", () => resolve(result), { once: true })
    );
  }
  return resu...

@vhark

Confirming this bug persists in v2026.2.19 (latest as of today). Same crash loop behavior as described.

Additional observation: SIGUSR1 hot-reload triggers the same crash path

In addition to the crash loop on initial startup, the same EADDRINUSE crash occurs during SIGUSR1-triggered config reloads (e.g., when an agent writes `config.patch`). The gateway receives SIGUSR1, restarts channels in-process, and the nextcloud-talk webhook tries to rebind port 8788 while the previous server instance hasn't been torn down yet:

01:18:38 [reload] config change requires gateway restart (meta.lastTouchedAt)
01:18:38 [gateway] received SIGUSR1; restarting
01:18:38 [nextcloud-talk] [default] starting Nextcloud Talk webhook server
01:18:38 [openclaw] Uncaught exception: Error: listen EADDRINUSE: address already in use 0.0.0.0:8788
    at Server.setupListenHandle [as _listen2] (node:net:1940:16)
    at listenInCluster (node:net:1997:12)
    at node:net:2206:7


This means any confi...

@miguelarios

Also experiencing this issue. Running OpenClaw in Docker, getting the same EADDRINUSE crash loop on port 8788 with the nextcloud-talk plugin. The gateway auto-restarts after 5s and hits the port conflict, causing a restart loop.


Source: https://github.com/openclaw/openclaw/issues/19854

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 2788 url: "ht

[Original](https://github.com/openclaw/openclaw/issues/2788)

---
date: 2026-02-21
source: github
repo: openclaw/openclaw
issue: 2788
url: "https://github.com/openclaw/openclaw/issues/2788"
tags: [github, openclaw]
---

# [2788] Feature Request: Recursive Language Model (RLM) integration for unbounded context

**URL:** https://github.com/openclaw/openclaw/issues/2788


Source: https://github.com/openclaw/openclaw/issues/2788

Score: 6/10 — openclaw

---

## [6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20960 url: "h

[Original](https://github.com/openclaw/openclaw/issues/20960)

---
date: 2026-02-19
source: github
repo: openclaw/openclaw
issue: 20960
url: "https://github.com/openclaw/openclaw/issues/20960"
tags: [github, openclaw, bug]
---

# [20960] [Bug]: Unable to Install on Windows 11

**URL:** https://github.com/openclaw/openclaw/issues/20960
**Labels:** bug

## Issue content

### Summary

`npm i -g openclaw` fails on Windows 11, Ryzen 3 4300G with Radeon Graphics, AMD Drivers installed.

### Steps to reproduce

run npm i -g openclaw on Windows 11

### Expected behavior

Should fall back to using no GPU.

### Actual behavior

Installation fails

### OpenClaw version

latest as of 19/02/2026

### Operating system

Windows 11

### Install method

npm global

### Logs, screenshots, and evidence

```shell
npm ERR! code 3221225477
npm ERR! path C:\Program Files (x86)\Nodist\bin\node_modules\openclaw\node_modules\node-llama-cpp
npm ERR! command failed
npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node ./dist/cli/cli.js postinstall
npm ERR! [node-llama-cpp] The prebuilt binary for platform "win" "x64" with Vulkan support is not compatible with the current system, falling back to using no GPU
```

### Impact and severity

Severity: High

### Additional information

No response

Source: https://github.com/openclaw/openclaw/issues/20960

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20206 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20206 url: "https://github.com/openclaw/openclaw/issues/20206" tags: [github, openclaw]


[20206] [Bug]: iMessage attachments blocked by path security policy after 2026.2.17 upgrade — regression from 2026.2.15

URL: https://github.com/openclaw/openclaw/issues/20206

Source: https://github.com/openclaw/openclaw/issues/20206

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21275 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 21275 url: "https://github.com/openclaw/openclaw/issues/21275" tags: [github, openclaw]


[21275] [BUG] Slack auto-responses fail with "missing_recipient_team_id" error

URL: https://github.com/openclaw/openclaw/issues/21275

Source: https://github.com/openclaw/openclaw/issues/21275

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 12385 url: "h

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 12385 url: "https://github.com/openclaw/openclaw/issues/12385" tags: [github, openclaw]


[12385] [Feature]: Security - Context-Based Runtime Security Policy (Shield.md)

URL: https://github.com/openclaw/openclaw/issues/12385

Solution/comments

@EricNetsch

Please, no more md files to attempt to solve security. We need more hard configs that are bulletproof, not more md files.

@fr0gger

Hey @EricNetsch, this is not security by markdown. This file is a policy that standardizes how agents have their security configured. It provides a common structure and can be coupled with any enforcement tool to apply hard controls. Think of it as a security baseline that security tools can evaluate and enforce consistently. The goal is to create a unified policy structure, not to rely on markdown as the security mechanism itself. Hope that helps clarify the goal :)

@darfaz

Great proposal — this aligns closely with what we've been building with ClawMoat, an open-source runtime security layer for AI agents.

A few thoughts from our implementation experience:

On the three-state decision model (block / require_approval / log): This is exactly right. We use a similar pattern — our policy engine evaluates tool calls and returns DENY, WARN, or ALLOW. The key insight is that deterministic pattern matching handles 80%+ of real threats at <1ms latency. The remaining cases (novel attacks) benefit from ML classifiers as a second pass.
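The two-pass layering described here — deterministic patterns first, classifier second — can be sketched as follows. The patterns and the classifier stub are purely illustrative, not ClawMoat's actual scanners:

```javascript
// Two-pass scan: cheap deterministic patterns first, classifier fallback second.
// Patterns and the classifier stub are illustrative only.
const FAST_PATTERNS = [
  /ignore (all )?previous instructions/i, // prompt-injection boilerplate
  /rm\s+-rf\s+\//,                        // destructive shell command
];

function scan(text, classify) {
  for (const p of FAST_PATTERNS) {
    if (p.test(text)) return { decision: "DENY", pass: "deterministic" };
  }
  // Second pass: a slower classifier for novel attacks (stubbed as a keyword check).
  return { decision: classify(text) ? "DENY" : "ALLOW", pass: "classifier" };
}

const classify = (t) => t.includes("exfiltrate");
console.log(scan("ignore previous instructions", classify).pass);     // deterministic
console.log(scan("please exfiltrate the tokens", classify).decision); // DENY
console.log(scan("hello world", classify).decision);                  // ALLOW
```

Only inputs that clear the fast patterns pay the classifier's latency, which is how the "80%+ of threats at <1ms" split falls out.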

On scope coverage: Our current scanners cover prompt (injection + jailbreak), tool.call (dangerous command patterns, exfiltration), secrets.read (credential leakage detection + auto-masking), and memory (poisoning detection). We'd be happy to contribute detection patterns for these scopes to a standardized shield.md format.

**Practical suggestion — start with the high...

@uchibeke

We’ve built a runtime layer that does exactly the “policy structure + enforcement tool” split @fr0gger described: a deterministic, config-driven check before each tool run, with no model in the loop for the allow/deny decision.

How it fits Shield.md-style gating:

  • Policy as config: Passport (W3C DID/VC-style JSON) holds capabilities and limits; separate policy packs define rules per scope (e.g. system.command.execute, code.release.publish). So the “single policy structure” is JSON + limits, not markdown-as-security.
  • Enforcement: A small script (or our OpenClaw skill/plugin) runs before the tool executes: load passport + policy → evaluate rules (allowlists, blocked patterns, caps) → return ALLOW or DENY. Tool is only invoked if ALLOW.
  • Outcomes: Same three outcomes: block (DENY), require_approval (we support approval gates), log (every decision is logged; optional cloud = immutable audit).
  • Scopes: We currently gate tool calls (e.g. exec, git, file...
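The gate described above — load policy, evaluate rules, only invoke the tool on ALLOW — might look roughly like this. The rule names (allowedTools, blockedPatterns, maxArgLength) are hypothetical, not APort's or Shield.md's actual schema:

```javascript
// Hypothetical deterministic policy evaluation run before each tool call.
// Returns ALLOW, DENY, or REQUIRE_APPROVAL with no model in the loop.
function evaluate(policy, call) {
  if (policy.allowedTools && !policy.allowedTools.includes(call.tool)) {
    return { decision: "DENY", reason: "tool not in allowlist" };
  }
  for (const pattern of policy.blockedPatterns || []) {
    if (pattern.test(call.args)) {
      return { decision: "DENY", reason: `blocked pattern: ${pattern}` };
    }
  }
  if (policy.maxArgLength && call.args.length > policy.maxArgLength) {
    return { decision: "REQUIRE_APPROVAL", reason: "argument cap exceeded" };
  }
  return { decision: "ALLOW" };
}

const policy = {
  allowedTools: ["exec", "git"],
  blockedPatterns: [/curl\s+.*\|\s*sh/], // e.g. piping a download into a shell
  maxArgLength: 4096,
};

console.log(evaluate(policy, { tool: "exec", args: "ls -la" }).decision);         // ALLOW
console.log(evaluate(policy, { tool: "exec", args: "curl x.sh | sh" }).decision); // DENY
console.log(evaluate(policy, { tool: "browser", args: "open page" }).decision);   // DENY
```

The tool is invoked only when the decision is ALLOW; REQUIRE_APPROVAL would route to an approval gate, and every decision would be logged.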

@darfaz

@uchibeke Nice implementation — the passport/policy-pack pattern is a clean approach to the authorization side.

Worth noting these solve different layers of the same problem:

  • APort Guardrails: authorization — "is this agent allowed to do X?" (policy enforcement, capability gating, approval workflows)
  • ClawMoat: threat detection — "is this input/output malicious?" (prompt injection, PII leaks, credential exposure, data exfiltration)

Both are needed. An agent could be fully authorized to run shell commands (APort says ALLOW) but still receive a prompt injection that makes it curl secrets to an attacker (ClawMoat catches that).

The layering would be: ClawMoat scans the content → APort enforces the policy → tool executes. Detection before authorization before execution.

Would be interesting to explore making them composable — e.g. ClawMoat risk scores feeding into APort policy decisions.

Source: https://github.com/openclaw/openclaw/issues/12385

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20150 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20150 url: "https://github.com/openclaw/openclaw/issues/20150" tags: [github, openclaw, enhancement, r: moltbook]


[20150] [Feature]: reopen feature

URL: https://github.com/openclaw/openclaw/issues/20150 Labels: enhancement, r: moltbook

Issue content

Summary

Reopen this feature https://github.com/openclaw/openclaw/issues/19477

It has nothing to do with Moltbook, yet it was closed anyway.

@barnacle_bot this feature has nothing to do with moltbook

Problem to solve

See related feature

Proposed solution

See related feature

Alternatives considered

No response

Impact

See related feature

Evidence/examples

No response

Additional information

No response

Source: https://github.com/openclaw/openclaw/issues/20150

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19968 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 19968 url: "https://github.com/openclaw/openclaw/issues/19968" tags: [github, openclaw]


[19968] MM plugin: SIGUSR1 hot-reload breaks WebSocket session routing for group channels

URL: https://github.com/openclaw/openclaw/issues/19968

Issue content

Bug Description

After applying a config change via SIGUSR1 (hot-reload), the Mattermost WebSocket monitor continues to receive messages but routes them to incorrect sessions, causing replies to be silently dropped (never delivered to the channel).

Steps to Reproduce

  1. Have an OpenClaw gateway running with channels.mattermost configured (chatmode: "onmessage")
  2. Bot is a member of multiple MM channels (e.g., office-general and a custom channel like protocol-evolution)
  3. Apply a config change via SIGUSR1 (e.g., changing chatmode from oncall to onmessage)
  4. Send a message in the custom channel

Expected Behavior

Bot receives the message, agent processes it, and the reply is delivered back to the same channel.

Actual Behavior

  • The gateway log shows config hot reload applied (channels.mattermost.chatmode) — config change is accepted
  • The MM WebSocket connection stays alive (TCP ESTABLISHED)
  • Messages are received and the agent runs successfully (generates reply text)
  • But the reply is never delivered to the channel — no delivered reply to channel:xxx log entry
  • The session lane changes from the correct session:agent:main:mattermost:channel:<channel_id> to a generic session:mm-<botname> with messageChannel= (empty)
  • The agent's reply text appears in gateway.log but is followed by silence instead of delivery confirmation

Workaround

A full gateway restart (launchctl kickstart -k / openclaw gateway stop && start) fixes the issue. Only SIGUSR1 hot-reload is affected.

Evidence from Logs

Before SIGUSR1 (working):

lane enqueue: lane=session:agent:main:mattermost:channel:4i7mryg4tffkijnamzrxjozbpc
[mattermost] delivered reply to channel:4i7mryg4tffkijnamzrxjozbpc

After SIGUSR1 (broken):

lane enqueue: lane=session:mm-rex queueSize=1
embedded run start: runId=xxx sessionId=mm-rex messageChannel=
# Agent generates reply "在,我这边在线。" but no delivery log follows

Environment

  • Open...

Source: https://github.com/openclaw/openclaw/issues/19968

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 13991 url: "h

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 13991 url: "https://github.com/openclaw/openclaw/issues/13991" tags: [github, openclaw]


[13991] [Proposal] Associative Hierarchical Memory: Human-Like Recall for Agent Memory Systems

URL: https://github.com/openclaw/openclaw/issues/13991

Source: https://github.com/openclaw/openclaw/issues/13991

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21127 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 21127 url: "https://github.com/openclaw/openclaw/issues/21127" tags: [github, openclaw]


[21127] Slack: threadId parameter not passed through to read action handler

URL: https://github.com/openclaw/openclaw/issues/21127

Issue content

Summary

When using the message tool with action: read on Slack, the threadId parameter is not passed through to the handler, making it impossible to read thread replies.

Current Behavior

  • Reading messages from a channel returns top-level messages only
  • Thread parent messages show reply_count, reply_users, etc., but there's no way to fetch the actual replies
  • When tagged in a thread, the agent only sees the single message they were tagged in, not the thread context

Expected Behavior

  • threadId parameter should be supported on the read action
  • When provided, it should fetch replies within that thread (using Slack's conversations.replies API)
  • Ideally, when an agent is mentioned in a thread, thread context should be automatically included

Root Cause

The threadId parameter handling exists in slack.actions.js but is not wired up in the main slack.js plugin file.
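A minimal sketch of the missing wiring, assuming a read handler that forwards threadId to Slack's conversations.replies when present. Function and parameter names are illustrative, not OpenClaw's actual slack.js code:

```javascript
// Hypothetical read handler: with a threadId, fetch the thread via Slack's
// conversations.replies; otherwise fall back to conversations.history.
async function readMessages(client, { channel, threadId, limit = 20 }) {
  if (threadId) {
    // conversations.replies returns the parent message plus its replies.
    const res = await client.conversations.replies({ channel, ts: threadId, limit });
    return res.messages;
  }
  const res = await client.conversations.history({ channel, limit });
  return res.messages;
}

// Stub client standing in for the Slack Web API, so the sketch is self-contained.
const stub = {
  conversations: {
    replies: async ({ ts }) => ({ messages: [{ ts, text: "parent" }, { text: "reply" }] }),
    history: async () => ({ messages: [{ text: "top-level" }] }),
  },
};

readMessages(stub, { channel: "C123", threadId: "1700000000.000100" })
  .then((msgs) => console.log(msgs.length)); // 2
```

With this in place, an agent tagged in a thread could re-read the thread's replies instead of seeing only the single message it was tagged in.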

Impact

This is a significant gap for Slack usage — agents can't participate meaningfully in threaded conversations because they lack context.

Environment

  • OpenClaw version: 2026.1.30
  • Channel: Slack (Socket Mode)

Source: https://github.com/openclaw/openclaw/issues/21127

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 12854 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 12854 url: "https://github.com/openclaw/openclaw/issues/12854" tags: [github, openclaw]


[12854] [Bug]: Extensions using CJS dependencies fail with "require is not defined"

URL: https://github.com/openclaw/openclaw/issues/12854

Source: https://github.com/openclaw/openclaw/issues/12854

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20202 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20202 url: "https://github.com/openclaw/openclaw/issues/20202" tags: [github, openclaw]


[20202] [Bug]: Duplicate announce injections flood main session after isolated cron completes

URL: https://github.com/openclaw/openclaw/issues/20202

Source: https://github.com/openclaw/openclaw/issues/20202

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20160 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20160 url: "https://github.com/openclaw/openclaw/issues/20160" tags: [github, openclaw]


[20160] [Bug]: Memory source (MEMORY.md + memory/*.md) not indexed after fresh rebuild - shows 0/26 while sessions index correctly

URL: https://github.com/openclaw/openclaw/issues/20160

Solution/comments

@nikolasdehor

Hitting this too — our WhatsApp agent (Devinho) relies entirely on MEMORY.md and a set of memory/*.md files (chat-nikolas-dm.md, chat-pai.md, etc.) for persistent context: contact names, conversation history, user preferences. After a fresh rebuild with rm main2.sqlite + openclaw memory index, we see the exact same output — sessions index fine, memory source stays at 0/N files.

The real-world impact is significant: the agent loses all of its persistent memory context after a rebuild. It effectively becomes a blank slate — forgets contacts, communication styles, ongoing topics — while the sessions index (which has the raw logs) rebuilds correctly. The disconnect between "raw sessions indexed" and "memory files indexed" makes it look like everything worked but the agent is actually missing its structured knowledge layer.

A couple of questions:

  1. Is there a workaround to force a memory-only reindex without deleting and rebuilding the full database? Something like `open...

@TarsAI-Agent

Thanks for confirming the issue and adding context on real-world impact.

To answer your questions:

  1. Memory-only reindex: Unfortunately, there's no --source memory flag currently. The available commands are just openclaw memory index (incremental) and openclaw memory index --force (full rebuild). A source-specific reindex would be a nice feature request.

  2. Rebuild command: There's no separate openclaw memory rebuild subcommand. The options are:

     • openclaw memory index --force — should do a full reindex, but as we've found, it doesn't properly populate the memory source
     • rm ~/.openclaw/memory/main2.sqlite && openclaw memory index — nuclear option, rebuilds from scratch (but still exhibits this bug for memory files)

  3. Workspace bootstrap vs BOOT.md: The "workspace bootstrap" I mentioned is the Project Context system. Files like MEMORY.md, memory/*.md, AGENTS.md, TOOLS.md, etc. in your workspace directory are loaded and injected directly i...

@TarsAI-Agent

Additional diagnostic finding:

I tried a workaround and discovered more about the bug behavior:

The --force flag alternates between sources

When running openclaw memory index --force repeatedly:

  1. First run: Indexes sessions (5662 files), wipes memory (0 files)
  2. Second run: Indexes memory (27 files), wipes sessions (0 files)
  3. Third run: Back to indexing sessions, wipes memory

The indexer appears to rebuild only ONE source per --force run, clearing the other.

Observed states

# After fresh rebuild (sessions indexed, memory wiped)
memory · 0/27 files · 0 chunks
sessions · 5662/5662 files · 10740 chunks

# After running --force again (memory indexed, sessions wiped!)
memory · 27/27 files · 286 chunks
sessions · 0/5665 files · 0 chunks

Implication

The core issue seems to be that --force rebuilds sources sequentially but each rebuild clears the entire index, so only the last-processed source survives. Both sources should b...
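A sketch of the implied fix: scope each rebuild's delete to its own source, so a --force pass over one source cannot wipe the other. The Map-based data model is a stand-in for illustration, not OpenClaw's actual SQLite schema:

```javascript
// Per-source reindex: delete only the rows belonging to the source being
// rebuilt, so rebuilding "sessions" never clears "memory" (and vice versa).
function reindexSource(index, source, files) {
  // Scoped delete — NOT a full index.clear(), which is effectively what the
  // alternating-wipe behavior observed above amounts to.
  for (const key of [...index.keys()]) {
    if (index.get(key).source === source) index.delete(key);
  }
  for (const file of files) {
    index.set(`${source}:${file}`, { source, file });
  }
}

const index = new Map();
reindexSource(index, "sessions", ["a.jsonl", "b.jsonl"]);
reindexSource(index, "memory", ["MEMORY.md", "memory/chat.md"]);
reindexSource(index, "sessions", ["a.jsonl", "b.jsonl", "c.jsonl"]); // re-run "--force"

// Both sources survive the second sessions rebuild.
console.log(index.size); // 5
```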

Source: https://github.com/openclaw/openclaw/issues/20160

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20277 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20277 url: "https://github.com/openclaw/openclaw/issues/20277" tags: [github, openclaw]


[20277] Systematic Code Quality Improvement & Performance Optimization Roadmap

URL: https://github.com/openclaw/openclaw/issues/20277

Source: https://github.com/openclaw/openclaw/issues/20277

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22299 url: "h

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22299 url: "https://github.com/openclaw/openclaw/issues/22299" tags: [github, openclaw]


[22299] Bug: Gateway loopback mode rejects internal subagent/session connections with 'pairing required'

URL: https://github.com/openclaw/openclaw/issues/22299

Source: https://github.com/openclaw/openclaw/issues/22299

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 10841 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 10841 url: "https://github.com/openclaw/openclaw/issues/10841" tags: [github, openclaw]


[10841] Reminders set for wrong times because agent doesn't know current time

URL: https://github.com/openclaw/openclaw/issues/10841

Source: https://github.com/openclaw/openclaw/issues/10841

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20749 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20749 url: "https://github.com/openclaw/openclaw/issues/20749" tags: [github, openclaw]


[20749] [Bug]: Slack provider crashes on startup in v2026.2.17 - Cannot read properties of undefined (reading 'listeners')

URL: https://github.com/openclaw/openclaw/issues/20749

Issue content

Summary

After upgrading to v2026.2.17, the Slack provider crashes immediately on startup with Cannot read properties of undefined (reading 'listeners') in @slack/bolt v4.6.0's App.function() (line 283 of App.js). The provider enters a crash loop, exhausts all 10 auto-restart attempts, and Slack becomes completely unresponsive. The crash is triggered by the new streaming code (#9972, #18555) and happens before config is read, so channels.slack.streaming = false cannot mitigate it.

Steps to Reproduce

  1. Have a working Slack socket mode setup on v2026.2.15 (slash commands, DMs, channel allowlist)
  2. Upgrade to v2026.2.17: npm install -g [email protected]
  3. Restart the gateway
  4. Observe crash loop in logs:
[slack] [default] starting provider
[slack] [default] channel exited: Cannot read properties of undefined (reading 'listeners')
[slack] [default] auto-restart attempt 1/10 in 5s
...repeats identically through attempt 10/10...
[health-monitor] [slack:default] health-monitor: hit 3 restarts/hour limit, skipping

The crash happens within ~300ms of starting provider each time. No special config needed - vanilla socket mode setup triggers it.

Expected Behavior

Slack provider starts successfully and connects via socket mode, either with streaming enabled or gracefully falling back to non-streaming delivery if streaming initialization fails. The channels.slack.streaming = false config option should disable streaming before the Bolt app setup path that triggers the crash.

Actual Behavior

Slack provider crashes immediately on every startup attempt. The crash is in @slack/bolt v4.6.0 App.function() at line 283 where this.listeners.push(fn.getListeners()) is called with this being undefined - suggesting the method is called before the Bolt App is fully initialized or with a broken this binding (e.g., destructured before construction completes).

The provider exhausts all 10 auto-restart attempts (exponential backoff: 5s, 10s...
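The suspected broken-`this` pattern can be reproduced in isolation. This is an illustrative sketch, not @slack/bolt's actual internals:

```javascript
// Illustrative only — not Bolt's real code. A class method detached from its
// instance loses `this`, producing exactly the reported crash signature:
// "Cannot read properties of undefined (reading 'listeners')".
class App {
  constructor() {
    this.listeners = [];
  }
  register(fn) {
    this.listeners.push(fn); // requires `this` to be the App instance
  }
}

const app = new App();
app.register(() => {}); // correct: called as a method, `this` is bound to app

const detached = app.register; // broken: method reference without its instance
let error;
try {
  detached(() => {}); // class bodies are strict mode, so `this` is undefined here
} catch (e) {
  error = e;
}
console.log(error instanceof TypeError); // true
console.log(app.listeners.length);       // 1
```

If this is the failure mode, binding the method (or calling it before construction completes) in the streaming setup path would be the thing to audit.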

Source: https://github.com/openclaw/openclaw/issues/20749

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20759 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20759 url: "https://github.com/openclaw/openclaw/issues/20759" tags: [github, openclaw]


[20759] Discord DM Incident Follow-up: Verify Fixes A/B + Implement Issue #96 Lock Gate

URL: https://github.com/openclaw/openclaw/issues/20759

Issue content

Context

Incident 2026-02-19 identified three issues:

  • A. Discord Resume Loop — Fixed via ResilientGatewayPlugin
  • B. Compaction Safety Net — Fixed via pre-emptive 100% check in get-reply-run.ts
  • C. Issue #96 Lock Contention — PLAN.md exists but not implemented

Tasks

Phase 1: Verify Fixes A & B (Priority)

  1. Run codex review on both fix changesets
  2. Address any issues found in review
  3. Validate test coverage is sufficient
  4. Test manually if needed (simulate resume failures, 100% context scenarios)

Phase 2: Implement Issue #96 Lock Gate

Follow PLAN.md in Projects/Issues/:

  • Move compactionPendingVerification check before session lock acquisition
  • Implement getSessionEntryWithoutLock helper
  • Implement appendHeldMessage for messages arriving during compaction
  • Add tests for concurrent message handling during compaction

Success Criteria

  • Codex review passes on fixes A & B (or issues resolved)
  • Issue #96 implementation complete with tests
  • No message drops during compaction windows

Files

  • Fix A: src/discord/monitor/gateway-plugin.ts, src/discord/monitor/gateway-plugin.test.ts
  • Fix B: src/auto-reply/reply/get-reply-run.ts, src/auto-reply/reply/get-reply-run.compaction-gate.test.ts
  • Issue #96: src/auto-reply/reply/get-reply-run.ts (lock gate refactor)
  • Reference: Projects/Issues/PLAN.md, Projects/Issues/96.md

Related

  • Incident report: Projects/Issues/incident-2026-02-19-discord-dm.md
  • Fix reports: fix-report-discord-resume-loop.md, fix-report-compaction-safety-net.md

Source: https://github.com/openclaw/openclaw/issues/20759

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20359 url: "h

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20359 url: "https://github.com/openclaw/openclaw/issues/20359" tags: [github, openclaw]


[20359] [Bug] Subagent model override fails with 401 while main session works

URL: https://github.com/openclaw/openclaw/issues/20359

Issue content

Problem

When spawning subagents with model: "kimi-coding/k2p5", the subagent consistently gets HTTP 401: Invalid Authentication while the main session works fine with the same model.

Steps to Reproduce

  1. Configure default model as kimi-coding/k2p5
  2. Spawn a subagent: sessions_spawn(model="kimi-coding/k2p5", task="hi")
  3. Observe: Subagent gets 401, falls back to MiniMax
  4. Main session uses k2p5 successfully

Expected

Subagent should use the specified model (or at least the agent's configured model) without 401 errors.

Evidence

# Subagent history shows:
{ role: "assistant", provider: "kimi-coding", model: "k2p5", errorMessage: "401 Invalid Authentication" }

# But main session (same model) works fine

Environment

  • OpenClaw: 2026.2.15
  • Model: kimi-coding/k2p5
  • Provider: moonshot (openai-completions API)

Notes

  • Fallback to MiniMax-M2.5 works
  • Issue may be related to how subagents resolve auth tokens vs main session
  • Appears to be specific to the kimi-coding model; other models may also be affected

Source: https://github.com/openclaw/openclaw/issues/20359

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22057 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22057 url: "https://github.com/openclaw/openclaw/issues/22057" tags: [github, openclaw, bug]


[22057] [Bug]: Isolated cron job with announce replies in Slack thread not main channel

URL: https://github.com/openclaw/openclaw/issues/22057 Labels: bug

Issue content

Summary

Bug Report: Announce delivery inherits channel session thread context, causing cron posts to appear as thread replies

Cron jobs with delivery.mode: "announce" and delivery.to: "channel:" post as thread replies to the most recent threaded conversation in the target channel, instead of as top-level messages. This happens despite sessionTarget: "isolated" being set on the cron job.

Root Cause Analysis

Announce delivery is a two-phase process:

  1. Cron run — executes on an isolated session lane (session:agent::cron:). This correctly uses a fresh session with no thread context.
  2. Announce delivery — runs a second LLM call on the channel session (sessionId: ). This channel session carries deliveryContext.threadId from the most recent threaded interaction on the target channel, and the Slack chat.postMessage call inherits this thread_ts.

Evidence from logs showing the two phases:

# Phase 1: Isolated cron run (correct — fresh session)
lane=session:agent::cron:

# Phase 2: Announce delivery (inherits thread context from channel session)
runId=announce:v1:agent::cron:::
sessionId=   ← channel session, NOT the cron session

The channel session in sessions.json (agent::slack:channel:) contains:

{
  "deliveryContext": {
    "channel": "slack",
    "to": "channel:",
    "accountId": "default",
    "threadId": ""   // ← from last threaded conversation
  },
  "origin": { "threadId": "" ...
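One plausible fix direction, sketched under the assumption that announce delivery can simply drop the inherited threadId before posting. The helper name and IDs are hypothetical; field shapes follow the sessions.json excerpt above:

```javascript
// Hypothetical helper: build the announce delivery context from the channel
// session but drop the inherited threadId, so the cron post lands top-level.
function announceContext(channelSession) {
  const ctx = { ...channelSession.deliveryContext };
  delete ctx.threadId; // never inherit the last threaded conversation
  return ctx;
}

const channelSession = {
  deliveryContext: {
    channel: "slack",
    to: "channel:C123",       // made-up channel ID
    accountId: "default",
    threadId: "1760000000.000100", // left over from a prior threaded chat
  },
};

const ctx = announceContext(channelSession);
console.log(ctx.threadId); // undefined
console.log(ctx.channel);  // slack
```

Without a threadId, Slack's chat.postMessage has no thread_ts to attach, so the announce arrives as a top-level channel message.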

Solution/comments

@tyler6204

Resolved by #22223 (merge commit: fe57bea088c7ad8c9dcef1721d9490daedc5cf00), with related test hardening in #22274 (merge commit: 2dba150c1650c3ecd0cc79bd2ac4ed1b412dd4e8). Thanks for the detailed report!

Source: https://github.com/openclaw/openclaw/issues/22057

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20683 url: "h

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20683 url: "https://github.com/openclaw/openclaw/issues/20683" tags: [github, openclaw]


[20683] [Bug]: Control UI Insecure Auth Bypass Allows Token-Only Auth Over HTTP

URL: https://github.com/openclaw/openclaw/issues/20683

Source: https://github.com/openclaw/openclaw/issues/20683

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22301 url: "h

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22301 url: "https://github.com/openclaw/openclaw/issues/22301" tags: [github, openclaw]


[22301] Feature request: gateway.clientUrl config to decouple client URL from bind mode

URL: https://github.com/openclaw/openclaw/issues/22301

Source: https://github.com/openclaw/openclaw/issues/22301

Score: 6/10 — openclaw


[6/10] --- date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22241 url: "h

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22241 url: "https://github.com/openclaw/openclaw/issues/22241" tags: [github, openclaw, bug]


[22241] [Bug]: streamMode not recognized for Discord

URL: https://github.com/openclaw/openclaw/issues/22241 Labels: bug

Issue content

Summary

channels.discord.streamMode is an "unrecognized key", so gateway doesn't restart when setting this configuration.

Steps to reproduce

  1. Setup discord channel with streamMode: 'partial'.
"channels": {
    "discord": {
      "enabled": true,
      "groupPolicy": "open",
      "replyToMode": "first",
      "streamMode": "partial", // <<<< add this line to the configuration
      "token": "REDACTED"
    }
  }

Expected behavior

Configuration is accepted as documented here: https://docs.openclaw.ai/channels/discord#reply-tags-and-native-replies

Actual behavior

Error on Gateway restart:

Config invalid
File: ~/.openclaw/openclaw.json
Problem:
  - channels.discord: Unrecognized key: "streamMode"

OpenClaw version

2026.2.19-2

Operating system

macOS 15.5

Install method

No response

Logs, screenshots, and evidence


Impact and severity

Affected: Discord messaging
Severity: Minor, but annoying (cannot enable feature)
Frequency: always (cannot enable feature)
Consequences: Have to wait for the full response.

Additional information

No response

Solution / comments

@mauro1855

Upon further investigation, I see the PR was merged just 4 hours ago: https://github.com/openclaw/openclaw/pull/22111 The documentation seems updated, but the version is unreleased. I'll close this issue and await 2026.2.20

Source: https://github.com/openclaw/openclaw/issues/22241

Score: 6/10 — openclaw


[6/10] [22156] webchat: Conversation metadata visible to users

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22156 url: "https://github.com/openclaw/openclaw/issues/22156" tags: [github, openclaw, duplicate, close:duplicate, dedupe:child]


[22156] webchat: Conversation metadata visible to users

URL: https://github.com/openclaw/openclaw/issues/22156 Labels: duplicate, close:duplicate, dedupe:child

Issue body

Issue Description

The conversation metadata (untrusted metadata) is visible to users in the webchat interface, which appears before every message they send.

What's happening

Users see this block before every message they type:

Conversation info (untrusted metadata): {
  message_id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,
  sender: openclaw-control-ui
}
[Fri 2026-02-20 17:02 GMT-3] user_message_here

Expected Behavior

This metadata should be internal/hidden and only visible to the agent as context, not shown in the user-facing chat interface.

Current Behavior

The metadata is displayed to users, which can be:

  • Confusing (what is this JSON block?)
  • Unnecessary noise in the conversation flow
  • Not user-friendly

Environment

  • OpenClaw version: 2026.2.19-2 (45d9b20)
  • Interface: webchat
  • Channel: webchat

Suggested Fix

Hide the "Conversation info (untrusted metadata)" block from the user-facing chat display. It should only be injected as context for the agent, similar to how system messages are handled.

Notes

The metadata itself is correct and useful for the agent — it just shouldn't be visible to end users in the UI.

Solution / comments

@vincentkoc

Thanks for the report. Closing this as a duplicate of #22142.

This is the canonical fix for inbound metadata leaking into user-visible chat history. If this is a different failure mode, please say so and we’ll reopen it.

Source: https://github.com/openclaw/openclaw/issues/22156

Score: 6/10 — openclaw


[6/10] [20427] Google Drive Service Account upload fails: zero storage quota

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20427 url: "https://github.com/openclaw/openclaw/issues/20427" tags: [github, openclaw]


[20427] Google Drive Service Account upload fails: zero storage quota

URL: https://github.com/openclaw/openclaw/issues/20427

Issue body

Summary

Google Drive Service Account (SA) authentication for file uploads returns "storage quota exceeded" errors. Service Accounts have zero storage quota by default, preventing any file upload — even when converting to Google Docs native format.

Environment

  • OpenClaw version: 2026.2.15
  • Google SA: [email protected]
  • Drive Folder: Shared folder with SA as editor
  • Auth method: JWT (googleapis Node.js SDK)

Steps to Reproduce

  1. Configure Google Drive sync with Service Account credentials
  2. Attempt to upload any file to Google Drive via SA
  3. Observe error

Actual Behavior

Error: The user's Drive storage quota has been exceeded.

Even when uploading as Google Docs native format (mimeType: application/vnd.google-apps.document), the SA is treated as the file owner and has 0 bytes quota.

Expected Behavior

Files uploaded by SA to a shared folder should count against the folder owner's quota (or Google Workspace domain quota), not the SA's personal quota.

Workarounds

  • Domain-wide delegation — impersonate a real user account (requires Google Workspace admin)
  • Transfer ownership — upload then immediately transfer ownership to a real user via Drive API
  • Use a real user OAuth token — skip SA entirely, use OAuth2 refresh token for a real Google account
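The second workaround (upload, then immediately transfer ownership) can be sketched as below. `DriveLike`, `filesCreate`, and `permissionsCreate` are hypothetical stand-ins for the googleapis Drive v3 client methods (`drive.files.create`, `drive.permissions.create`), used here so the sequence is testable without real credentials:

```typescript
// Minimal stand-in for the Drive v3 client surface this workaround needs.
interface DriveLike {
  filesCreate(req: { name: string; parents: string[]; mimeType: string }): Promise<{ id: string }>;
  permissionsCreate(req: {
    fileId: string;
    transferOwnership: boolean;
    role: string;
    type: string;
    emailAddress: string;
  }): Promise<void>;
}

// Upload as the Service Account, then immediately hand ownership to a real
// user so the bytes count against that user's quota, not the SA's zero quota.
async function uploadAndTransferOwnership(
  drive: DriveLike,
  name: string,
  folderId: string,
  newOwnerEmail: string,
): Promise<string> {
  const file = await drive.filesCreate({
    name,
    parents: [folderId],
    mimeType: "application/vnd.google-apps.document",
  });
  await drive.permissionsCreate({
    fileId: file.id,
    transferOwnership: true, // ownership moves off the zero-quota SA
    role: "owner",
    type: "user",
    emailAddress: newOwnerEmail,
  });
  return file.id;
}
```

Note the window between the two calls: if the process dies after upload but before transfer, the file is stranded on the SA, which is one reason the issue leans toward the OAuth2 refresh token flow instead.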

Impact

  • Google Drive knowledge sync (outbound) is broken
  • Inbound sync (Drive → local) works fine (read-only doesn't need quota)
  • Reports and documents cannot be automatically pushed to Drive

Additional Context

The existing inbound sync (sync-gdrive.js) works because it only reads files from Drive. The issue is specifically with uploading/creating files via SA.

This is a Google platform limitation, not an OpenClaw bug per se, but worth documenting as it affects the recommended gdrive integration path. The docs could note this limitation and suggest OAuth2 refresh token flow instead of SA for bidirectional sync.

Source: https://github.com/openclaw/openclaw/issues/20427

Score: 6/10 — openclaw


[6/10] [22047] Regression: bind=lan breaks browser tool self-connection due to #20803 security check

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22047 url: "https://github.com/openclaw/openclaw/issues/22047" tags: [github, openclaw]


[22047] Regression: bind=lan breaks browser tool self-connection due to #20803 security check

URL: https://github.com/openclaw/openclaw/issues/22047

Issue body

Summary

bind: "lan" configuration causes the browser tool (and any Gateway self-connection) to fail with a SECURITY ERROR after the plaintext ws:// block introduced in #20803.

Affected versions

Confirmed broken after commit 9edec67a1 (fix(security): block plaintext WebSocket connections to non-loopback addresses).

Root cause

Two commits are in conflict:

  • b8c8130ef (#11448) fix(gateway): use LAN IP for WebSocket/probe URLs when bind=lan → buildGatewayConnectionDetails now generates ws://192.168.x.x:18789 when bind=lan
  • 9edec67a1 (#20803) fix(security): block plaintext WebSocket connections to non-loopback addresses → all plaintext ws:// connections to non-loopback addresses are now rejected

When bind: "lan" is set, localUrl becomes ws://<LAN_IP>:<port>. This passes the #11448 check but is then rejected by the #20803 security guard (isSecureWebSocketUrl returns false for non-loopback IPs).

Relevant code in src/gateway/call.ts:

const preferLan = bindMode === "lan";
const lanIPv4 = preferLan ? pickPrimaryLanIPv4() : undefined;
const localUrl =
  preferTailnet && tailnetIPv4
    ? `${scheme}://${tailnetIPv4}:${localPort}`
    : preferLan && lanIPv4
      ? `${scheme}://${lanIPv4}:${localPort}`   // <-- produces ws://192.168.x.x
      : `${scheme}://127.0.0.1:${localPort}`;

Impact

  • Any agent running on the same host as the Gateway with bind: "lan" cannot connect to its own Gateway via the browser tool or any callGateway code path.
  • Affected real-world use case: Chi agent on macmini trying to open NotebookLM via the browser tool.

Steps to reproduce

  1. Set gateway.bind: "lan" in openclaw.json
  2. Run any tool that calls buildGatewayConnectionDetails() (e.g. browser tool opening a URL)
  3. Observe: SECURITY ERROR: Gateway URL "ws://192.168.x.x:18789" uses plaintext ws:// to a non-loopback address.

Expected behavior

A self-connection from an agent running **on the same...

Solution / comments

@tyler6204

Fixed in commit 47f39797583186a0af0f34d22bf76d7cfbdf1a9f (merged to main).

buildGatewayConnectionDetails() now always resolves local self-connections to loopback (127.0.0.1) regardless of gateway.bind, so local CLI/tool/gateway calls no longer resolve to LAN/Tailscale IPs and no longer trigger the plaintext non-loopback security guard.
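A minimal sketch of the post-fix behavior, reusing names from the issue (`isSecureWebSocketUrl` and the loopback resolution inside `buildGatewayConnectionDetails`); the bodies are assumptions, not the actual src/gateway/call.ts code:

```typescript
// Hosts the #20803 guard considers loopback.
const LOOPBACK_HOSTS = new Set(["127.0.0.1", "localhost", "::1", "[::1]"]);

// Sketch of the security guard: TLS is always fine, plaintext ws:// only to loopback.
function isSecureWebSocketUrl(url: string): boolean {
  const u = new URL(url);
  if (u.protocol === "wss:") return true;
  return u.protocol === "ws:" && LOOPBACK_HOSTS.has(u.hostname);
}

// Sketch of the fix: local self-connections always resolve to loopback,
// regardless of gateway.bind, so they can never trip the guard above.
function resolveSelfConnectionUrl(port: number): string {
  return `ws://127.0.0.1:${port}`;
}
```

Under this model, `ws://192.168.x.x:18789` (the bind=lan URL) fails the guard while the loopback self-connection passes, which matches the described regression and fix.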

Source: https://github.com/openclaw/openclaw/issues/22047

Score: 6/10 — openclaw


[6/10] [20369] Feature: Telegram streaming responses (incremental message updates)

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20369 url: "https://github.com/openclaw/openclaw/issues/20369" tags: [github, openclaw]


[20369] Feature: Telegram streaming responses (incremental message updates)

URL: https://github.com/openclaw/openclaw/issues/20369

Issue body

Problem

Long responses arrive as one large message after full generation. No intermediate feedback beyond typing indicator.

Proposed Solution

Stream response chunks to Telegram by editing the message incrementally during generation:

  1. Send initial message on first chunk
  2. Edit message with accumulated text as chunks arrive
  3. Final edit with complete response

This significantly improves perceived latency and UX for longer responses.
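The three steps could be sketched roughly as follows; `Transport`, `StreamingReply`, and the one-second minimum edit interval are illustrative assumptions (Telegram's actual flood limits vary per chat), not OpenClaw code:

```typescript
// Pluggable transport so the throttling logic is testable; the two methods
// stand in for Telegram's sendMessage and editMessageText.
interface Transport {
  sendMessage(text: string): Promise<number>; // returns message_id
  editMessage(messageId: number, text: string): Promise<void>;
}

class StreamingReply {
  private buffer = "";
  private messageId: number | null = null;
  private lastEditAt = 0;

  constructor(private transport: Transport, private minEditIntervalMs = 1000) {}

  // Called for every chunk emitted by the response stream.
  async onChunk(chunk: string, now: number): Promise<void> {
    this.buffer += chunk;
    if (this.messageId === null) {
      this.messageId = await this.transport.sendMessage(this.buffer); // step 1
      this.lastEditAt = now;
    } else if (now - this.lastEditAt >= this.minEditIntervalMs) {
      await this.transport.editMessage(this.messageId, this.buffer); // step 2
      this.lastEditAt = now;
    } // else: keep accumulating to respect edit rate limits
  }

  // Called once generation completes.
  async finish(): Promise<void> {
    if (this.messageId !== null) {
      await this.transport.editMessage(this.messageId, this.buffer); // step 3
    }
  }
}
```

The final `finish()` edit is unconditional so the complete text always lands even when the last chunks arrived inside the throttle window.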

Context

  • Telegram Bot API supports editMessageText which can be used for streaming
  • Some bots use sendMessageDraft or repeated edits with rate limiting
  • Typing indicator (sendChatAction) is already sent but not enough for 10s+ generations

Environment

  • OpenClaw gateway handles Telegram channel
  • Would need hooks in the response streaming pipeline to emit partial updates

Source: https://github.com/openclaw/openclaw/issues/20369

Score: 6/10 — openclaw


[6/10] [22116] Webchat displays message metadata JSON with each message after 2026.2.19-2

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 22116 url: "https://github.com/openclaw/openclaw/issues/22116" tags: [github, openclaw, duplicate, close:duplicate, dedupe:child]


[22116] Webchat displays message metadata JSON with each message after 2026.2.19-2

URL: https://github.com/openclaw/openclaw/issues/22116 Labels: duplicate, close:duplicate, dedupe:child

Issue body

After updating to 2026.2.19-2, the webchat UI now displays message metadata (JSON envelope) alongside each message:

Conversation info (untrusted metadata): { "message_id": "...", "sender": "openclaw-control-ui" } [Fri 2026-02-20 21:28 GMT+3] status report

This started appearing today (Feb 20, 2026). Before the update, only the user's typed text was shown.

Expected: Only the message text should be visible, not the metadata envelope.

Solution / comments

@Mellowambience

On it — this is exactly the bug fixed by #21138 which I have open. Rebasing now onto the latest main and adding Fixes #22116 to the PR.

Root cause: buildInboundUserContextPrefix prepends metadata blocks to stored user message content; message-normalizer.ts renders them verbatim. Fix: stripInboundMetadata() utility strips the prefix at display time without touching storage. Zero-alloc fast path when no sentinel strings are present.
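A rough sketch of what such a display-time strip could look like. The sentinel string and timestamp shape are taken from the examples in these reports; the function body is a guess at the approach, not the actual #21138 implementation:

```typescript
// Sentinel that marks an inbound-metadata prefix, per the examples above.
const SENTINEL = "Conversation info (untrusted metadata):";

// Strip the metadata envelope at display time, leaving storage untouched.
function stripInboundMetadata(stored: string): string {
  if (!stored.startsWith(SENTINEL)) return stored; // fast path: no sentinel, no work
  const close = stored.indexOf("}");
  if (close === -1) return stored; // malformed prefix: show as-is rather than eat text
  let rest = stored.slice(close + 1).trimStart();
  // Drop a leading "[Fri 2026-02-20 21:28 GMT+3]"-style timestamp, if present.
  if (rest.startsWith("[")) {
    const end = rest.indexOf("]");
    if (end !== -1) rest = rest.slice(end + 1).trimStart();
  }
  return rest;
}
```

Keeping the strip in the renderer (rather than rewriting stored history) means the agent still sees the metadata as context, which is the behavior both reports ask for.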

@vincentkoc

Thanks for the report. Closing this as a duplicate of #22142.

This is the canonical fix for inbound metadata leaking into user-visible chat history. If this is a different failure mode, please say so and we’ll reopen it.

Source: https://github.com/openclaw/openclaw/issues/22116

Score: 6/10 — openclaw


[6/10] [22300] [Bug]: Telegram polling consumes messages but never processes them — bot never responds (macOS, Apple Silicon)

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 22300 url: "https://github.com/openclaw/openclaw/issues/22300" tags: [github, openclaw]


[22300] [Bug]: Telegram polling consumes messages but never processes them — bot never responds (macOS, Apple Silicon)

URL: https://github.com/openclaw/openclaw/issues/22300

Source: https://github.com/openclaw/openclaw/issues/22300

Score: 6/10 — openclaw


[6/10] [20443] WORKFLOW_AUTO.md referenced by Post-Compaction Audit but never created or documented

Original


date: 2026-02-18 source: github repo: openclaw/openclaw issue: 20443 url: "https://github.com/openclaw/openclaw/issues/20443" tags: [github, openclaw]


[20443] WORKFLOW_AUTO.md referenced by Post-Compaction Audit but never created or documented

URL: https://github.com/openclaw/openclaw/issues/20443

Issue body

Problem

After context compaction, agents receive this system message:

⚠️ Post-Compaction Audit: The following required startup files were not read after context reset:
  - WORKFLOW_AUTO.md
  - memory/\d{4}-\d{2}-\d{2}.md

Please read them now using the Read tool before continuing. This ensures your operating protocols are restored after memory compaction.

WORKFLOW_AUTO.md is hardcoded in src/auto-reply/reply/post-compaction-audit.ts as part of DEFAULT_REQUIRED_READS, but:

  1. Never auto-generated — not created by openclaw setup, openclaw onboard, or any bootstrap process
  2. Not documented — not mentioned in docs/concepts/agent-workspace.md or any other docs
  3. No config override — hardcoded with a comment // extensible to config later that was never implemented
  4. No template — no guidance on what the file should contain

Impact

Every compaction triggers a warning for a file that doesn't exist. The agent either gets a read error or wastes a tool call on nothing. This happens on every agent workspace out of the box.

Suggested fix

Either:

  • A) Add WORKFLOW_AUTO.md to the bootstrap file set with a sensible default template (like other workspace files)
  • B) Make DEFAULT_REQUIRED_READS configurable and remove WORKFLOW_AUTO.md from the hardcoded default
  • C) Document the file in agent-workspace.md so users know to create it manually

Ideally A + B.
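Option B could be as small as a config fallback; the `requiredReads` key and `resolveRequiredReads` helper are hypothetical names, sketched only to show the shape of the change:

```typescript
// Current hardcoded default from post-compaction-audit.ts, per the issue.
const DEFAULT_REQUIRED_READS = ["WORKFLOW_AUTO.md", "memory/\\d{4}-\\d{2}-\\d{2}.md"];

// Hypothetical config override: an explicit list (even an empty one) wins,
// otherwise fall back to the hardcoded default.
function resolveRequiredReads(config?: { requiredReads?: string[] }): string[] {
  return config?.requiredReads ?? DEFAULT_REQUIRED_READS;
}
```

An empty configured list would then silence the audit entirely for workspaces that never create WORKFLOW_AUTO.md.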

Source: https://github.com/openclaw/openclaw/issues/20443

Score: 6/10 — openclaw


[6/10] [8714] [Bug]: Custom OpenAI-compatible provider shows 'Cannot read properties of undefined (reading 0)' before response

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 8714 url: "https://github.com/openclaw/openclaw/issues/8714" tags: [github, openclaw]


[8714] [Bug]: Custom OpenAI-compatible provider shows 'Cannot read properties of undefined (reading 0)' before response

URL: https://github.com/openclaw/openclaw/issues/8714

Source: https://github.com/openclaw/openclaw/issues/8714

Score: 6/10 — openclaw


[6/10] [18974] [Bug] Telegram DM topics keep breaking: conflicting fixes suppress message_thread_id for private chats

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 18974 url: "https://github.com/openclaw/openclaw/issues/18974" tags: [github, openclaw]


[18974] [Bug] Telegram DM topics keep breaking: conflicting fixes suppress message_thread_id for private chats

URL: https://github.com/openclaw/openclaw/issues/18974

Solution / comments

@Lukavyi

Full timeline of DM topic breakages

This isn't a one-off — it's a recurring pattern over the past month. Every ~1-2 weeks someone merges a fix that breaks DM topics again.

Timeline:

  • Jan 12 ff292e67c fix typing indicator in General forum topic
  • Jan 15 6146acbb6 separate thread params for typing vs messages
  • Jan 16 fe8b28cdd fix General topic messages (#848)
  • Jan 24 ac45c8b40 preserve topic in sub-agent announcements
  • Jan 25 06a7e1e8c threaded conversation support (#1597)
  • Jan 28 9154971 BROKE IT: ignore message_thread_id for non-forum group sessions — but DMs aren't isForum, so all DM thread routing died. User @lukavyi found and fixed this, merged as PR #3368.
  • Feb 5 8860d2ed7 fix: preserve DM topic threadId in deliveryContext — fixing fallout from Jan 28
  • Feb 5 f2c5c847b fix: preserve telegram DM topic threadId (#9039)
  • Feb 6 eef247b7a fix: auto-inject Telegram forum topic threadId in message tool ...

@Lukavyi

Suggestion: add a review guardrail to AGENTS.md

OpenClaw uses AGENTS.md as the single source of truth for AI-assisted PR review and merge guidelines. Adding a specific guardrail there would catch this pattern before merge.

Proposed addition to the PR Workflow or Coding Style section:

## Telegram DM Topics Guardrail

- Telegram private chats (1:1 with a bot) **support forum topics** since Bot API 9.3 (Dec 31, 2025).
- The `has_topics_enabled` field on the `User` class indicates this.
- **NEVER** suppress `message_thread_id` based solely on `chatId > 0` or `scope === "dm"`.
- The correct check: if private chat AND has_topics_enabled → preserve thread_id (treat as forum).
- If private chat AND !has_topics_enabled → suppress thread_id.
- Before merging any PR that touches `message_thread_id`, `buildTelegramThreadParams`, `sendMessageTelegram`, or thread routing in `src/telegram/`: verify it does NOT break DM topic routing.
- Reference: https://core.telegram.org/...
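The decision rule the guardrail describes reduces to a small function. The `InboundChat` shape and `resolveThreadId` name are assumptions loosely modeled on Telegram's Chat/User objects; `has_topics_enabled` is the Bot API field the comment cites:

```typescript
// Minimal chat shape for routing decisions; fields mirror Telegram's API names.
interface InboundChat {
  type: "private" | "group" | "supergroup" | "channel";
  is_forum?: boolean;           // forum supergroups
  has_topics_enabled?: boolean; // on the peer User for private chats
}

// Preserve or suppress message_thread_id per the guardrail:
// private + topics enabled → preserve; private without topics → suppress;
// groups → preserve only when the chat is a forum.
function resolveThreadId(chat: InboundChat, threadId?: number): number | undefined {
  if (chat.type === "private") {
    return chat.has_topics_enabled ? threadId : undefined;
  }
  return chat.is_forum ? threadId : undefined;
}
```

The antipattern from the timeline (suppressing based on `chatId > 0` alone) never consults `has_topics_enabled`, which is exactly the branch this sketch makes explicit.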

@obviyus

Thanks for the detailed assessment. I've fixed the bug and added tests / comments so that no other agents try to "solve" it again. Will merge the PR shortly.

@Lukavyi

Thanks for the quick fix! 🙏

One more suggestion to prevent this from recurring: consider adding a **Telegram DM Topics guardrail** to the PR review flow in `AGENTS.md`. Something like:

```markdown
## Telegram DM Topics Guardrail

- Telegram private chats (1:1 with bots) support forum topics since Bot API 9.3 (Dec 31, 2025).
- `has_topics_enabled` on the User class indicates when this is active.
- NEVER suppress message_thread_id based solely on chatId > 0 or scope === "dm".
- Before merging any PR touching message_thread_id, buildTelegramThreadParams,
  or thread routing in src/telegram/: verify it does NOT break DM topic routing.
- Ref: https://core.telegram.org/bots/api-changelog#december-31-2025
```

Since AI agents read AGENTS.md before reviewing PRs, this would automatically catch the chatId > 0 → suppress antipattern during review — so it doesn't get merged a 4th time.

@mysteriousHerb

Similarly, I think the cron job announcement to a Telegram topic/thread is also often broken by updates. Possibly still broken; I haven't tried it much since I can't even message in a thread as of now :P

Source: https://github.com/openclaw/openclaw/issues/18974

Score: 6/10 — openclaw


[6/10] [11014] [Feature]: Add skill/extension security scanning pipeline to detect malicious skills before execution

Original


date: 2026-02-21 source: github repo: openclaw/openclaw issue: 11014 url: "https://github.com/openclaw/openclaw/issues/11014" tags: [github, openclaw]


[11014] [Feature]: Add skill/extension security scanning pipeline to detect malicious skills before execution

URL: https://github.com/openclaw/openclaw/issues/11014

Source: https://github.com/openclaw/openclaw/issues/11014

Score: 6/10 — openclaw


[6/10] [1126] OpenAI Responses reasoning item replay causes 400 (missing following item)

Original


date: 2026-02-20 source: github repo: openclaw/openclaw issue: 1126 url: "https://github.com/openclaw/openclaw/issues/1126" tags: [github, openclaw]


[1126] OpenAI Responses reasoning item replay causes 400 (missing following item)

URL: https://github.com/openclaw/openclaw/issues/1126

Source: https://github.com/openclaw/openclaw/issues/1126

Score: 6/10 — openclaw


[6/10] [20517] Telegram DM inbound messages received but never trigger agent runs

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 20517 url: "https://github.com/openclaw/openclaw/issues/20517" tags: [github, openclaw]


[20517] Telegram DM inbound messages received but never trigger agent runs

URL: https://github.com/openclaw/openclaw/issues/20517

Issue body

Environment

  • OpenClaw: 2026.2.17
  • OS: Linux 6.17.0-14-generic (x64)
  • Node: v24.13.0
  • Telegram lib: grammY (long polling)

Description

After swapping Telegram bot tokens (deleted old bot, created new one), inbound Telegram DMs are received by the gateway (raw-update logged with full message payload) but never trigger an agent run. Outbound messages via openclaw message send and the message tool work perfectly.

What we tried

  • dmPolicy: "pairing" — no run triggered
  • dmPolicy: "open" with allowFrom: ["*"] — no run triggered
  • dmPolicy: "allowlist" with explicit numeric user ID — no run triggered
  • Full gateway stop/start (not just SIGUSR1) — no change
  • openclaw doctor --non-interactive — reports Telegram OK
  • openclaw pairing list telegram — no pending requests
  • Multiple message types: plain text, /commands — all silently dropped

Log evidence

# Messages ARE received:
telegram update: {"update_id":79897920,"message":{"message_id":19,"from":{"id":8531938357,...},"text":"Yes!"}}

# Route resolution shows peer=none at startup:
[routing] resolveAgentRoute: channel=telegram accountId=default peer=none guildId=none teamId=none bindings=0

# No embedded run start for any Telegram message. Session goes idle with queueDepth=0.
# Outbound sends work fine (Message ID confirmed).

Key observations

  • Telegram worked earlier in the day with the SAME OpenClaw version (different bot token)
  • The old bot token had 401 errors (8 auto-restart attempts) before the new token was configured
  • Two heartbeat-triggered runs (messageChannel=telegram) DID fire successfully at xx:00 and xx:30
  • openclaw status warns: "Telegram DMs share the main session" and "CRITICAL: Telegram DMs are open"
  • peer=none in route resolution seems suspicious — may indicate the Telegram user is never being resolved as a known peer

Expected behavior

Inbound Telegram DMs from an allowlisted user ID should trigger an agent run on the main session.

Wor...

Source: https://github.com/openclaw/openclaw/issues/20517

Score: 6/10 — openclaw


[6/10] [4892] qwen3/Ollama models: Streaming tool calls incompatibility with parseStreamingJson

Original


date: 2026-02-19 source: github repo: openclaw/openclaw issue: 4892 url: "https://github.com/openclaw/openclaw/issues/4892" tags: [github, openclaw]


[4892] qwen3/Ollama models: Streaming tool calls incompatibility with parseStreamingJson

URL: https://github.com/openclaw/openclaw/issues/4892

Source: https://github.com/openclaw/openclaw/issues/4892

Score: 6/10 — openclaw


Related notes

  • [[Alibaba]]
  • [[NVIDIA]]
  • [[INDEX]]