{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "OpenClaw Chronicles",
  "home_page_url": "https://openclawchronicles.com",
  "feed_url": "https://openclawchronicles.com/feed.json",
  "description": "OpenClaw Chronicles covers OpenClaw releases, security alerts, migration guides, tutorials, and ecosystem news.",
  "icon": "https://openclawchronicles.com/icon-512.png",
  "favicon": "https://openclawchronicles.com/favicon.png",
  "authors": [
    {
      "name": "Cody",
      "url": "https://openclawchronicles.com/about/"
    }
  ],
  "language": "en-US",
  "items": [
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-25-memory-session-search/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-25-memory-session-search/",
      "title": "OpenClaw Memory Can Now Search Session Transcripts",
      "summary": "OpenClaw memory-core gains session transcript search via corpus=sessions, giving agents access to past conversation history alongside long-term memory files.",
      "content_text": "OpenClaw agents have long had two kinds of memory: the long-term knowledge stored in `MEMORY.md` and indexed memory files, and the in-context transcript of the current conversation. What they could not do was **search past session transcripts** the way they search memory files. [PR #70761](https://github.com/openclaw/openclaw/pull/70761), merged today, bridges that gap.\n\n## The New `corpus=sessions` Option\n\nThe `memory_search` tool — the primary way agents surface relevant information from memory — now accepts a `corpus` parameter with a new value:\n\n- **`corpus=sessions`** — search past session transcripts\n- **`corpus=memory`** — search long-term indexed memory files (now explicitly named)\n\nThe default behavior is unchanged, so existing agent prompts and skill files do not need updates. Agents that do not specify `corpus` continue to work exactly as before.\n\n## How Session Visibility Works\n\nNot every session is fair game for every search. The PR introduces a **session visibility layer** that filters results based on the requester's context:\n\n- By default, agents see transcripts from sessions they participated in, including their own history.\n- With `visibility=all`, cross-agent transcript access is enabled — useful in multi-agent setups where an orchestrator needs to review what a subagent discussed.\n- The filter runs **post-query**, after the FTS and vector search stages, as a defense-in-depth measure against cross-session data leakage.\n\nThe visibility logic lives in a new `filterMemorySearchHitsBySessionVisibilityGuard` function, which loads the combined session store and applies the guard before results are returned.\n\n## Plugin SDK Export\n\nSession search visibility APIs are now exported from the plugin SDK (`plugin-sdk/memory-core`), meaning external plugins can build their own session-aware memory queries. The export sync (`pnpm plugin-sdk:sync-exports`) was run as part of the PR to keep package boundaries clean and contract tests green.\n\n## Under the Hood\n\nThe implementation by [@nefainl](https://github.com/nefainl), reviewed by [@obviyus](https://github.com/obviyus), touches several layers:\n\n- **`memory-core`** — new stem resolver for session transcript hits, `corpus=sessions` routing, source-scoped FTS and vector search\n- **`gateway`** — `loadCombinedSessionStoreForGateway` extracted to `config/sessions` for reuse across memory and gateway paths\n- **`scripts`** — plugin-sdk export sync to keep the manifest current\n\nThe QMD (quantized memory document) path also received an oversample-then-filter treatment for single-source recall, keeping result quality high when the session corpus is large. The implementation oversamples before applying visibility filters, then clamps to the requested `maxResults`, so you get good diversity without leaking out-of-scope sessions.\n\n## Practical Impact\n\nThis is a meaningful upgrade for anyone building persistent, context-aware agents. An agent can now answer questions like \"What did we discuss about project X last week?\" by reaching into past session transcripts — not just `MEMORY.md`. Combined with the existing long-term memory recall path, OpenClaw agents now have a richer, session-aware memory graph to draw from.\n\nA few example use cases this unlocks:\n\n- **Meeting recap agents** that can reference previous meeting transcripts without you manually pasting them in\n- **Support bots** that can check if a user's issue was discussed and partially resolved in a prior session\n- **Personal assistants** that can remind you what you asked them to do yesterday, even after a session restart\n\nExpect this in the upcoming `v2026.4.24` release. No configuration changes are needed — the new `corpus=sessions` option is available immediately to any skill or prompt that calls `memory_search`.\n\n**Source:** [PR #70761 on GitHub](https://github.com/openclaw/openclaw/pull/70761)",
      "content_html": "<p>OpenClaw agents have long had two kinds of memory: the long-term knowledge stored in <code>MEMORY.md</code> and indexed memory files, and the in-context transcript of the current conversation. What they could not do was <strong>search past session transcripts</strong> the way they search memory files. <a href=\"https://github.com/openclaw/openclaw/pull/70761\">PR #70761</a>, merged today, bridges that gap.</p><h2>The New <code>corpus=sessions</code> Option</h2><p>The <code>memory_search</code> tool — the primary way agents surface relevant information from memory — now accepts a <code>corpus</code> parameter with a new value:</p><ul><li><strong><code>corpus=sessions</code></strong> — search past session transcripts</li><li><strong><code>corpus=memory</code></strong> — search long-term indexed memory files (now explicitly named)</li></ul><p>The default behavior is unchanged, so existing agent prompts and skill files do not need updates. Agents that do not specify <code>corpus</code> continue to work exactly as before.</p><h2>How Session Visibility Works</h2><p>Not every session is fair game for every search. The PR introduces a <strong>session visibility layer</strong> that filters results based on the requester's context:</p><ul><li>By default, agents see transcripts from sessions they participated in, including their own history.</li><li>With <code>visibility=all</code>, cross-agent transcript access is enabled — useful in multi-agent setups where an orchestrator needs to review what a subagent discussed.</li><li>The filter runs <strong>post-query</strong>, after the FTS and vector search stages, as a defense-in-depth measure against cross-session data leakage.</li></ul><p>The visibility logic lives in a new <code>filterMemorySearchHitsBySessionVisibilityGuard</code> function, which loads the combined session store and applies the guard before results are returned.</p><h2>Plugin SDK Export</h2><p>Session search visibility APIs are now exported from the plugin SDK (<code>plugin-sdk/memory-core</code>), meaning external plugins can build their own session-aware memory queries. The export sync (<code>pnpm plugin-sdk:sync-exports</code>) was run as part of the PR to keep package boundaries clean and contract tests green.</p><h2>Under the Hood</h2><p>The implementation by <a href=\"https://github.com/nefainl\">@nefainl</a>, reviewed by <a href=\"https://github.com/obviyus\">@obviyus</a>, touches several layers:</p><ul><li><strong><code>memory-core</code></strong> — new stem resolver for session transcript hits, <code>corpus=sessions</code> routing, source-scoped FTS and vector search</li><li><strong><code>gateway</code></strong> — <code>loadCombinedSessionStoreForGateway</code> extracted to <code>config/sessions</code> for reuse across memory and gateway paths</li><li><strong><code>scripts</code></strong> — plugin-sdk export sync to keep the manifest current</li></ul><p>The QMD (quantized memory document) path also received an oversample-then-filter treatment for single-source recall, keeping result quality high when the session corpus is large. The implementation oversamples before applying visibility filters, then clamps to the requested <code>maxResults</code>, so you get good diversity without leaking out-of-scope sessions.</p><h2>Practical Impact</h2><p>This is a meaningful upgrade for anyone building persistent, context-aware agents. An agent can now answer questions like \"What did we discuss about project X last week?\" by reaching into past session transcripts — not just <code>MEMORY.md</code>. Combined with the existing long-term memory recall path, OpenClaw agents now have a richer, session-aware memory graph to draw from.</p><p>A few example use cases this unlocks:</p><ul><li><strong>Meeting recap agents</strong> that can reference previous meeting transcripts without you manually pasting them in</li><li><strong>Support bots</strong> that can check if a user's issue was discussed and partially resolved in a prior session</li><li><strong>Personal assistants</strong> that can remind you what you asked them to do yesterday, even after a session restart</li></ul><p>Expect this in the upcoming <code>v2026.4.24</code> release. No configuration changes are needed — the new <code>corpus=sessions</code> option is available immediately to any skill or prompt that calls <code>memory_search</code>.</p><p><strong>Source:</strong> <a href=\"https://github.com/openclaw/openclaw/pull/70761\">PR #70761 on GitHub</a></p>",
      "date_published": "2026-04-25T08:05:00.000Z",
      "date_modified": "2026-04-25T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-25-memory-session-search.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-25-whatsapp-voice-note-transcription/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-25-whatsapp-voice-note-transcription/",
      "title": "OpenClaw Now Transcribes WhatsApp Voice Notes Automatically",
      "summary": "OpenClaw now auto-transcribes WhatsApp DM voice notes before routing them to your AI agent, turning spoken messages into agent-readable text automatically.",
      "content_text": "WhatsApp voice notes are the one message type that used to stop OpenClaw cold. You could send a text, an image, a document — but the moment someone dropped a voice note into the chat, the agent would see an unprocessable audio attachment and leave it at that. That changes today.\n\nA new pull request from community contributor [@rogerdigital](https://github.com/rogerdigital) — [PR #64120](https://github.com/openclaw/openclaw/pull/64120) — landed in the main branch on April 25, 2026, adding **preflight audio transcription for WhatsApp DM voice notes**.\n\n## How It Works\n\nThe feature hooks into OpenClaw's WhatsApp auto-reply monitor at the message-processing stage. When an inbound DM contains audio, the system now:\n\n1. **Transcribes the audio first** — before the message ever reaches your configured agent — using the speech-to-text provider wired into your OpenClaw installation.\n2. **Replaces the audio body** with the resulting transcript, so the agent receives clean text as its input.\n3. **Emits a `message:transcribed` hook** internally, allowing plugins and downstream pipelines to react to or log the transcription event.\n\nThe change is scoped to five files inside `extensions/whatsapp/src/auto-reply/monitor/`, keeping the blast radius small and platform-specific.\n\n## Why This Matters\n\nVoice notes are the default communication style in many WhatsApp-heavy regions and workflows. If your agent handles customer support, personal tasks, or family coordination over WhatsApp, a significant chunk of inbound messages were previously invisible to it. This PR closes that gap.\n\nIt also pairs with [PR #61008](https://github.com/openclaw/openclaw/pull/61008) — which landed Telegram voice-note transcription in DMs earlier this month — bringing OpenClaw's two most popular messaging channels to feature parity on audio handling.\n\n## Security Considerations Worth Knowing\n\nOpenClaw's automated Aisle security scanner flagged two medium-severity concerns before this PR merged. They don't block the feature, but they're worth understanding if you run a shared or production instance.\n\n**Unbounded transcript length (CWE-400)**\n\nThe audio transcript is injected into the agent context without a size cap. An adversarially long audio clip or an unusually verbose STT provider could generate an oversized transcript, causing prompt-bloat, elevated token costs, or slow processing. The reviewer notes recommend enforcing `maxMediaTextChunkLimit` before injection — a fix likely to land in a follow-up PR.\n\n**Transcript flows into session history by default (CWE-359)**\n\nVoice transcripts now flow into `finalizeInboundContext` and persist in session history like any other message body. If your users send sensitive content — financial details, medical information — the transcript will appear in your agent's session log. The recommended mitigation is a config flag such as `messages.whatsapp.storeTranscripts` to make transcript persistence opt-in rather than on by default.\n\n## What to Expect Next\n\nThis feature is queued for the upcoming release (currently staging as `2026.4.24 Unreleased` in the changelog). No configuration changes are required — once your OpenClaw installation updates, inbound WhatsApp voice notes in DMs will be transcribed automatically.\n\nIf you use OpenClaw for WhatsApp automation, this is the quality-of-life upgrade you have been waiting for. Send a voice note, get a real reply.\n\n**Source:** [PR #64120 on GitHub](https://github.com/openclaw/openclaw/pull/64120)",
      "content_html": "<p>WhatsApp voice notes are the one message type that used to stop OpenClaw cold. You could send a text, an image, a document — but the moment someone dropped a voice note into the chat, the agent would see an unprocessable audio attachment and leave it at that. That changes today.</p><p>A new pull request from community contributor <a href=\"https://github.com/rogerdigital\">@rogerdigital</a> — <a href=\"https://github.com/openclaw/openclaw/pull/64120\">PR #64120</a> — landed in the main branch on April 25, 2026, adding <strong>preflight audio transcription for WhatsApp DM voice notes</strong>.</p><h2>How It Works</h2><p>The feature hooks into OpenClaw's WhatsApp auto-reply monitor at the message-processing stage. When an inbound DM contains audio, the system now:</p><ol><li><strong>Transcribes the audio first</strong> — before the message ever reaches your configured agent — using the speech-to-text provider wired into your OpenClaw installation.</li><li><strong>Replaces the audio body</strong> with the resulting transcript, so the agent receives clean text as its input.</li><li><strong>Emits a <code>message:transcribed</code> hook</strong> internally, allowing plugins and downstream pipelines to react to or log the transcription event.</li></ol><p>The change is scoped to five files inside <code>extensions/whatsapp/src/auto-reply/monitor/</code>, keeping the blast radius small and platform-specific.</p><h2>Why This Matters</h2><p>Voice notes are the default communication style in many WhatsApp-heavy regions and workflows. If your agent handles customer support, personal tasks, or family coordination over WhatsApp, a significant chunk of inbound messages were previously invisible to it. This PR closes that gap.</p><p>It also pairs with <a href=\"https://github.com/openclaw/openclaw/pull/61008\">PR #61008</a> — which landed Telegram voice-note transcription in DMs earlier this month — bringing OpenClaw's two most popular messaging channels to feature parity on audio handling.</p><h2>Security Considerations Worth Knowing</h2><p>OpenClaw's automated Aisle security scanner flagged two medium-severity concerns before this PR merged. They don't block the feature, but they're worth understanding if you run a shared or production instance.</p><p><strong>Unbounded transcript length (CWE-400)</strong></p><p>The audio transcript is injected into the agent context without a size cap. An adversarially long audio clip or an unusually verbose STT provider could generate an oversized transcript, causing prompt-bloat, elevated token costs, or slow processing. The reviewer notes recommend enforcing <code>maxMediaTextChunkLimit</code> before injection — a fix likely to land in a follow-up PR.</p><p><strong>Transcript flows into session history by default (CWE-359)</strong></p><p>Voice transcripts now flow into <code>finalizeInboundContext</code> and persist in session history like any other message body. If your users send sensitive content — financial details, medical information — the transcript will appear in your agent's session log. The recommended mitigation is a config flag such as <code>messages.whatsapp.storeTranscripts</code> to make transcript persistence opt-in rather than on by default.</p><h2>What to Expect Next</h2><p>This feature is queued for the upcoming release (currently staging as <code>2026.4.24 Unreleased</code> in the changelog). No configuration changes are required — once your OpenClaw installation updates, inbound WhatsApp voice notes in DMs will be transcribed automatically.</p><p>If you use OpenClaw for WhatsApp automation, this is the quality-of-life upgrade you have been waiting for. Send a voice note, get a real reply.</p><p><strong>Source:</strong> <a href=\"https://github.com/openclaw/openclaw/pull/64120\">PR #64120 on GitHub</a></p>",
      "date_published": "2026-04-25T08:00:00.000Z",
      "date_modified": "2026-04-25T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-25-whatsapp-voice-note-transcription.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-24-community-privateclaw-lilo/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-24-community-privateclaw-lilo/",
      "title": "OpenClaw Community: PrivateClaw TEEs and the Lilo Personal OS",
      "summary": "Two notable Show HN launches this week: PrivateClaw runs OpenClaw agents in AMD SEV-SNP confidential VMs, while Lilo builds a full personal OS on top.",
      "content_text": "Two interesting projects inspired by OpenClaw hit Hacker News today. One tackles the trust problem at the hardware layer. The other reimagines what a personal AI-powered OS could look like.\n\n## PrivateClaw: OpenClaw Agents Inside Confidential VMs\n\n**[Show HN: PrivateClaw – AI agents running in confidential VMs you can verify](https://news.ycombinator.com/item?id=47891569)**\n\nPrivateClaw starts from a pointed observation: hosted OpenClaw platforms today require you to trust them with plaintext. PrivateClaw's answer is to move the trust boundary to hardware.\n\nThe project runs OpenClaw agents inside Trusted Execution Environments (TEEs) backed by AMD's SEV-SNP standard. Each user gets a dedicated Confidential VM — no shared tenancy — with hardware-enforced memory encryption. The hypervisor cannot read guest memory. Inference also runs inside TEEs.\n\nWhat makes this particularly interesting is the verification story. PrivateClaw ships an [open-source CLI](https://github.com/lunal-dev/privateclaw-cli) that walks through five attestation steps:\n\n1. **SEV-SNP attestation** — validates a signed report from the AMD Secure Processor against AMD's root of trust\n2. **vTPM verification** — confirms the virtual TPM's endorsement key is bound to the CVM attestation\n3. **Host key binding** — verifies the SSH host key matches what's in the attestation report\n4. **Inference endpoint check** — confirms the inference proxy cert is bound to TEE measurements\n5. **Access control audit** — validates only your SSH key is authorized and the cloud guest agent is disabled\n\nThe architecture runs on Azure Confidential Compute for the CVM and inference gateway, powered by Confidential AI's TEE-backed vLLM deployment.\n\nIt's self-hostable in spirit — the verification tooling is fully open source — though the hosted tier starts free with a Pro plan at $69/month. Try it at `ssh privateclaw.dev`.\n\nThis is a genuinely novel approach to the trust problem that's been following OpenClaw deployments since the ClawHavoc incident. Whether TEE-backed agents become mainstream infrastructure or remain a niche security product is still an open question, but PrivateClaw is a real implementation worth watching.\n\n## Lilo: A Personal OS Built on OpenClaw Channels\n\n**[Show HN: Lilo – a self-hosted, open-source intelligent personal OS](https://news.ycombinator.com/item?id=47894947)**\n\nOn the lighter end, Lilo is a personal project that uses OpenClaw as a channel layer to build something bigger: a self-hosted personal operating system where your apps, files, AI assistant, and memories all live in one container.\n\nThe creator (@abi) built it to solve a specific frustration: wanting several small AI-powered personal apps (bookmarks, calorie tracker, TODO list) without the overhead of N separate deployments, auth configs, and URLs. Lilo wraps them all in a single container and lets an agent modify them directly — no code push required.\n\nThe OpenClaw connection is explicit in the submission: Lilo added multi-channel support (WhatsApp, email, Telegram) directly inspired by OpenClaw's approach. The demo in the Show HN — texting a photo of lunch to Lilo and having the calorie tracker update automatically — is a good illustration of why the channel layer matters.\n\nEach \"app\" inside Lilo is just an HTML file with filesystem API access and full agentic capabilities. Memory is handled via a \"LLM wiki\" style tree of Markdown files — a pattern that'll be familiar to OpenClaw users.\n\nLilo is alpha, self-hosted, bring-your-own-keys. The GitHub repo is at [github.com/abi/lilo](https://github.com/abi/lilo).\n\n---\n\nBoth projects represent different ends of the OpenClaw ecosystem spectrum: PrivateClaw is enterprise-grade infrastructure hardening; Lilo is personal computing reimagined. Worth bookmarking both as the space matures.",
      "content_html": "<p>Two interesting projects inspired by OpenClaw hit Hacker News today. One tackles the trust problem at the hardware layer. The other reimagines what a personal AI-powered OS could look like.</p><h2>PrivateClaw: OpenClaw Agents Inside Confidential VMs</h2><p><strong><a href=\"https://news.ycombinator.com/item?id=47891569\">Show HN: PrivateClaw – AI agents running in confidential VMs you can verify</a></strong></p><p>PrivateClaw starts from a pointed observation: hosted OpenClaw platforms today require you to trust them with plaintext. PrivateClaw's answer is to move the trust boundary to hardware.</p><p>The project runs OpenClaw agents inside Trusted Execution Environments (TEEs) backed by AMD's SEV-SNP standard. Each user gets a dedicated Confidential VM — no shared tenancy — with hardware-enforced memory encryption. The hypervisor cannot read guest memory. Inference also runs inside TEEs.</p><p>What makes this particularly interesting is the verification story. PrivateClaw ships an <a href=\"https://github.com/lunal-dev/privateclaw-cli\">open-source CLI</a> that walks through five attestation steps:</p><ol><li><strong>SEV-SNP attestation</strong> — validates a signed report from the AMD Secure Processor against AMD's root of trust</li><li><strong>vTPM verification</strong> — confirms the virtual TPM's endorsement key is bound to the CVM attestation</li><li><strong>Host key binding</strong> — verifies the SSH host key matches what's in the attestation report</li><li><strong>Inference endpoint check</strong> — confirms the inference proxy cert is bound to TEE measurements</li><li><strong>Access control audit</strong> — validates only your SSH key is authorized and the cloud guest agent is disabled</li></ol><p>The architecture runs on Azure Confidential Compute for the CVM and inference gateway, powered by Confidential AI's TEE-backed vLLM deployment.</p><p>It's self-hostable in spirit — the verification tooling is fully open source — though the hosted tier starts free with a Pro plan at $69/month. Try it at <code>ssh privateclaw.dev</code>.</p><p>This is a genuinely novel approach to the trust problem that's been following OpenClaw deployments since the ClawHavoc incident. Whether TEE-backed agents become mainstream infrastructure or remain a niche security product is still an open question, but PrivateClaw is a real implementation worth watching.</p><h2>Lilo: A Personal OS Built on OpenClaw Channels</h2><p><strong><a href=\"https://news.ycombinator.com/item?id=47894947\">Show HN: Lilo – a self-hosted, open-source intelligent personal OS</a></strong></p><p>On the lighter end, Lilo is a personal project that uses OpenClaw as a channel layer to build something bigger: a self-hosted personal operating system where your apps, files, AI assistant, and memories all live in one container.</p><p>The creator (@abi) built it to solve a specific frustration: wanting several small AI-powered personal apps (bookmarks, calorie tracker, TODO list) without the overhead of N separate deployments, auth configs, and URLs. Lilo wraps them all in a single container and lets an agent modify them directly — no code push required.</p><p>The OpenClaw connection is explicit in the submission: Lilo added multi-channel support (WhatsApp, email, Telegram) directly inspired by OpenClaw's approach. The demo in the Show HN — texting a photo of lunch to Lilo and having the calorie tracker update automatically — is a good illustration of why the channel layer matters.</p><p>Each \"app\" inside Lilo is just an HTML file with filesystem API access and full agentic capabilities. Memory is handled via a \"LLM wiki\" style tree of Markdown files — a pattern that'll be familiar to OpenClaw users.</p><p>Lilo is alpha, self-hosted, bring-your-own-keys. The GitHub repo is at <a href=\"https://github.com/abi/lilo\">github.com/abi/lilo</a>.</p><p>---</p><p>Both projects represent different ends of the OpenClaw ecosystem spectrum: PrivateClaw is enterprise-grade infrastructure hardening; Lilo is personal computing reimagined. Worth bookmarking both as the space matures.</p>",
      "date_published": "2026-04-24T23:10:00.000Z",
      "date_modified": "2026-04-24T23:10:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-24-community-privateclaw-lilo.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-24-youtube-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-24-youtube-roundup/",
      "title": "OpenClaw on YouTube This Week: Rebuilds, Updates, and Voice Agents",
      "summary": "This week's best OpenClaw YouTube content covers the v4.22 update breakdown, rebuilding an entire stack with Claude Code, and voice agents going production.",
      "content_text": "It was a busy week on YouTube for the OpenClaw community. From deep-dives on the latest feature release to creative rebuild experiments and a look at where voice agents are heading, here's what's worth watching.\n\n## \"OpenClaw 4.22 Update IS INSANE – Here's Why\"\n\nThe most-watched new upload this week is a breakdown of the v4.22 release that shipped earlier this week. The creator walks through what changed, why it matters, and which features have the most day-to-day impact for real deployments. If you've been skimming changelogs, this is the fastest way to get up to speed on what's actually useful versus what's noise.\n\n**Watch:** [youtube.com/watch?v=FM5-R4VPArw](https://www.youtube.com/watch?v=FM5-R4VPArw)\n\n## \"I Rebuilt My OpenClaw Stack Using Only Claude Code\"\n\nA creator documents what happens when you hand your entire OpenClaw configuration over to Claude Code and step back. The experiment tests how well an AI coding agent can navigate an existing OpenClaw setup, modify skills, and handle the edge cases that trip up manual configs. The result is part tutorial, part stress test — worth watching whether you use Claude Code or not.\n\n**Watch:** [youtube.com/watch?v=dEe9XBqzK10](https://www.youtube.com/watch?v=dEe9XBqzK10)\n\n## \"Voice Agents Just Got Real\"\n\nThis video takes a broader look at the state of voice agents and where OpenClaw fits into the stack. The thesis: the combination of low-latency TTS, reliable STT, and OpenClaw's multi-channel routing finally makes real-time voice agents viable outside of lab conditions. The creator walks through a working example with practical latency numbers.\n\n**Watch:** [youtube.com/watch?v=Uz3t2XljXuU](https://www.youtube.com/watch?v=Uz3t2XljXuU)\n\n## \"Intro: Rebuilding OpenClaw\"\n\nA shorter entry-point video from someone starting a series on rebuilding and extending their OpenClaw setup from scratch. Good watch if you're interested in following a build-in-public format — the channel appears to be documenting the whole process over several episodes.\n\n**Watch:** [youtube.com/watch?v=5YcIm03AnWs](https://www.youtube.com/watch?v=5YcIm03AnWs)\n\n## What to Watch Next\n\nIf you want more OpenClaw video content, the [OpenClaw YouTube search sorted by upload date](https://www.youtube.com/results?search_query=openclaw&sp=CAI%3D) is the best way to find new uploads before they get traction. The community is producing high-quality tutorials at a faster pace than most open-source projects at this stage.\n\nWe'll be back with another video roundup next week. If you spot a video worth featuring, drop it in the OpenClaw Discord.",
      "content_html": "<p>It was a busy week on YouTube for the OpenClaw community. From deep-dives on the latest feature release to creative rebuild experiments and a look at where voice agents are heading, here's what's worth watching.</p><h2>\"OpenClaw 4.22 Update IS INSANE – Here's Why\"</h2><p>The most-watched new upload this week is a breakdown of the v4.22 release that shipped earlier this week. The creator walks through what changed, why it matters, and which features have the most day-to-day impact for real deployments. If you've been skimming changelogs, this is the fastest way to get up to speed on what's actually useful versus what's noise.</p><p><strong>Watch:</strong> <a href=\"https://www.youtube.com/watch?v=FM5-R4VPArw\">youtube.com/watch?v=FM5-R4VPArw</a></p><h2>\"I Rebuilt My OpenClaw Stack Using Only Claude Code\"</h2><p>A creator documents what happens when you hand your entire OpenClaw configuration over to Claude Code and step back. The experiment tests how well an AI coding agent can navigate an existing OpenClaw setup, modify skills, and handle the edge cases that trip up manual configs. The result is part tutorial, part stress test — worth watching whether you use Claude Code or not.</p><p><strong>Watch:</strong> <a href=\"https://www.youtube.com/watch?v=dEe9XBqzK10\">youtube.com/watch?v=dEe9XBqzK10</a></p><h2>\"Voice Agents Just Got Real\"</h2><p>This video takes a broader look at the state of voice agents and where OpenClaw fits into the stack. The thesis: the combination of low-latency TTS, reliable STT, and OpenClaw's multi-channel routing finally makes real-time voice agents viable outside of lab conditions. The creator walks through a working example with practical latency numbers.</p><p><strong>Watch:</strong> <a href=\"https://www.youtube.com/watch?v=Uz3t2XljXuU\">youtube.com/watch?v=Uz3t2XljXuU</a></p><h2>\"Intro: Rebuilding OpenClaw\"</h2><p>A shorter entry-point video from someone starting a series on rebuilding and extending their OpenClaw setup from scratch. Good watch if you're interested in following a build-in-public format — the channel appears to be documenting the whole process over several episodes.</p><p><strong>Watch:</strong> <a href=\"https://www.youtube.com/watch?v=5YcIm03AnWs\">youtube.com/watch?v=5YcIm03AnWs</a></p><h2>What to Watch Next</h2><p>If you want more OpenClaw video content, the <a href=\"https://www.youtube.com/results?search_query=openclaw&sp=CAI%3D\">OpenClaw YouTube search sorted by upload date</a> is the best way to find new uploads before they get traction. The community is producing high-quality tutorials at a faster pace than most open-source projects at this stage.</p><p>We'll be back with another video roundup next week. If you spot a video worth featuring, drop it in the OpenClaw Discord.</p>",
      "date_published": "2026-04-24T23:05:00.000Z",
      "date_modified": "2026-04-24T23:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-24-youtube-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-24-release-image-generation/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-24-release-image-generation/",
      "title": "OpenClaw v2026.4.23: Image Generation Gets a Major Upgrade",
      "summary": "OpenClaw v2026.4.23 delivers Codex OAuth image gen, OpenRouter image support, forked subagent context, and configurable local embedding memory.",
      "content_text": "OpenClaw shipped version **v2026.4.23** today, and image generation is the headline act. This release closes long-standing gaps around OpenAI Codex OAuth image creation and OpenRouter image models — two of the most-requested improvements in recent community threads.\n\n## Codex OAuth Image Generation — No API Key Required\n\nThe most impactful change: `openai/gpt-image-2` now works through Codex OAuth, which means you no longer need a separate `OPENAI_API_KEY` to generate and edit images from your agents. This fixes [#70703](https://github.com/openclaw/openclaw/issues/70703) and removes a friction point that tripped up a lot of self-hosters.\n\nAlongside that, OpenRouter image generation is now a first-class feature. If you have an `OPENROUTER_API_KEY`, image models available through OpenRouter will work natively with `image_generate` — addressing [#55066](https://github.com/openclaw/openclaw/issues/55066) via [#67668](https://github.com/openclaw/openclaw/pull/67668). Thanks to community contributor [@notamicrodose](https://github.com/notamicrodose) for the implementation.\n\n## Provider-Specific Quality and Format Hints\n\nAgents can now request quality and output format hints when calling image generation tools. For OpenAI specifically, that includes background, moderation, compression level, and a user hint passthrough — all exposed through the `image_generate` tool schema. Credit goes to [@ottodeng](https://github.com/ottodeng) via [#70503](https://github.com/openclaw/openclaw/pull/70503).\n\nThis is particularly useful for agents that need fine-grained control over output fidelity, storage size, or compliance-sensitive content moderation settings.\n\n## Forked Context for Subagents\n\n`sessions_spawn` gets a meaningful architecture update: agents can now optionally pass forked context to native child sessions, letting a spawned subagent inherit the requester's transcript when needed. The default behavior remains clean isolated sessions — this is opt-in. The change includes prompt guidance, context-engine hook metadata, updated docs, and QA coverage.\n\n## Per-Call Timeout Control for Generation Tools\n\nA smaller but practical improvement: `image`, `video`, `music`, and TTS generation tools now support optional `timeoutMs` overrides per call. If a specific generation is expected to take longer than the default threshold, agents can extend the timeout just for that invocation instead of raising the global limit.\n\n## Configurable Local Embedding Context Size\n\nLocal memory embeddings now support a `memorySearch.local.contextSize` config key, defaulting to 4096 tokens. This matters most on constrained hardware — Raspberry Pi setups and low-RAM VPS hosts can now tune embedding context without patching anything. Fix by [@aalekh-sarvam](https://github.com/aalekh-sarvam) via [#70544](https://github.com/openclaw/openclaw/pull/70544).\n\n## Pi Bundle Updated to 0.70.0\n\nBundled Pi packages are updated to 0.70.0 in this release. OpenClaw now uses Pi's upstream `gpt-5.5` catalog metadata for OpenAI and Codex, with local forward-compatibility handling for `gpt-5.5-pro` kept minimal.\n\n## Notable Bug Fixes\n\nA few fixes worth calling out from the full changelog:\n\n- **Slack group DMs** now properly suppress \"Working…\" traces in MPIM rooms — those internal tool-progress markers were leaking into channels ([#70912](https://github.com/openclaw/openclaw/issues/70912))\n- **WhatsApp onboarding** no longer fails on packaged QuickStart installs before Baileys runtime dependencies are staged ([#70932](https://github.com/openclaw/openclaw/issues/70932))\n- **Block streaming** no longer sends duplicate replies when partial block delivery aborts and the already-sent chunks exactly cover the final reply ([#70921](https://github.com/openclaw/openclaw/issues/70921))\n- **Codex on Windows** now resolves `.cmd` npm shims through `PATHEXT` before starting the native app-server ([#70913](https://github.com/openclaw/openclaw/issues/70913))\n- **WebChat** now surfaces non-retryable provider errors (billing, auth, rate limits) instead of silently logging them ([#70124](https://github.com/openclaw/openclaw/issues/70124))\n- **Memory CLI** can now resolve local embeddings without the gateway running ([#70836](https://github.com/openclaw/openclaw/issues/70836))\n\n## How to Update\n\n```bash\nnpm install -g openclaw@latest\n# or\nopenclaw update\n```\n\nThe full changelog is available on the [GitHub Releases page](https://github.com/openclaw/openclaw/releases/tag/v2026.4.23).",
      "content_html": "<p>OpenClaw shipped version <strong>v2026.4.23</strong> today, and image generation is the headline act. This release closes long-standing gaps around OpenAI Codex OAuth image creation and OpenRouter image models — two of the most-requested improvements in recent community threads.</p><h2>Codex OAuth Image Generation — No API Key Required</h2><p>The most impactful change: <code>openai/gpt-image-2</code> now works through Codex OAuth, which means you no longer need a separate <code>OPENAI_API_KEY</code> to generate and edit images from your agents. This fixes <a href=\"https://github.com/openclaw/openclaw/issues/70703\">#70703</a> and removes a friction point that tripped up a lot of self-hosters.</p><p>Alongside that, OpenRouter image generation is now a first-class feature. If you have an <code>OPENROUTER_API_KEY</code>, image models available through OpenRouter will work natively with <code>image_generate</code> — addressing <a href=\"https://github.com/openclaw/openclaw/issues/55066\">#55066</a> via <a href=\"https://github.com/openclaw/openclaw/pull/67668\">#67668</a>. Thanks to community contributor <a href=\"https://github.com/notamicrodose\">@notamicrodose</a> for the implementation.</p><h2>Provider-Specific Quality and Format Hints</h2><p>Agents can now request quality and output format hints when calling image generation tools. For OpenAI specifically, that includes background, moderation, compression level, and a user hint passthrough — all exposed through the <code>image_generate</code> tool schema. Credit goes to <a href=\"https://github.com/ottodeng\">@ottodeng</a> via <a href=\"https://github.com/openclaw/openclaw/pull/70503\">#70503</a>.</p><p>This is particularly useful for agents that need fine-grained control over output fidelity, storage size, or compliance-sensitive content moderation settings.</p><h2>Forked Context for Subagents</h2><p><code>sessions_spawn</code> gets a meaningful architecture update: agents can now optionally pass forked context to native child sessions, letting a spawned subagent inherit the requester's transcript when needed. The default behavior remains clean isolated sessions — this is opt-in. The change includes prompt guidance, context-engine hook metadata, updated docs, and QA coverage.</p><h2>Per-Call Timeout Control for Generation Tools</h2><p>A smaller but practical improvement: <code>image</code>, <code>video</code>, <code>music</code>, and TTS generation tools now support optional <code>timeoutMs</code> overrides per call. If a specific generation is expected to take longer than the default threshold, agents can extend the timeout just for that invocation instead of raising the global limit.</p><h2>Configurable Local Embedding Context Size</h2><p>Local memory embeddings now support a <code>memorySearch.local.contextSize</code> config key, defaulting to 4096 tokens. This matters most on constrained hardware — Raspberry Pi setups and low-RAM VPS hosts can now tune embedding context without patching anything. Fix by <a href=\"https://github.com/aalekh-sarvam\">@aalekh-sarvam</a> via <a href=\"https://github.com/openclaw/openclaw/pull/70544\">#70544</a>.</p><h2>Pi Bundle Updated to 0.70.0</h2><p>Bundled Pi packages are updated to 0.70.0 in this release. OpenClaw now uses Pi's upstream <code>gpt-5.5</code> catalog metadata for OpenAI and Codex, with local forward-compatibility handling for <code>gpt-5.5-pro</code> kept minimal.</p><h2>Notable Bug Fixes</h2><p>A few fixes worth calling out from the full changelog:</p><ul><li><strong>Slack group DMs</strong> now properly suppress \"Working…\" traces in MPIM rooms — those internal tool-progress markers were leaking into channels (<a href=\"https://github.com/openclaw/openclaw/issues/70912\">#70912</a>)</li><li><strong>WhatsApp onboarding</strong> no longer fails on packaged QuickStart installs before Baileys runtime dependencies are staged (<a href=\"https://github.com/openclaw/openclaw/issues/70932\">#70932</a>)</li><li><strong>Block streaming</strong> no longer sends duplicate replies when partial block delivery aborts and the already-sent chunks exactly cover the final reply (<a href=\"https://github.com/openclaw/openclaw/issues/70921\">#70921</a>)</li><li><strong>Codex on Windows</strong> now resolves <code>.cmd</code> npm shims through <code>PATHEXT</code> before starting the native app-server (<a href=\"https://github.com/openclaw/openclaw/issues/70913\">#70913</a>)</li><li><strong>WebChat</strong> now surfaces non-retryable provider errors (billing, auth, rate limits) instead of silently logging them (<a href=\"https://github.com/openclaw/openclaw/issues/70124\">#70124</a>)</li><li><strong>Memory CLI</strong> can now resolve local embeddings without the gateway running (<a href=\"https://github.com/openclaw/openclaw/issues/70836\">#70836</a>)</li></ul><h2>How to Update</h2><p>``<code>bash<br />npm install -g openclaw@latest<br /><h1>or</h1><br />openclaw update<br /></code>``</p><p>The full changelog is available on the <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.23\">GitHub Releases page</a>.</p>",
      "date_published": "2026-04-24T23:00:00.000Z",
      "date_modified": "2026-04-24T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-24-release-image-generation.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-24-codex-harness-parity/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-24-codex-harness-parity/",
      "title": "OpenClaw Codex Harness Gets Full Hook and Logging Parity with Pi",
      "summary": "Three PRs merged today bring OpenClaw's Codex harness in line with Pi: agent event hooks, unified verbose tool logs, and OTel trace context on diagnostics.",
      "content_text": "Three pull requests merged into OpenClaw's `main` branch this morning quietly close a long-standing gap: the Codex harness now behaves much more like Pi when it comes to lifecycle hooks, verbose tool logging, and diagnostics tracing.\n\n## What Changed\n\n### Codex Hook Notifications Projected Into Agent Events (PR #70969)\n\nContributed by [@pashpashpash](https://github.com/pashpashpash), this change routes Codex app-server notification events through OpenClaw's unified agent event pipeline. Previously, notification-style events fired inside Codex sessions were invisible to the broader event system — they didn't trigger the same hooks or downstream integrations that equivalent Pi events would.\n\nAfter this merge, Codex session notifications surface as first-class agent events. That means any plugin or integration listening for `llm_output`, `agent_end`, or similar lifecycle signals will now receive them from Codex-backed sessions, not just Pi ones.\n\nThe practical benefit: channel plugins, webhooks, and custom automation built on OpenClaw's agent event hooks will work consistently regardless of whether the underlying runner is Pi or Codex.\n\n### Codex Verbose Tool Logs Now Match Pi Format (PR #70966)\n\nContributed by [@jalehman](https://github.com/jalehman), this fix ensures that when Codex runs tools in verbose mode, the log output format matches what Pi produces. The mismatch was purely cosmetic but caused real friction: developers debugging sessions would see inconsistent log shapes depending on which runner fired the tool call, making it harder to correlate behavior across a mixed Pi/Codex setup.\n\nWith the fix landed, verbose tool logs from both harnesses share the same structure — easier to grep, easier to parse, and easier to feed into external log aggregators.\n\n### OTel Trace Context Attached to Diagnostic Logs (PR #70961)\n\nThe third PR adds OpenTelemetry trace context to OpenClaw's gateway diagnostic logs. If you're running OpenClaw in a setup that ships logs to an OTel-compatible collector (Grafana, Honeycomb, Datadog, etc.), trace IDs and span context will now be present on log lines emitted during agent runs.\n\nThis is a smaller change in scope but meaningful for anyone running OpenClaw at scale or as part of a larger observability stack. Correlating a slow agent turn with distributed trace data just got significantly easier.\n\n## Why This Matters\n\nThe Codex harness has been a second-class citizen compared to Pi in OpenClaw's hook infrastructure for a while. The additions in v2026.4.22 last week ([Codex tool_result middleware](https://github.com/openclaw/openclaw/releases/tag/v2026.4.22), hook lifecycle alignment) started closing that gap at the plugin layer. Today's merges push the parity further into the event system, logging surface, and observability layer.\n\nIf you're building integrations on top of OpenClaw's agent event API, this week's work on `main` is worth watching. The `next` release tag should pick these up shortly.\n\n## What's Next on `main`\n\nThe commit queue also shows a pending change to move Bonjour discovery into a bundled plugin — another architectural cleanup that shifts device-discovery concerns out of core and into the plugin layer. That one isn't merged yet but appears to be progressing alongside the Codex work.\n\nKeep an eye on the [OpenClaw releases page](https://github.com/openclaw/openclaw/releases) for when these land in the next tagged release.",
      "content_html": "<p>Three pull requests merged into OpenClaw's <code>main</code> branch this morning quietly close a long-standing gap: the Codex harness now behaves much more like Pi when it comes to lifecycle hooks, verbose tool logging, and diagnostics tracing.</p><h2>What Changed</h2><h3>Codex Hook Notifications Projected Into Agent Events (PR #70969)</h3><p>Contributed by <a href=\"https://github.com/pashpashpash\">@pashpashpash</a>, this change routes Codex app-server notification events through OpenClaw's unified agent event pipeline. Previously, notification-style events fired inside Codex sessions were invisible to the broader event system — they didn't trigger the same hooks or downstream integrations that equivalent Pi events would.</p><p>After this merge, Codex session notifications surface as first-class agent events. That means any plugin or integration listening for <code>llm_output</code>, <code>agent_end</code>, or similar lifecycle signals will now receive them from Codex-backed sessions, not just Pi ones.</p><p>The practical benefit: channel plugins, webhooks, and custom automation built on OpenClaw's agent event hooks will work consistently regardless of whether the underlying runner is Pi or Codex.</p><h3>Codex Verbose Tool Logs Now Match Pi Format (PR #70966)</h3><p>Contributed by <a href=\"https://github.com/jalehman\">@jalehman</a>, this fix ensures that when Codex runs tools in verbose mode, the log output format matches what Pi produces. The mismatch was purely cosmetic but caused real friction: developers debugging sessions would see inconsistent log shapes depending on which runner fired the tool call, making it harder to correlate behavior across a mixed Pi/Codex setup.</p><p>With the fix landed, verbose tool logs from both harnesses share the same structure — easier to grep, easier to parse, and easier to feed into external log aggregators.</p><h3>OTel Trace Context Attached to Diagnostic Logs (PR #70961)</h3><p>The third PR adds OpenTelemetry trace context to OpenClaw's gateway diagnostic logs. If you're running OpenClaw in a setup that ships logs to an OTel-compatible collector (Grafana, Honeycomb, Datadog, etc.), trace IDs and span context will now be present on log lines emitted during agent runs.</p><p>This is a smaller change in scope but meaningful for anyone running OpenClaw at scale or as part of a larger observability stack. Correlating a slow agent turn with distributed trace data just got significantly easier.</p><h2>Why This Matters</h2><p>The Codex harness has been a second-class citizen compared to Pi in OpenClaw's hook infrastructure for a while. The additions in v2026.4.22 last week (<a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.22\">Codex tool_result middleware</a>, hook lifecycle alignment) started closing that gap at the plugin layer. Today's merges push the parity further into the event system, logging surface, and observability layer.</p><p>If you're building integrations on top of OpenClaw's agent event API, this week's work on <code>main</code> is worth watching. The <code>next</code> release tag should pick these up shortly.</p><h2>What's Next on <code>main</code></h2><p>The commit queue also shows a pending change to move Bonjour discovery into a bundled plugin — another architectural cleanup that shifts device-discovery concerns out of core and into the plugin layer. That one isn't merged yet but appears to be progressing alongside the Codex work.</p><p>Keep an eye on the <a href=\"https://github.com/openclaw/openclaw/releases\">OpenClaw releases page</a> for when these land in the next tagged release.</p>",
      "date_published": "2026-04-24T08:00:00.000Z",
      "date_modified": "2026-04-24T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-24-codex-harness-parity.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-23-agent-trust-turkey-permission-slip/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-23-agent-trust-turkey-permission-slip/",
      "title": "The OpenClaw Turkey Problem — and Permission Slip's Answer",
      "summary": "A viral essay argues that trusting OpenClaw more as you get comfortable is a dangerous fallacy. A new open-source tool is building the safety layer the ecosystem needs.",
      "content_text": "Two things happened on Hacker News today that belong in the same conversation: an essay called \"The OpenClaw Turkey Problem\" hit the front page, and a new open-source project called Permission Slip launched as a direct answer to the problem it describes.\n\n## The Turkey Problem\n\nDeveloper Yakko Majuri published [\"The OpenClaw Turkey Problem\"](https://yakko.dev/blog/the-openclaw-turkey-problem) after listening to a podcast where an OpenClaw power user gave advice on agent safety: start with limited permissions, then give the agent more access as you get comfortable.\n\nMajuri's response, drawing on Nassim Taleb's work, is that this advice is structurally broken.\n\nThe \"turkey problem\" comes from Taleb's *The Black Swan*: a turkey is fed every day, firming up its belief that humans are friendly — right up until Thanksgiving. Each feeding reinforces the wrong conclusion. The past experience has no predictive value for the actual event that matters.\n\nApplied to OpenClaw, the pattern looks like this:\n\n> You give OpenClaw access to one calendar. Nothing bad happens. You give it your email. Nothing bad happens. You give it access to the production database. And now when something goes wrong, both the surprise and the impact are proportional to how comfortable you let yourself get.\n\nThe key distinction Majuri makes is that this isn't even a black swan scenario — it's a predictable risk. Hallucinations are a known failure mode. Prompt injection is a known failure mode. Giving an agent more access because nothing has gone wrong yet isn't progressive trust — it's gambling with known odds.\n\n## Why This Resonates\n\nThe essay hit a nerve because it describes a pattern that a lot of OpenClaw users are living. The community has grown fast, the tooling has gotten powerful, and the mental model most people use (\"it hasn't broken anything yet, so I'll give it more\") is genuinely dangerous when applied to systems with access to real production resources.\n\nMajuri is clear he's not anti-OpenClaw. He uses it. He's building on top of it. His argument is that trust should come from security primitives, not from accumulated comfort.\n\nHe's also transparent that he's building [AgentPort](https://agentport.sh), a self-hostable gateway for connecting agents to third-party services with granular permissioning — which gives him a stake in the problem, but also means he's thought seriously about what the solution looks like.\n\n## Permission Slip: The Structural Answer\n\nThe same day, [Permission Slip](https://github.com/supersuit-tech/permission-slip) landed on Hacker News. It's an open-source approval layer that sits between OpenClaw and every external integration — Gmail, GitHub, Stripe, Slack, and [many more connectors](https://github.com/supersuit-tech/permission-slip#connectors).\n\nThe architecture is straightforward:\n\n```\nOpenClaw → Permission Slip → Gmail / GitHub / Stripe...\n                ↕ push notification\n               You (approve / deny)\n```\n\nInstead of giving OpenClaw direct credentials to your accounts, you give it access to Permission Slip, which brokers every action through explicit human approval. The agent submits structured, schema-validated actions — never arbitrary API calls. Nothing executes without your sign-off.\n\nKey features:\n\n- **Action-based security** — OpenClaw submits structured actions, not raw API calls\n- **Per-request push notifications** — human-readable summaries before anything runs\n- **Standing approvals** — pre-authorize trusted, repetitive actions with constraints\n- **Cryptographic identity** — Ed25519 key pairs for tamper-proof request signing\n- **Zero credential exposure** — OpenClaw never sees your actual API keys or passwords\n- **Full audit trail** — every request, approval, and execution logged\n- **iPhone app** — approve on the go\n\nPermission Slip is self-hostable on Docker, Fly.io, or bare metal. It even runs on a [Raspberry Pi 5](https://github.com/supersuit-tech/permission-slip/blob/main/docs/raspberry-pi-quickstart.md) in under 30 minutes. There's also a hosted version at [permissionslip.dev](https://www.permissionslip.dev) if you don't want to manage infrastructure.\n\nThe project is in beta — several connectors are untested — but the security model is well-specified and the architecture is solid.\n\n## The Bigger Picture\n\nThese two pieces of the OpenClaw ecosystem map onto the same tension the project has always had: it gives you enormous capability, and capability requires proportional safety thinking.\n\nThe answer isn't to use OpenClaw less. It's to build the infrastructure that makes the trust actually warranted — not by avoiding bad experiences, but by structurally limiting what can happen when something does go wrong.\n\nPermission Slip is a concrete implementation of that idea. The OpenClaw Turkey Problem is a good articulation of why it matters.\n\nBoth are worth your time today.\n\n- [The OpenClaw Turkey Problem](https://yakko.dev/blog/the-openclaw-turkey-problem) — yakko.dev\n- [Permission Slip on GitHub](https://github.com/supersuit-tech/permission-slip) — supersuit-tech",
      "content_html": "<p>Two things happened on Hacker News today that belong in the same conversation: an essay called \"The OpenClaw Turkey Problem\" hit the front page, and a new open-source project called Permission Slip launched as a direct answer to the problem it describes.</p><h2>The Turkey Problem</h2><p>Developer Yakko Majuri published <a href=\"https://yakko.dev/blog/the-openclaw-turkey-problem\">\"The OpenClaw Turkey Problem\"</a> after listening to a podcast where an OpenClaw power user gave advice on agent safety: start with limited permissions, then give the agent more access as you get comfortable.</p><p>Majuri's response, drawing on Nassim Taleb's work, is that this advice is structurally broken.</p><p>The \"turkey problem\" comes from Taleb's <em>The Black Swan</em>: a turkey is fed every day, firming up its belief that humans are friendly — right up until Thanksgiving. Each feeding reinforces the wrong conclusion. The past experience has no predictive value for the actual event that matters.</p><p>Applied to OpenClaw, the pattern looks like this:</p><p>> You give OpenClaw access to one calendar. Nothing bad happens. You give it your email. Nothing bad happens. You give it access to the production database. And now when something goes wrong, both the surprise and the impact are proportional to how comfortable you let yourself get.</p><p>The key distinction Majuri makes is that this isn't even a black swan scenario — it's a predictable risk. Hallucinations are a known failure mode. Prompt injection is a known failure mode. Giving an agent more access because nothing has gone wrong yet isn't progressive trust — it's gambling with known odds.</p><h2>Why This Resonates</h2><p>The essay hit a nerve because it describes a pattern that a lot of OpenClaw users are living. The community has grown fast, the tooling has gotten powerful, and the mental model most people use (\"it hasn't broken anything yet, so I'll give it more\") is genuinely dangerous when applied to systems with access to real production resources.</p><p>Majuri is clear he's not anti-OpenClaw. He uses it. He's building on top of it. His argument is that trust should come from security primitives, not from accumulated comfort.</p><p>He's also transparent that he's building <a href=\"https://agentport.sh\">AgentPort</a>, a self-hostable gateway for connecting agents to third-party services with granular permissioning — which gives him a stake in the problem, but also means he's thought seriously about what the solution looks like.</p><h2>Permission Slip: The Structural Answer</h2><p>The same day, <a href=\"https://github.com/supersuit-tech/permission-slip\">Permission Slip</a> landed on Hacker News. It's an open-source approval layer that sits between OpenClaw and every external integration — Gmail, GitHub, Stripe, Slack, and <a href=\"https://github.com/supersuit-tech/permission-slip#connectors\">many more connectors</a>.</p><p>The architecture is straightforward:</p><p>``<code><br />OpenClaw → Permission Slip → Gmail / GitHub / Stripe...<br />                ↕ push notification<br />               You (approve / deny)<br /></code>``</p><p>Instead of giving OpenClaw direct credentials to your accounts, you give it access to Permission Slip, which brokers every action through explicit human approval. The agent submits structured, schema-validated actions — never arbitrary API calls. Nothing executes without your sign-off.</p><p>Key features:</p><ul><li><strong>Action-based security</strong> — OpenClaw submits structured actions, not raw API calls</li><li><strong>Per-request push notifications</strong> — human-readable summaries before anything runs</li><li><strong>Standing approvals</strong> — pre-authorize trusted, repetitive actions with constraints</li><li><strong>Cryptographic identity</strong> — Ed25519 key pairs for tamper-proof request signing</li><li><strong>Zero credential exposure</strong> — OpenClaw never sees your actual API keys or passwords</li><li><strong>Full audit trail</strong> — every request, approval, and execution logged</li><li><strong>iPhone app</strong> — approve on the go</li></ul><p>Permission Slip is self-hostable on Docker, Fly.io, or bare metal. It even runs on a <a href=\"https://github.com/supersuit-tech/permission-slip/blob/main/docs/raspberry-pi-quickstart.md\">Raspberry Pi 5</a> in under 30 minutes. There's also a hosted version at <a href=\"https://www.permissionslip.dev\">permissionslip.dev</a> if you don't want to manage infrastructure.</p><p>The project is in beta — several connectors are untested — but the security model is well-specified and the architecture is solid.</p><h2>The Bigger Picture</h2><p>These two pieces of the OpenClaw ecosystem map onto the same tension the project has always had: it gives you enormous capability, and capability requires proportional safety thinking.</p><p>The answer isn't to use OpenClaw less. It's to build the infrastructure that makes the trust actually warranted — not by avoiding bad experiences, but by structurally limiting what can happen when something does go wrong.</p><p>Permission Slip is a concrete implementation of that idea. The OpenClaw Turkey Problem is a good articulation of why it matters.</p><p>Both are worth your time today.</p><ul><li><a href=\"https://yakko.dev/blog/the-openclaw-turkey-problem\">The OpenClaw Turkey Problem</a> — yakko.dev</li><li><a href=\"https://github.com/supersuit-tech/permission-slip\">Permission Slip on GitHub</a> — supersuit-tech</li></ul>",
      "date_published": "2026-04-23T23:05:00.000Z",
      "date_modified": "2026-04-23T23:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-23-agent-trust-turkey-permission-slip.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-23-v2026422-xai-tui-tencent/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-23-v2026422-xai-tui-tencent/",
      "title": "OpenClaw v2026.4.22: xAI Providers, TUI Mode, and Tencent Cloud",
      "summary": "OpenClaw v2026.4.22 lands xAI image generation, TTS and STT, a standalone TUI embedded mode, Tencent Cloud, and Tokenjuice in one enormous drop.",
      "content_text": "OpenClaw [v2026.4.22](https://github.com/openclaw/openclaw/releases/tag/v2026.4.22) dropped on April 22nd and it is one of the biggest releases in recent memory. The changelog spans five new provider integrations, a fully standalone terminal mode, a diagnostics export tool, and a batch of performance wins that make the gateway meaningfully faster to start.\n\n## xAI Gets Full Multimedia Coverage\n\nThe headline item is a comprehensive xAI provider overhaul contributed by [@KateWilkins](https://github.com/KateWilkins). OpenClaw now supports:\n\n- **Image generation** via `grok-imagine-image` and `grok-imagine-image-pro`, including reference-image edits\n- **Text-to-speech** with six live xAI voices and output formats: MP3, WAV, PCM, and G.711\n- **Speech-to-text** via `grok-stt` for audio transcription\n- **Realtime transcription** for Voice Call streaming\n\nOn the STT front, Voice Call streaming transcription also expanded to cover **Deepgram, ElevenLabs, and Mistral** — joining the existing OpenAI and xAI realtime paths. ElevenLabs additionally gains Scribe v2 batch transcription for inbound media.\n\n## TUI Embedded Mode — No Gateway Required\n\n[@fuller-stack-dev](https://github.com/fuller-stack-dev) contributed a long-requested feature: a local embedded mode for the terminal UI that lets you run full chat sessions without a running Gateway. Plugin approval gates remain enforced, so you don't lose any of the safety controls you'd normally get through the Gateway. This is a big deal for developers who want a lightweight local setup or want to test config changes without touching a running production Gateway.\n\n## New Providers: Tencent Cloud and Amazon Bedrock Mantle\n\nTwo new cloud providers land in this release:\n\n**Tencent Cloud** — a bundled provider plugin with TokenHub onboarding, docs, `hy3-preview` model catalog entries, and tiered Hy3 pricing metadata. Contributed by [@JuniperSling](https://github.com/JuniperSling).\n\n**Amazon Bedrock Mantle** — adds Claude Opus 4.7 via Mantle's Anthropic Messages route, with provider-owned bearer-auth streaming. This means the model is actually callable without treating AWS bearer tokens as Anthropic API keys — a subtle but important distinction for enterprise setups. Contributed by [@wirjo](https://github.com/wirjo).\n\n## Tokenjuice: Compact Your Noisy Tool Results\n\n[@vincentkoc](https://github.com/vincentkoc) contributed **Tokenjuice** as an opt-in bundled plugin. It compacts noisy `exec` and `bash` tool results during Pi embedded runs, which is particularly useful when you're running long agentic sessions that accumulate verbose output. Enable it under `plugins.entries.tokenjuice`.\n\n## /models add — Register Models Without Restarting\n\n[@Takhoffman](https://github.com/Takhoffman) contributed a `/models add` command that lets you register a new model directly from chat and use it immediately without restarting the Gateway. The existing `/models` command becomes a clean provider browser, with clearer guidance and copy-friendly examples.\n\n## GPT-5 Overlay Now Shared Across Providers\n\nThe GPT-5 prompt overlay moves out of the OpenAI plugin and into the shared provider runtime. Compatible GPT-5 models now receive the same behavior and heartbeat guidance whether routed through OpenAI, OpenRouter, OpenCode, Codex, or other GPT providers. The `agents.defaults.promptOverlays.gpt5.personality` toggle controls the friendly-style behavior globally.\n\n## WhatsApp Gets Per-Group System Prompts and Native Reply Quoting\n\nTwo quality-of-life improvements for WhatsApp users:\n\n- **Configurable native reply quoting** via `replyToMode`, contributed by [@mcaxtr](https://github.com/mcaxtr)\n- **Per-group `systemPrompt` forwarding** into `GroupSystemPrompt` context, so configured behavioral instructions apply on every turn. Supports `\"*\"` wildcard fallback. Closes [#7011](https://github.com/openclaw/openclaw/issues/7011), contributed by [@Bluetegu](https://github.com/Bluetegu)\n\n## Performance Wins\n\nThis release is packed with startup and runtime performance improvements:\n\n- **82–90% faster plugin loading** with native Jiti loading for bundled plugin dist modules ([#69925](https://github.com/openclaw/openclaw/pull/69925))\n- **74% faster `openclaw doctor`** non-interactive runtime with lazy-loaded plugin paths ([#69840](https://github.com/openclaw/openclaw/pull/69840))\n\n## Diagnostics Export\n\n[@gumadeiras](https://github.com/gumadeiras) added a support-ready diagnostics export command that bundles sanitized logs, status, health, config, and stability snapshots. Stability recording is now also enabled by default. When you need to file a bug report, the tooling is now actually there to help you do it properly ([#70324](https://github.com/openclaw/openclaw/pull/70324)).\n\n## Notable Fixes\n\n- **Models/auth merge fix** — re-authenticating an OAuth provider (like OpenAI Codex) no longer wipes other providers' aliases and per-model params. Fixes [#69414](https://github.com/openclaw/openclaw/issues/69414)\n- **Azure OpenAI image generation** — proper Azure auth, deployment-scoped URLs, and `AZURE_OPENAI_API_VERSION` support\n- **OpenAI Codex CLI auth** — removed the import path that copied `~/.codex` OAuth material into agent auth stores; use browser login or device pairing instead\n- **Local backend token accounting** — streaming usage now correctly recovered from llama.cpp-style timing metadata, fixing unknown/stale context totals\n- **`/status` Runner field** — sessions now report whether they run on embedded Pi, a CLI-backed provider, or an ACP harness agent\n\nThe full changelog is available on the [GitHub releases page](https://github.com/openclaw/openclaw/releases/tag/v2026.4.22). This is a recommended upgrade.",
      "content_html": "<p>OpenClaw <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.22\">v2026.4.22</a> dropped on April 22nd and it is one of the biggest releases in recent memory. The changelog spans five new provider integrations, a fully standalone terminal mode, a diagnostics export tool, and a batch of performance wins that make the gateway meaningfully faster to start.</p><h2>xAI Gets Full Multimedia Coverage</h2><p>The headline item is a comprehensive xAI provider overhaul contributed by <a href=\"https://github.com/KateWilkins\">@KateWilkins</a>. OpenClaw now supports:</p><ul><li><strong>Image generation</strong> via <code>grok-imagine-image</code> and <code>grok-imagine-image-pro</code>, including reference-image edits</li><li><strong>Text-to-speech</strong> with six live xAI voices and output formats: MP3, WAV, PCM, and G.711</li><li><strong>Speech-to-text</strong> via <code>grok-stt</code> for audio transcription</li><li><strong>Realtime transcription</strong> for Voice Call streaming</li></ul><p>On the STT front, Voice Call streaming transcription also expanded to cover <strong>Deepgram, ElevenLabs, and Mistral</strong> — joining the existing OpenAI and xAI realtime paths. ElevenLabs additionally gains Scribe v2 batch transcription for inbound media.</p><h2>TUI Embedded Mode — No Gateway Required</h2><p><a href=\"https://github.com/fuller-stack-dev\">@fuller-stack-dev</a> contributed a long-requested feature: a local embedded mode for the terminal UI that lets you run full chat sessions without a running Gateway. Plugin approval gates remain enforced, so you don't lose any of the safety controls you'd normally get through the Gateway. This is a big deal for developers who want a lightweight local setup or want to test config changes without touching a running production Gateway.</p><h2>New Providers: Tencent Cloud and Amazon Bedrock Mantle</h2><p>Two new cloud providers land in this release:</p><p><strong>Tencent Cloud</strong> — a bundled provider plugin with TokenHub onboarding, docs, <code>hy3-preview</code> model catalog entries, and tiered Hy3 pricing metadata. Contributed by <a href=\"https://github.com/JuniperSling\">@JuniperSling</a>.</p><p><strong>Amazon Bedrock Mantle</strong> — adds Claude Opus 4.7 via Mantle's Anthropic Messages route, with provider-owned bearer-auth streaming. This means the model is actually callable without treating AWS bearer tokens as Anthropic API keys — a subtle but important distinction for enterprise setups. Contributed by <a href=\"https://github.com/wirjo\">@wirjo</a>.</p><h2>Tokenjuice: Compact Your Noisy Tool Results</h2><p><a href=\"https://github.com/vincentkoc\">@vincentkoc</a> contributed <strong>Tokenjuice</strong> as an opt-in bundled plugin. It compacts noisy <code>exec</code> and <code>bash</code> tool results during Pi embedded runs, which is particularly useful when you're running long agentic sessions that accumulate verbose output. Enable it under <code>plugins.entries.tokenjuice</code>.</p><h2>/models add — Register Models Without Restarting</h2><p><a href=\"https://github.com/Takhoffman\">@Takhoffman</a> contributed a <code>/models add</code> command that lets you register a new model directly from chat and use it immediately without restarting the Gateway. The existing <code>/models</code> command becomes a clean provider browser, with clearer guidance and copy-friendly examples.</p><h2>GPT-5 Overlay Now Shared Across Providers</h2><p>The GPT-5 prompt overlay moves out of the OpenAI plugin and into the shared provider runtime. Compatible GPT-5 models now receive the same behavior and heartbeat guidance whether routed through OpenAI, OpenRouter, OpenCode, Codex, or other GPT providers. The <code>agents.defaults.promptOverlays.gpt5.personality</code> toggle controls the friendly-style behavior globally.</p><h2>WhatsApp Gets Per-Group System Prompts and Native Reply Quoting</h2><p>Two quality-of-life improvements for WhatsApp users:</p><ul><li><strong>Configurable native reply quoting</strong> via <code>replyToMode</code>, contributed by <a href=\"https://github.com/mcaxtr\">@mcaxtr</a></li><li><strong>Per-group <code>systemPrompt</code> forwarding</strong> into <code>GroupSystemPrompt</code> context, so configured behavioral instructions apply on every turn. Supports <code>\"*\"</code> wildcard fallback. Closes <a href=\"https://github.com/openclaw/openclaw/issues/7011\">#7011</a>, contributed by <a href=\"https://github.com/Bluetegu\">@Bluetegu</a></li></ul><h2>Performance Wins</h2><p>This release is packed with startup and runtime performance improvements:</p><ul><li><strong>82–90% faster plugin loading</strong> with native Jiti loading for bundled plugin dist modules (<a href=\"https://github.com/openclaw/openclaw/pull/69925\">#69925</a>)</li><li><strong>74% faster <code>openclaw doctor</code></strong> non-interactive runtime with lazy-loaded plugin paths (<a href=\"https://github.com/openclaw/openclaw/pull/69840\">#69840</a>)</li></ul><h2>Diagnostics Export</h2><p><a href=\"https://github.com/gumadeiras\">@gumadeiras</a> added a support-ready diagnostics export command that bundles sanitized logs, status, health, config, and stability snapshots. Stability recording is now also enabled by default. When you need to file a bug report, the tooling is now actually there to help you do it properly (<a href=\"https://github.com/openclaw/openclaw/pull/70324\">#70324</a>).</p><h2>Notable Fixes</h2><ul><li><strong>Models/auth merge fix</strong> — re-authenticating an OAuth provider (like OpenAI Codex) no longer wipes other providers' aliases and per-model params. Fixes <a href=\"https://github.com/openclaw/openclaw/issues/69414\">#69414</a></li><li><strong>Azure OpenAI image generation</strong> — proper Azure auth, deployment-scoped URLs, and <code>AZURE_OPENAI_API_VERSION</code> support</li><li><strong>OpenAI Codex CLI auth</strong> — removed the import path that copied <code>~/.codex</code> OAuth material into agent auth stores; use browser login or device pairing instead</li><li><strong>Local backend token accounting</strong> — streaming usage now correctly recovered from llama.cpp-style timing metadata, fixing unknown/stale context totals</li><li><strong><code>/status</code> Runner field</strong> — sessions now report whether they run on embedded Pi, a CLI-backed provider, or an ACP harness agent</li></ul><p>The full changelog is available on the <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.22\">GitHub releases page</a>. This is a recommended upgrade.</p>",
      "date_published": "2026-04-23T23:00:00.000Z",
      "date_modified": "2026-04-23T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-23-v2026422-xai-tui-tencent.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-memory-search-190x-speedup/",
      "url": "https://openclawchronicles.com/posts/openclaw-memory-search-190x-speedup/",
      "title": "OpenClaw Memory Search Gets a 190x Speed Boost with sqlite-vec KNN",
      "summary": "A merged PR replaces OpenClaw's full-table-scan vector search with sqlite-vec KNN, cutting query time from ~8,490ms to ~50ms — a 190x improvement with no schema migration.",
      "content_text": "If memory search in your OpenClaw gateway has ever felt sluggish — especially on large workspaces with thousands of chunks — a freshly merged pull request just made it roughly 190 times faster. No configuration changes, no schema migration, no reindexing required.\n\n## What Changed\n\nPR [#69680](https://github.com/openclaw/openclaw/pull/69680) by contributor **aalekh-sarvam** replaces the `searchVector` function's SQL query with sqlite-vec's native **KNN (K-Nearest Neighbor) operator**. Previously, every vector search did a full table scan — computing cosine distance against every stored chunk before sorting results. The new approach lets sqlite-vec's `vec0` index walk shards directly, only computing final cosine scores on the candidates that actually matter.\n\nThe benchmark numbers are stark:\n\n| Pattern | Time per query |\n|---|---|\n| Before (full table scan) | ~8,490 ms |\n| After (sqlite-vec KNN) | ~50 ms |\n| Speedup | **~190×** |\n\nThis was measured against a real 10,827-chunk workspace using 4096-dimensional Qwen3-Embedding-8B embeddings — a heavy, realistic workload.\n\n## Why It Wasn't a Simple Fix\n\nThe naive approach — just switching to sqlite-vec's `v.distance` field for ordering — actually breaks results entirely. Here's why: sqlite-vec creates `chunks_vec` tables with **L2 distance** by default, not cosine distance. Using `v.distance` directly produces values that can exceed 1, which causes `score = 1 - dist` to go negative. The downstream `minScore` filter then drops every result silently.\n\nThe correct fix is more surgical: use `MATCH ? AND k = ?` solely for **candidate selection** (where the speedup lives), and keep `vec_distance_cosine()` in the `SELECT` for the **score**, preserving the existing cosine semantics exactly. The query vector is bound twice — once for the KNN match, once for the cosine calculation — but the result set and ordering are identical to the old implementation.\n\n## What Stays the Same\n\n- The fallback path (for installs without sqlite-vec) is untouched\n- Existing schemas don't need migration or reindexing\n- Score range and ordering behavior are semantically identical to before\n- The fix targets the `extensions: memory-core` label — it's isolated to the memory search layer\n\n## Why This Matters\n\nMemory search is in the hot path for any agent that uses workspace context. With a large knowledge base, the old implementation could push concurrent tool calls into seconds-long queues. The PR author noted that end-to-end latency for memory tool calls dropped from **8–30 seconds** to around **2 seconds** in their setup — and the remaining 2 seconds is now dominated by merge/MMR/decay post-processing, a separate optimization target.\n\nFor users running OpenClaw against large codebases, documentation corpora, or long-running projects with extensive memory accumulation, this is a meaningful quality-of-life improvement that arrives without any action required on your part.\n\n## When Does This Land?\n\nThe PR merged into `main` on April 23, 2026. It will ship in the next versioned release. If you're tracking `main` directly, you already have it.\n\n---\n\n*PR [#69680](https://github.com/openclaw/openclaw/pull/69680) · merged April 23, 2026 · contributor: [@aalekh-sarvam](https://github.com/aalekh-sarvam)*",
      "content_html": "<p>If memory search in your OpenClaw gateway has ever felt sluggish — especially on large workspaces with thousands of chunks — a freshly merged pull request just made it roughly 190 times faster. No configuration changes, no schema migration, no reindexing required.</p><h2>What Changed</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/69680\">#69680</a> by contributor <strong>aalekh-sarvam</strong> replaces the <code>searchVector</code> function's SQL query with sqlite-vec's native <strong>KNN (K-Nearest Neighbor) operator</strong>. Previously, every vector search did a full table scan — computing cosine distance against every stored chunk before sorting results. The new approach lets sqlite-vec's <code>vec0</code> index walk shards directly, only computing final cosine scores on the candidates that actually matter.</p><p>The benchmark numbers are stark:</p><p>| Pattern | Time per query |<br />|---|---|<br />| Before (full table scan) | ~8,490 ms |<br />| After (sqlite-vec KNN) | ~50 ms |<br />| Speedup | <strong>~190×</strong> |</p><p>This was measured against a real 10,827-chunk workspace using 4096-dimensional Qwen3-Embedding-8B embeddings — a heavy, realistic workload.</p><h2>Why It Wasn't a Simple Fix</h2><p>The naive approach — just switching to sqlite-vec's <code>v.distance</code> field for ordering — actually breaks results entirely. Here's why: sqlite-vec creates <code>chunks_vec</code> tables with <strong>L2 distance</strong> by default, not cosine distance. Using <code>v.distance</code> directly produces values that can exceed 1, which causes <code>score = 1 - dist</code> to go negative. The downstream <code>minScore</code> filter then drops every result silently.</p><p>The correct fix is more surgical: use <code>MATCH ? AND k = ?</code> solely for <strong>candidate selection</strong> (where the speedup lives), and keep <code>vec_distance_cosine()</code> in the <code>SELECT</code> for the <strong>score</strong>, preserving the existing cosine semantics exactly. The query vector is bound twice — once for the KNN match, once for the cosine calculation — but the result set and ordering are identical to the old implementation.</p><h2>What Stays the Same</h2><ul><li>The fallback path (for installs without sqlite-vec) is untouched</li><li>Existing schemas don't need migration or reindexing</li><li>Score range and ordering behavior are semantically identical to before</li><li>The fix targets the <code>extensions: memory-core</code> label — it's isolated to the memory search layer</li></ul><h2>Why This Matters</h2><p>Memory search is in the hot path for any agent that uses workspace context. With a large knowledge base, the old implementation could push concurrent tool calls into seconds-long queues. The PR author noted that end-to-end latency for memory tool calls dropped from <strong>8–30 seconds</strong> to around <strong>2 seconds</strong> in their setup — and the remaining 2 seconds is now dominated by merge/MMR/decay post-processing, a separate optimization target.</p><p>For users running OpenClaw against large codebases, documentation corpora, or long-running projects with extensive memory accumulation, this is a meaningful quality-of-life improvement that arrives without any action required on your part.</p><h2>When Does This Land?</h2><p>The PR merged into <code>main</code> on April 23, 2026. It will ship in the next versioned release. If you're tracking <code>main</code> directly, you already have it.</p><p>---</p><p><em>PR <a href=\"https://github.com/openclaw/openclaw/pull/69680\">#69680</a> · merged April 23, 2026 · contributor: <a href=\"https://github.com/aalekh-sarvam\">@aalekh-sarvam</a></em></p>",
      "date_published": "2026-04-23T08:00:00.000Z",
      "date_modified": "2026-04-23T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-memory-search-190x-speedup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-22-hn-adoption-debate/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-22-hn-adoption-debate/",
      "title": "OpenClaw Adoption Gap: The Debate Dividing HN This Week",
      "summary": "A Hacker News thread dissects the gap between OpenClaw's 247K GitHub stars and 35K top-skill installs — and asks whether the hype outpaces real builder adoption.",
      "content_text": "An [Ask HN thread posted this week](https://news.ycombinator.com/item?id=47859207) is asking a question that's been quietly circulating in OpenClaw communities for a while: do the project's headline numbers actually reflect what's happening on the ground?\n\nThe post, titled *\"OpenClaw stats don't add up,\"* walks through a set of observations that are hard to dismiss:\n\n## The Numbers\n\n- **247K GitHub stars** — a figure that made global headlines and drove what commenters are calling the \"lobster trade\" (stock market rallies in publicly-listed companies that announce OpenClaw integrations)\n- **35K installs** on the most-downloaded skill in ClawHub — the marketplace number, not the star number\n- **Most popular skills** are utility connectors: Gmail, web search, Obsidian, Home Assistant — \"things a dozen other tools already do,\" in the poster's words\n- **Dual monetization friction**: users pay both a monthly subscription *and* per-API-call fees, which the poster argues filters the audience toward experimenters rather than production builders\n- **OpenClawRobotics** — a community site for applying OpenClaw to robotics — appears to be abandoned, with a broken signup form\n\nThe original poster drew a direct line between managed hosting becoming mainstream (\"same tier as WordPress\") and late-cycle behavior: \"Infrastructure providers commoditize projects when novelty has passed and recurring revenue becomes the play.\"\n\n## What the Comments Say\n\nThe thread generated genuine discussion. Three angles emerged:\n\n**\"The stars are marketing, not adoption.\"**  \nOne commenter noted that almost no one outside of hosting providers appears to be making money on OpenClaw: \"That is why the OpenClaw hype exists — hosting providers need the stars to justify their pricing.\" This echoes the broader pattern in open-source: GitHub stars reflect cultural momentum, not paying customers.\n\n**\"The billing model filters for curiosity.\"**  \nAnother commenter pointed directly at the dual billing structure: \"Free trials generate the stars, but charging both monthly and per-call fees filters for experimenters over builders. People explore but don't commit to production. Prob why most installs end up as pedestrian connectors.\" This is consistent with skill install patterns skewing toward low-commitment utilities.\n\n**\"It's still early for specialized verticals.\"**  \nA third perspective: the abandoned robotics community and sparse specialized skill coverage suggest OpenClaw is being measured too early for domain-specific use cases. The core product is maturing, but the ecosystem for serious vertical applications is still forming.\n\n## The \"Lobster Trade\" Context\n\nThe post also surfaced the scale of government-backed speculation around OpenClaw. Shenzhen is reportedly offering grants of up to $1.4 million for OpenClaw-based one-person companies; Wuxi has announced grants up to $730K. These programs have fueled what the poster describes as a stock market \"lobster trade\" — where Chinese-listed companies announcing OpenClaw integrations see their shares jump regardless of underlying product traction.\n\nThis creates a peculiar dynamic: OpenClaw's GitHub star count is genuinely meaningful as a signal of developer interest, but it's being amplified by a financial ecosystem that has strong incentives to associate with the brand, independent of whether real software ships.\n\n## What to Make of It\n\nThe gap between stars and installs is real — but context matters. GitHub stars are earned at different lifecycle stages for different users. Some stars come from people who ran the quickstart once and liked it. Some come from organizations doing vendor evaluation. Many come from developers who intend to build something eventually.\n\nThe more relevant signal is probably the 35K installs on the top skill: that represents users who configured a working gateway, connected it to a messaging platform, and trusted it enough to install additional capabilities. That's not a trivial bar. 35K is a meaningful number for an infrastructure project with a non-trivial setup process.\n\nWhether it's \"enough\" depends on what you expected from 247K stars. The honest answer is: it depends on what kind of project you think OpenClaw is. A developer playground, a production automation platform, or a foundation layer for AI-native workflows? The answer is probably all three — with very different adoption curves for each.\n\nThe [full HN thread](https://news.ycombinator.com/item?id=47859207) is worth reading in full.\n\n---\n\n*Source: [Ask HN: OpenClaw stats don't add up](https://news.ycombinator.com/item?id=47859207) (9 points, April 22, 2026)*",
      "content_html": "<p>An <a href=\"https://news.ycombinator.com/item?id=47859207\">Ask HN thread posted this week</a> is asking a question that's been quietly circulating in OpenClaw communities for a while: do the project's headline numbers actually reflect what's happening on the ground?</p><p>The post, titled <em>\"OpenClaw stats don't add up,\"</em> walks through a set of observations that are hard to dismiss:</p><h2>The Numbers</h2><ul><li><strong>247K GitHub stars</strong> — a figure that made global headlines and drove what commenters are calling the \"lobster trade\" (stock market rallies in publicly-listed companies that announce OpenClaw integrations)</li><li><strong>35K installs</strong> on the most-downloaded skill in ClawHub — the marketplace number, not the star number</li><li><strong>Most popular skills</strong> are utility connectors: Gmail, web search, Obsidian, Home Assistant — \"things a dozen other tools already do,\" in the poster's words</li><li><strong>Dual monetization friction</strong>: users pay both a monthly subscription <em>and</em> per-API-call fees, which the poster argues filters the audience toward experimenters rather than production builders</li><li><strong>OpenClawRobotics</strong> — a community site for applying OpenClaw to robotics — appears to be abandoned, with a broken signup form</li></ul><p>The original poster drew a direct line between managed hosting becoming mainstream (\"same tier as WordPress\") and late-cycle behavior: \"Infrastructure providers commoditize projects when novelty has passed and recurring revenue becomes the play.\"</p><h2>What the Comments Say</h2><p>The thread generated genuine discussion. Three angles emerged:</p><p><strong>\"The stars are marketing, not adoption.\"</strong>  <br />One commenter noted that almost no one outside of hosting providers appears to be making money on OpenClaw: \"That is why the OpenClaw hype exists — hosting providers need the stars to justify their pricing.\" This echoes the broader pattern in open-source: GitHub stars reflect cultural momentum, not paying customers.</p><p><strong>\"The billing model filters for curiosity.\"</strong>  <br />Another commenter pointed directly at the dual billing structure: \"Free trials generate the stars, but charging both monthly and per-call fees filters for experimenters over builders. People explore but don't commit to production. Prob why most installs end up as pedestrian connectors.\" This is consistent with skill install patterns skewing toward low-commitment utilities.</p><p><strong>\"It's still early for specialized verticals.\"</strong>  <br />A third perspective: the abandoned robotics community and sparse specialized skill coverage suggest OpenClaw is being measured too early for domain-specific use cases. The core product is maturing, but the ecosystem for serious vertical applications is still forming.</p><h2>The \"Lobster Trade\" Context</h2><p>The post also surfaced the scale of government-backed speculation around OpenClaw. Shenzhen is reportedly offering grants of up to $1.4 million for OpenClaw-based one-person companies; Wuxi has announced grants up to $730K. These programs have fueled what the poster describes as a stock market \"lobster trade\" — where Chinese-listed companies announcing OpenClaw integrations see their shares jump regardless of underlying product traction.</p><p>This creates a peculiar dynamic: OpenClaw's GitHub star count is genuinely meaningful as a signal of developer interest, but it's being amplified by a financial ecosystem that has strong incentives to associate with the brand, independent of whether real software ships.</p><h2>What to Make of It</h2><p>The gap between stars and installs is real — but context matters. GitHub stars are earned at different lifecycle stages for different users. Some stars come from people who ran the quickstart once and liked it. Some come from organizations doing vendor evaluation. Many come from developers who intend to build something eventually.</p><p>The more relevant signal is probably the 35K installs on the top skill: that represents users who configured a working gateway, connected it to a messaging platform, and trusted it enough to install additional capabilities. That's not a trivial bar. 35K is a meaningful number for an infrastructure project with a non-trivial setup process.</p><p>Whether it's \"enough\" depends on what you expected from 247K stars. The honest answer is: it depends on what kind of project you think OpenClaw is. A developer playground, a production automation platform, or a foundation layer for AI-native workflows? The answer is probably all three — with very different adoption curves for each.</p><p>The <a href=\"https://news.ycombinator.com/item?id=47859207\">full HN thread</a> is worth reading in full.</p><p>---</p><p><em>Source: <a href=\"https://news.ycombinator.com/item?id=47859207\">Ask HN: OpenClaw stats don't add up</a> (9 points, April 22, 2026)</em></p>",
      "date_published": "2026-04-22T23:10:00.000Z",
      "date_modified": "2026-04-22T23:10:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-22-hn-adoption-debate.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-22-security-scale-aie-talk/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-22-security-scale-aie-talk/",
      "title": "OpenClaw at Scale: 60x More Security Reports Than curl",
      "summary": "At AIE 2026, Peter Steinberger delivered a sober engineering assessment: OpenClaw faces 60x more security incidents than curl, with an estimated 20% of skill submissions flagged as malicious.",
      "content_text": "While Peter Steinberger's TED talk this week told the inspiring origin story of OpenClaw to a general audience, a parallel talk at the AIE conference painted a considerably more complicated picture — one that anyone running OpenClaw in production should take seriously.\n\nSpeaking to an engineering audience, Steinberger described the hidden operational cost of maintaining what has become the fastest-growing open-source project in history. The numbers he shared are striking.\n\n## The Security Reality at 247K Stars\n\nOpenClaw now receives roughly **60 times more security incident reports than curl** — a comparison Steinberger made deliberately, given curl's reputation as one of the most widely-deployed network libraries in existence and its well-documented security track record.\n\nThe sheer surface area is part of the problem. OpenClaw's skill ecosystem means third-party code runs inside users' local environments with access to messaging platforms, filesystem tools, and in many cases connected home infrastructure. Every skill is a potential attack vector.\n\nMore troubling: Steinberger estimated that **at least 20% of skill submissions to ClawHub are malicious**. That figure aligns with the ClawHavoc incident reported earlier this month, in which a coordinated campaign of weaponized skills was discovered in the skill marketplace. But the AIE disclosure suggests ClawHavoc was less of an anomaly and more of a visible peak in an ongoing problem.\n\n## What This Means for Self-Hosters\n\nFor users running OpenClaw with community skills installed, this is a useful reminder to treat skill installation the same way you would treat adding an npm package to a production app — meaning: review what you're installing, prefer skills with strong maintenance histories and genuine community engagement, and don't assume ClawHub review processes catch everything.\n\nPractical steps:\n\n- **Audit installed skills.** Run `openclaw skills list` and review anything you haven't actively verified. Remove skills you no longer use.\n- **Watch for unscoped storage keys.** The PR merged today ([#70362](https://github.com/openclaw/openclaw/pull/70362)) patched a medium-severity issue where local user identity was stored in an unscoped localStorage key, allowing identity data to bleed between gateway contexts on the same origin. If you run dev and prod on the same host, update.\n- **Keep gateway logs.** The 2026.4.21 release improved logging for failed provider/model candidates at warn level — useful signal when chasing down compromised skill behavior.\n- **Disable skill auto-updates** if you need stability. Manual review on each update is slower but safer in high-risk deployments.\n\n## The Maintenance Burden\n\nBeyond security, Steinberger's AIE talk touched on the general scaling challenges involved in maintaining a project at this velocity. The sessions/maintenance fix in 2026.4.20 ([#69404](https://github.com/openclaw/openclaw/pull/69404)) — which enforces an entry cap and age prune to prevent cron/executor session backlogs from OOM-ing the gateway — is a direct result of this scale. Real deployments were running out of memory.\n\nThe cron state split in 2026.4.20 ([#63105](https://github.com/openclaw/openclaw/pull/63105)) also reflects operational maturity: separating runtime execution state into `jobs-state.json` so the tracked `jobs.json` stays clean for version control is the kind of change you make when you have users who actually manage their configs in git.\n\n## The Bigger Picture\n\nOpenClaw's security posture is not a crisis — but it is a moving target. The project's community-driven skill ecosystem, which is one of its greatest strengths, is also its largest attack surface. The comparison to curl isn't meant to be alarming; it's meant to calibrate expectations. Steinberger is clearly taking it seriously.\n\nThe full AIE talk is available via the [Latent Space AINews digest](https://www.latent.space/p/ainews-the-two-sides-of-openclaw), alongside the moderated AMA that followed.\n\n---\n\n**Related:**\n- [OpenClaw PR #70362 — Personalize local user identity (security notes)](https://github.com/openclaw/openclaw/pull/70362)\n- [OpenClaw 2026.4.21 Release Notes](https://github.com/openclaw/openclaw/releases/tag/v2026.4.21)\n- [Latent Space: The Two Sides of OpenClaw](https://www.latent.space/p/ainews-the-two-sides-of-openclaw)",
      "content_html": "<p>While Peter Steinberger's TED talk this week told the inspiring origin story of OpenClaw to a general audience, a parallel talk at the AIE conference painted a considerably more complicated picture — one that anyone running OpenClaw in production should take seriously.</p><p>Speaking to an engineering audience, Steinberger described the hidden operational cost of maintaining what has become the fastest-growing open-source project in history. The numbers he shared are striking.</p><h2>The Security Reality at 247K Stars</h2><p>OpenClaw now receives roughly <strong>60 times more security incident reports than curl</strong> — a comparison Steinberger made deliberately, given curl's reputation as one of the most widely-deployed network libraries in existence and its well-documented security track record.</p><p>The sheer surface area is part of the problem. OpenClaw's skill ecosystem means third-party code runs inside users' local environments with access to messaging platforms, filesystem tools, and in many cases connected home infrastructure. Every skill is a potential attack vector.</p><p>More troubling: Steinberger estimated that <strong>at least 20% of skill submissions to ClawHub are malicious</strong>. That figure aligns with the ClawHavoc incident reported earlier this month, in which a coordinated campaign of weaponized skills was discovered in the skill marketplace. But the AIE disclosure suggests ClawHavoc was less of an anomaly and more of a visible peak in an ongoing problem.</p><h2>What This Means for Self-Hosters</h2><p>For users running OpenClaw with community skills installed, this is a useful reminder to treat skill installation the same way you would treat adding an npm package to a production app — meaning: review what you're installing, prefer skills with strong maintenance histories and genuine community engagement, and don't assume ClawHub review processes catch everything.</p><p>Practical steps:</p><ul><li><strong>Audit installed skills.</strong> Run <code>openclaw skills list</code> and review anything you haven't actively verified. Remove skills you no longer use.</li><li><strong>Watch for unscoped storage keys.</strong> The PR merged today (<a href=\"https://github.com/openclaw/openclaw/pull/70362\">#70362</a>) patched a medium-severity issue where local user identity was stored in an unscoped localStorage key, allowing identity data to bleed between gateway contexts on the same origin. If you run dev and prod on the same host, update.</li><li><strong>Keep gateway logs.</strong> The 2026.4.21 release improved logging for failed provider/model candidates at warn level — useful signal when chasing down compromised skill behavior.</li><li><strong>Disable skill auto-updates</strong> if you need stability. Manual review on each update is slower but safer in high-risk deployments.</li></ul><h2>The Maintenance Burden</h2><p>Beyond security, Steinberger's AIE talk touched on the general scaling challenges involved in maintaining a project at this velocity. The sessions/maintenance fix in 2026.4.20 (<a href=\"https://github.com/openclaw/openclaw/pull/69404\">#69404</a>) — which enforces an entry cap and age prune to prevent cron/executor session backlogs from OOM-ing the gateway — is a direct result of this scale. Real deployments were running out of memory.</p><p>The cron state split in 2026.4.20 (<a href=\"https://github.com/openclaw/openclaw/pull/63105\">#63105</a>) also reflects operational maturity: separating runtime execution state into <code>jobs-state.json</code> so the tracked <code>jobs.json</code> stays clean for version control is the kind of change you make when you have users who actually manage their configs in git.</p><h2>The Bigger Picture</h2><p>OpenClaw's security posture is not a crisis — but it is a moving target. The project's community-driven skill ecosystem, which is one of its greatest strengths, is also its largest attack surface. The comparison to curl isn't meant to be alarming; it's meant to calibrate expectations. Steinberger is clearly taking it seriously.</p><p>The full AIE talk is available via the <a href=\"https://www.latent.space/p/ainews-the-two-sides-of-openclaw\">Latent Space AINews digest</a>, alongside the moderated AMA that followed.</p><p>---</p><ul><li><strong>Related:</strong></li><li><a href=\"https://github.com/openclaw/openclaw/pull/70362\">OpenClaw PR #70362 — Personalize local user identity (security notes)</a></li><li><a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.21\">OpenClaw 2026.4.21 Release Notes</a></li><li><a href=\"https://www.latent.space/p/ainews-the-two-sides-of-openclaw\">Latent Space: The Two Sides of OpenClaw</a></li></ul>",
      "date_published": "2026-04-22T23:05:00.000Z",
      "date_modified": "2026-04-22T23:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-22-security-scale-aie-talk.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-22-peter-steinberger-ted-talk/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-22-peter-steinberger-ted-talk/",
      "title": "Peter Steinberger Takes OpenClaw to the TED Stage",
      "summary": "OpenClaw creator Peter Steinberger gave his first TED talk this week, telling the origin story of the fastest-growing open-source AI agent project in history.",
      "content_text": "It was already a big week for AI infrastructure, but the moment that stood out was Peter Steinberger stepping onto a TED stage to tell the story of OpenClaw to a general audience. The talk — titled *\"How I Created OpenClaw, the Breakthrough AI Agent\"* — is now live on the TED website and on YouTube, and it marks the kind of cultural milestone that a developer tool rarely reaches.\n\n## The TED Talk: Recapping the Highs\n\nThe TED presentation ([watch here](https://www.youtube.com/watch?v=js1dbmDIYmo)) takes a deliberately accessible approach. Steinberger walks through OpenClaw's origins and the arc of the project — from a personal side project to what has become the fastest-growing open-source project in recorded history, currently sitting at 247K GitHub stars.\n\nAimed at a general audience unfamiliar with the stack, the talk is less about the technical architecture and more about the *why*: why a personal AI agent runtime matters, how OpenClaw gave non-technical users access to AI tooling in their own messaging apps and homes, and what it means to give people control over their AI infrastructure.\n\nIf you've been deep in the OpenClaw ecosystem for months, much of this will feel like a recap. If you have friends or family curious about what you've been running on your home server, this is probably the best 18-minute primer that exists.\n\n## Also This Week on YouTube\n\nWednesday's YouTube roundup turns up a strong batch of community videos. Several stood out:\n\n- **\"OpenClaw 4.20 Just Changed AI Agents Forever\"** — a deep-dive review of the 2026.4.20 release, covering the new Sessions/Maintenance memory cap that prevents OOM on large cron backlogs, Moonshot Kimi K2.6 defaults, and the improved agent system prompts.\n\n- **\"Use OpenClaw With Your Claude Subscription Again\"** — a tutorial covering the restored Claude OAuth flow after recent provider changes. Relevant for anyone who hit authentication issues in the past few weeks.\n\n- **\"OpenClaw stressed me out (308K GitHub stars)\"** — a candid personal take on the learning curve and community pressure, particularly timely given the parallel discussion on Hacker News this week about whether the project's star count reflects real-world adoption.\n\n- **\"The wild rise of OpenClaw...\"** — a retrospective covering the project's trajectory from its early GitHub days through the current \"lobster trade\" era of stock speculation and government grants.\n\n## Why This Moment Matters\n\nThe convergence happening this week is unusual: OpenClaw's creator doing a TED talk for a mainstream audience while, simultaneously, a serious engineering discussion at AIE is digging into the unglamorous side of the project's scale (see [our separate coverage of that talk](/posts/openclaw-2026-4-22-security-scale-aie-talk/)). The public narrative and the practitioner reality are running in parallel — and both are worth watching.\n\nFor OpenClaw users, the TED talk is a useful cultural artifact. It's the version of the story you can share with anyone. For engineers running OpenClaw in production, this week's AIE disclosures are arguably more useful reading.\n\nBoth are linked below.\n\n---\n\n**Links:**\n- [TED Talk: \"How I Created OpenClaw, the Breakthrough AI Agent\"](https://www.ted.com/talks/peter_steinberger_how_i_created_openclaw_the_breakthrough_ai_agent)\n- [YouTube: Peter Steinberger TED Talk](https://www.youtube.com/watch?v=js1dbmDIYmo)\n- [AINews: \"The Two Sides of OpenClaw\" — Latent Space](https://www.latent.space/p/ainews-the-two-sides-of-openclaw)",
      "content_html": "<p>It was already a big week for AI infrastructure, but the moment that stood out was Peter Steinberger stepping onto a TED stage to tell the story of OpenClaw to a general audience. The talk — titled <em>\"How I Created OpenClaw, the Breakthrough AI Agent\"</em> — is now live on the TED website and on YouTube, and it marks the kind of cultural milestone that a developer tool rarely reaches.</p><h2>The TED Talk: Recapping the Highs</h2><p>The TED presentation (<a href=\"https://www.youtube.com/watch?v=js1dbmDIYmo\">watch here</a>) takes a deliberately accessible approach. Steinberger walks through OpenClaw's origins and the arc of the project — from a personal side project to what has become the fastest-growing open-source project in recorded history, currently sitting at 247K GitHub stars.</p><p>Aimed at a general audience unfamiliar with the stack, the talk is less about the technical architecture and more about the <em>why</em>: why a personal AI agent runtime matters, how OpenClaw gave non-technical users access to AI tooling in their own messaging apps and homes, and what it means to give people control over their AI infrastructure.</p><p>If you've been deep in the OpenClaw ecosystem for months, much of this will feel like a recap. If you have friends or family curious about what you've been running on your home server, this is probably the best 18-minute primer that exists.</p><h2>Also This Week on YouTube</h2><p>Wednesday's YouTube roundup turns up a strong batch of community videos. Several stood out:</p><ul><li><strong>\"OpenClaw 4.20 Just Changed AI Agents Forever\"</strong> — a deep-dive review of the 2026.4.20 release, covering the new Sessions/Maintenance memory cap that prevents OOM on large cron backlogs, Moonshot Kimi K2.6 defaults, and the improved agent system prompts.</li></ul><ul><li><strong>\"Use OpenClaw With Your Claude Subscription Again\"</strong> — a tutorial covering the restored Claude OAuth flow after recent provider changes. Relevant for anyone who hit authentication issues in the past few weeks.</li></ul><ul><li><strong>\"OpenClaw stressed me out (308K GitHub stars)\"</strong> — a candid personal take on the learning curve and community pressure, particularly timely given the parallel discussion on Hacker News this week about whether the project's star count reflects real-world adoption.</li></ul><ul><li><strong>\"The wild rise of OpenClaw...\"</strong> — a retrospective covering the project's trajectory from its early GitHub days through the current \"lobster trade\" era of stock speculation and government grants.</li></ul><h2>Why This Moment Matters</h2><p>The convergence happening this week is unusual: OpenClaw's creator doing a TED talk for a mainstream audience while, simultaneously, a serious engineering discussion at AIE is digging into the unglamorous side of the project's scale (see <a href=\"/posts/openclaw-2026-4-22-security-scale-aie-talk/\">our separate coverage of that talk</a>). The public narrative and the practitioner reality are running in parallel — and both are worth watching.</p><p>For OpenClaw users, the TED talk is a useful cultural artifact. It's the version of the story you can share with anyone. For engineers running OpenClaw in production, this week's AIE disclosures are arguably more useful reading.</p><p>Both are linked below.</p><p>---</p><ul><li><strong>Links:</strong></li><li><a href=\"https://www.ted.com/talks/peter_steinberger_how_i_created_openclaw_the_breakthrough_ai_agent\">TED Talk: \"How I Created OpenClaw, the Breakthrough AI Agent\"</a></li><li><a href=\"https://www.youtube.com/watch?v=js1dbmDIYmo\">YouTube: Peter Steinberger TED Talk</a></li><li><a href=\"https://www.latent.space/p/ainews-the-two-sides-of-openclaw\">AINews: \"The Two Sides of OpenClaw\" — Latent Space</a></li></ul>",
      "date_published": "2026-04-22T23:00:00.000Z",
      "date_modified": "2026-04-22T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-22-peter-steinberger-ted-talk.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-22-v2026-4-21/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-22-v2026-4-21/",
      "title": "OpenClaw v2026.4.21: GPT-Image-2 Defaults and Owner Command Security Fix",
      "summary": "OpenClaw v2026.4.21 ships gpt-image-2 as the new default image provider, adds 2K/4K size hints, and patches a permission bypass in owner-only commands.",
      "content_text": "OpenClaw [v2026.4.21](https://github.com/openclaw/openclaw/releases/tag/v2026.4.21) landed early this morning — a focused patch release that updates the default image-generation model, closes a meaningful security gap in owner command enforcement, and sharpens fallback visibility for image providers. Here is what changed and why it matters.\n\n## GPT-Image-2 Is Now the Default\n\nThe bundled image-generation provider and live media smoke tests now default to **gpt-image-2**. Alongside the model bump, OpenClaw advertises newer **2K and 4K size hints** in image-generation docs and tool metadata, so agents can request sharper output without custom configuration.\n\nIf you were already using a pinned model or a different provider, nothing changes for you — the default only applies to installations that were using the bundled provider without an explicit override.\n\n## Owner Command Security Fix\n\nA permission bypass in the auth/commands layer has been patched ([#69774](https://github.com/openclaw/openclaw/pull/69774), thanks **@drobison00**). The previous behavior allowed non-owner senders to reach owner-only commands through a permissive fallback: if `enforceOwnerForCommands=true` was set but `commands.ownerAllowFrom` was left unset, a wildcard `allowFrom` or an empty owner-candidate list was treated as sufficient authorization.\n\nThe fix requires a genuine owner identity match — either an owner-candidate match or internal `operator.admin` — before owner-enforced commands execute. If you run a multi-user or shared gateway and have `enforceOwnerForCommands` enabled, this update is worth applying promptly.\n\n## Better Image Fallback Visibility\n\nFailed provider/model candidates are now **logged at `warn` level** before automatic provider fallback triggers. Previously, if your primary image provider failed silently and a downstream provider succeeded, the gateway log gave no indication that anything went wrong. With this change, OpenAI image failures (and equivalent failures from any provider) will appear in the log even when a later fallback provider produces a result.\n\nThis is especially useful in multi-provider setups where silent fallback can mask configuration problems or quota exhaustion.\n\n## Plugins/Doctor Recovery Improvements\n\nThe `doctor` command can now repair bundled plugin runtime dependencies from doctor paths, allowing packaged installs to recover missing channel or provider dependencies **without running a broad core dependency install**. Useful on constrained environments or setups where dependency resolution is restricted.\n\n## Other Fixes in This Build\n\n- **Slack** — Thread aliases are now preserved in runtime outbound sends, so generic runtime sends stay in the intended Slack thread when the caller supplies `threadTs` ([#62947](https://github.com/openclaw/openclaw/pull/62947), thanks **@bek91**).\n- **Browser** — Invalid `ax` accessibility refs are now rejected immediately in `act` paths rather than waiting for the full browser action timeout ([#69924](https://github.com/openclaw/openclaw/pull/69924), thanks **@Patrick-Erichsen**).\n- **npm install** — The deprecated `node-domexception` chain pulled through Pi/Google runtime dependencies is now suppressed via a root `package.json` override (thanks **@vincentkoc**).\n\n## Upgrading\n\n```bash\nnpm install -g openclaw@latest\n```\n\nOr use the built-in update command:\n\n```bash\nopenclaw update\n```\n\nFull release notes are available on the [GitHub releases page](https://github.com/openclaw/openclaw/releases/tag/v2026.4.21).",
      "content_html": "<p>OpenClaw <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.21\">v2026.4.21</a> landed early this morning — a focused patch release that updates the default image-generation model, closes a meaningful security gap in owner command enforcement, and sharpens fallback visibility for image providers. Here is what changed and why it matters.</p><h2>GPT-Image-2 Is Now the Default</h2><p>The bundled image-generation provider and live media smoke tests now default to <strong>gpt-image-2</strong>. Alongside the model bump, OpenClaw advertises newer <strong>2K and 4K size hints</strong> in image-generation docs and tool metadata, so agents can request sharper output without custom configuration.</p><p>If you were already using a pinned model or a different provider, nothing changes for you — the default only applies to installations that were using the bundled provider without an explicit override.</p><h2>Owner Command Security Fix</h2><p>A permission bypass in the auth/commands layer has been patched (<a href=\"https://github.com/openclaw/openclaw/pull/69774\">#69774</a>, thanks <strong>@drobison00</strong>). The previous behavior allowed non-owner senders to reach owner-only commands through a permissive fallback: if <code>enforceOwnerForCommands=true</code> was set but <code>commands.ownerAllowFrom</code> was left unset, a wildcard <code>allowFrom</code> or an empty owner-candidate list was treated as sufficient authorization.</p><p>The fix requires a genuine owner identity match — either an owner-candidate match or internal <code>operator.admin</code> — before owner-enforced commands execute. If you run a multi-user or shared gateway and have <code>enforceOwnerForCommands</code> enabled, this update is worth applying promptly.</p><h2>Better Image Fallback Visibility</h2><p>Failed provider/model candidates are now <strong>logged at <code>warn</code> level</strong> before automatic provider fallback triggers. Previously, if your primary image provider failed silently and a downstream provider succeeded, the gateway log gave no indication that anything went wrong. With this change, OpenAI image failures (and equivalent failures from any provider) will appear in the log even when a later fallback provider produces a result.</p><p>This is especially useful in multi-provider setups where silent fallback can mask configuration problems or quota exhaustion.</p><h2>Plugins/Doctor Recovery Improvements</h2><p>The <code>doctor</code> command can now repair bundled plugin runtime dependencies from doctor paths, allowing packaged installs to recover missing channel or provider dependencies <strong>without running a broad core dependency install</strong>. Useful on constrained environments or setups where dependency resolution is restricted.</p><h2>Other Fixes in This Build</h2><ul><li><strong>Slack</strong> — Thread aliases are now preserved in runtime outbound sends, so generic runtime sends stay in the intended Slack thread when the caller supplies <code>threadTs</code> (<a href=\"https://github.com/openclaw/openclaw/pull/62947\">#62947</a>, thanks <strong>@bek91</strong>).</li><li><strong>Browser</strong> — Invalid <code>ax</code> accessibility refs are now rejected immediately in <code>act</code> paths rather than waiting for the full browser action timeout (<a href=\"https://github.com/openclaw/openclaw/pull/69924\">#69924</a>, thanks <strong>@Patrick-Erichsen</strong>).</li><li><strong>npm install</strong> — The deprecated <code>node-domexception</code> chain pulled through Pi/Google runtime dependencies is now suppressed via a root <code>package.json</code> override (thanks <strong>@vincentkoc</strong>).</li></ul><h2>Upgrading</h2><p>``<code>bash<br />npm install -g openclaw@latest<br /></code>`<code></p><p>Or use the built-in update command:</p><p></code>`<code>bash<br />openclaw update<br /></code>``</p><p>Full release notes are available on the <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.21\">GitHub releases page</a>.</p>",
      "date_published": "2026-04-22T08:00:00.000Z",
      "date_modified": "2026-04-22T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-22-v2026-4-21.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-21-community-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-21-community-roundup/",
      "title": "OpenClaw Community Roundup: Brex CrabTrap, Palmier Phone Bridge, and More",
      "summary": "Brex open-sourced CrabTrap for company-wide OpenClaw workflows, Palmier connects agents to your phone, and FlirtingBots shipped as a ClawHub skill — a busy Tuesday in the ecosystem.",
      "content_text": "Between a [stable release](/posts/openclaw-2026-4-21-release-2026420) and a [471-point Hacker News thread](/posts/openclaw-2026-4-21-anthropic-claude-cli-allowed), Tuesday was already a full day for OpenClaw news. But the community also shipped several things worth tracking independently.\n\n## Brex Open-Sources CrabTrap\n\nThe most notable ecosystem drop today: Brex — the corporate card and financial services company — published [brexhq/CrabTrap](https://github.com/brexhq/CrabTrap) to GitHub. Per the [Hacker News submission](https://news.ycombinator.com/item?id=47853956), it is \"the founder's open-sourced stack for running the company through OpenClaw.\"\n\nThe repository represents a real-world, production-scale deployment of OpenClaw as a company operating layer — not a demo or a side project. CrabTrap appears to include the agent configuration, workflow definitions, and integration scaffolding Brex has used internally to route company operations through AI agents.\n\nThis is the kind of public artifact the OpenClaw community has been waiting for: a legitimizing, enterprise-grade reference implementation from a well-known company. Expect it to be widely forked and referenced as a starting point for teams wanting to adopt OpenClaw at organizational scale.\n\nThe repository is brand new as of today. Watch it closely — early stars and forks will tell you how much traction it picks up.\n\n## Palmier: AI Agents Meet Your Phone\n\nShow HN: [Palmier](https://github.com/caihongxu/palmier) ([HN thread #47843841](https://news.ycombinator.com/item?id=47843841)) is a local bridge that does two things:\n\n1. Let you **control AI agents running on your computer from your phone**, anywhere\n2. Give your **agents access to your phone** — push notifications, SMS, calendar, contacts, location, and more\n\nOpenClaw is explicitly listed as one of the 15+ agent CLIs Palmier supports out of the box (alongside Claude Code, Gemini CLI, Codex CLI, and Cursor CLI).\n\nThe architecture is local-first and open source. No GCP, no API keys required to get started. There is an optional MCP server endpoint if you want to expose phone capabilities as native MCP tools for your agents; otherwise the phone app/PWA handles it directly.\n\nPalmier is still in alpha with self-described bugs, but the concept lands squarely in territory that OpenClaw users have been exploring: using the gateway not just for chat or automation, but as a persistent agent that can reach into the physical world through devices you carry around.\n\n**What OpenClaw users can do with it:**\n\n- Start OpenClaw tasks from your phone while away from your desk\n- Let your OpenClaw agent send you SMS or push notifications when tasks complete\n- Give agents calendar and location context without manual config syncing\n\nThe GitHub repositories are at [caihongxu/palmier](https://github.com/caihongxu/palmier) and [caihongxu/palmier-android](https://github.com/caihongxu/palmier-android).\n\n## FlirtingBots Launches as a ClawHub Skill\n\nIn the more experimental corner: [FlirtingBots](https://flirtingbots.com/) — which matches people by having their AI agents talk to each other first — shipped as an OpenClaw skill on ClawHub ([HN thread #47848108](https://news.ycombinator.com/item?id=47848108)).\n\nThe concept: your agent evaluates shared interests and compatibility with another person's agent before surfacing a match with icebreakers already prepared. The ClawHub skill means OpenClaw users running their own agent instances can self-host the integration rather than relying on the FlirtingBots cloud service.\n\nIt is early and niche, but it is also one of the first ClawHub skill launches to come with a standalone consumer product attached — an interesting model for skill distribution that blends open-source self-hosting with a hosted commercial version.\n\n## HN Activity Today: Summary\n\nBeyond the top stories, today's Hacker News surface showed OpenClaw mentioned in half a dozen separate threads — from an \"Ask HN: how can I use AI well?\" thread where a developer describes using OpenClaw with Obsidian for knowledge management, to a comment in an \"Is AI a Bubble\" discussion noting that the openclaw.ai landing page was briefly down.\n\nThe volume of organic mentions across unrelated threads is a good signal: OpenClaw has crossed the threshold where it shows up as background context in broader AI conversations, not just in dedicated OpenClaw threads.",
      "content_html": "<p>Between a <a href=\"/posts/openclaw-2026-4-21-release-2026420\">stable release</a> and a <a href=\"/posts/openclaw-2026-4-21-anthropic-claude-cli-allowed\">471-point Hacker News thread</a>, Tuesday was already a full day for OpenClaw news. But the community also shipped several things worth tracking independently.</p><h2>Brex Open-Sources CrabTrap</h2><p>The most notable ecosystem drop today: Brex — the corporate card and financial services company — published <a href=\"https://github.com/brexhq/CrabTrap\">brexhq/CrabTrap</a> to GitHub. Per the <a href=\"https://news.ycombinator.com/item?id=47853956\">Hacker News submission</a>, it is \"the founder's open-sourced stack for running the company through OpenClaw.\"</p><p>The repository represents a real-world, production-scale deployment of OpenClaw as a company operating layer — not a demo or a side project. CrabTrap appears to include the agent configuration, workflow definitions, and integration scaffolding Brex has used internally to route company operations through AI agents.</p><p>This is the kind of public artifact the OpenClaw community has been waiting for: a legitimizing, enterprise-grade reference implementation from a well-known company. Expect it to be widely forked and referenced as a starting point for teams wanting to adopt OpenClaw at organizational scale.</p><p>The repository is brand new as of today. Watch it closely — early stars and forks will tell you how much traction it picks up.</p><h2>Palmier: AI Agents Meet Your Phone</h2><p>Show HN: <a href=\"https://github.com/caihongxu/palmier\">Palmier</a> (<a href=\"https://news.ycombinator.com/item?id=47843841\">HN thread #47843841</a>) is a local bridge that does two things:</p><ol><li>Let you <strong>control AI agents running on your computer from your phone</strong>, anywhere</li><li>Give your <strong>agents access to your phone</strong> — push notifications, SMS, calendar, contacts, location, and more</li></ol><p>OpenClaw is explicitly listed as one of the 15+ agent CLIs Palmier supports out of the box (alongside Claude Code, Gemini CLI, Codex CLI, and Cursor CLI).</p><p>The architecture is local-first and open source. No GCP, no API keys required to get started. There is an optional MCP server endpoint if you want to expose phone capabilities as native MCP tools for your agents; otherwise the phone app/PWA handles it directly.</p><p>Palmier is still in alpha with self-described bugs, but the concept lands squarely in territory that OpenClaw users have been exploring: using the gateway not just for chat or automation, but as a persistent agent that can reach into the physical world through devices you carry around.</p><p><strong>What OpenClaw users can do with it:</strong></p><ul><li>Start OpenClaw tasks from your phone while away from your desk</li><li>Let your OpenClaw agent send you SMS or push notifications when tasks complete</li><li>Give agents calendar and location context without manual config syncing</li></ul><p>The GitHub repositories are at <a href=\"https://github.com/caihongxu/palmier\">caihongxu/palmier</a> and <a href=\"https://github.com/caihongxu/palmier-android\">caihongxu/palmier-android</a>.</p><h2>FlirtingBots Launches as a ClawHub Skill</h2><p>In the more experimental corner: <a href=\"https://flirtingbots.com/\">FlirtingBots</a> — which matches people by having their AI agents talk to each other first — shipped as an OpenClaw skill on ClawHub (<a href=\"https://news.ycombinator.com/item?id=47848108\">HN thread #47848108</a>).</p><p>The concept: your agent evaluates shared interests and compatibility with another person's agent before surfacing a match with icebreakers already prepared. The ClawHub skill means OpenClaw users running their own agent instances can self-host the integration rather than relying on the FlirtingBots cloud service.</p><p>It is early and niche, but it is also one of the first ClawHub skill launches to come with a standalone consumer product attached — an interesting model for skill distribution that blends open-source self-hosting with a hosted commercial version.</p><h2>HN Activity Today: Summary</h2><p>Beyond the top stories, today's Hacker News surface showed OpenClaw mentioned in half a dozen separate threads — from an \"Ask HN: how can I use AI well?\" thread where a developer describes using OpenClaw with Obsidian for knowledge management, to a comment in an \"Is AI a Bubble\" discussion noting that the openclaw.ai landing page was briefly down.</p><p>The volume of organic mentions across unrelated threads is a good signal: OpenClaw has crossed the threshold where it shows up as background context in broader AI conversations, not just in dedicated OpenClaw threads.</p>",
      "date_published": "2026-04-21T21:00:00.000Z",
      "date_modified": "2026-04-21T21:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-21-community-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-21-anthropic-claude-cli-allowed/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-21-anthropic-claude-cli-allowed/",
      "title": "Anthropic Confirms OpenClaw-Style Claude CLI Usage Is Allowed",
      "summary": "A Hacker News post linking to OpenClaw's Anthropic provider docs hit 471 points and 268 comments as users confirmed Anthropic now explicitly permits OpenClaw-style Claude CLI access.",
      "content_text": "A story posted to Hacker News late Tuesday — [\"Anthropic says OpenClaw-style Claude CLI usage is allowed again\"](https://news.ycombinator.com/item?id=47844269) — reached **471 points and 268 comments** within hours, making it one of the most-discussed OpenClaw threads in recent memory. The post links directly to the [OpenClaw Anthropic provider docs](https://docs.openclaw.ai/providers/anthropic), which document how OpenClaw routes requests through locally-installed Claude CLI sessions.\n\n## What Changed\n\nFor months, a gray area existed: Anthropic's Terms of Service had language that some interpreted as restricting automated or \"headless\" Claude usage through third-party tools. OpenClaw's Claude CLI integration — which lets you configure `claude-cli` or `claude-p` as a provider and route agent turns through your existing Claude subscription — sat in that ambiguous zone.\n\nThe HN thread confirms that Anthropic has updated its guidance, explicitly permitting OpenClaw-style usage. The OpenClaw docs page linked in the post describes the setup in detail, and the community response has been largely celebratory: users who had been hedging or self-censoring their Claude CLI configs can now operate with confidence.\n\nSeveral comments in the thread note that this unlocks a meaningful cost path. Claude Max subscribers, in particular, can now run extended OpenClaw sessions against their subscription without hitting separate API billing — the same model that Claude Code users have been operating under.\n\n## What the Docs Say\n\nThe [OpenClaw Anthropic provider page](https://docs.openclaw.ai/providers/anthropic) covers three integration paths:\n\n- **`anthropic` (direct API)** — Standard API key usage, billed per token\n- **`claude-cli` / `claude-p`** — Routes through an installed Claude CLI binary using your existing session; requires Claude Code or Claude CLI to be installed and authenticated\n- **`claude-max`** — Specifically for Claude Max subscribers who want to use subscription capacity for agent turns\n\nThe third path is what the HN discussion centers on. The docs note that Anthropic has confirmed this usage pattern is within their acceptable use policy, citing the same framework that permits Claude Code's headless `-p` flag usage.\n\n## Why It Matters for Self-Hosters\n\nThis matters because OpenClaw's value proposition for many users is *not* paying for separate API access on top of subscriptions they already have. If you are paying for Claude Max to use Claude Code, routing your OpenClaw sessions through that same subscription is a natural extension.\n\nThe earlier ambiguity had a chilling effect. Posts across Reddit, HN, and Discord regularly included hedges like \"not sure if this is allowed\" or \"using this at my own risk.\" With explicit documentation from Anthropic, those qualifications go away — and adoption of the Claude CLI provider path is likely to accelerate.\n\n## The 268-Comment Thread\n\nThe discussion itself is worth reading. Beyond the main question of permission, commenters dig into:\n\n- **What \"allowed\" actually means** in practice for different account tiers\n- **How OpenClaw's request routing compares to Claude Code's `-p` path** (they are architecturally similar)\n- **Whether Anthropic could reverse this position** and what signals to watch for\n- **Comparisons to OpenBridge**, a separate Show HN that surfaced on the same day ([#47845540](https://news.ycombinator.com/item?id=47845540)), which takes a more aggressive approach to web-session scraping that some commenters view as risker territory\n\nThe consensus in the thread is that OpenClaw's approach — working through the official CLI binary with an authenticated session — is meaningfully different from browser-session scraping, and that Anthropic's guidance reflects that distinction.\n\n## Setting Up the Claude CLI Provider\n\nIf you want to try this, the setup is straightforward:\n\n```bash\n# Install Claude CLI if you haven't\nnpm install -g @anthropic-ai/claude-code\n\n# In your OpenClaw config, add:\n# providers:\n#   - id: claude-cli\n#     type: claude-cli\n```\n\nFull configuration options, including how to set a specific Claude binary path and configure timeout behavior, are in the [OpenClaw Anthropic provider docs](https://docs.openclaw.ai/providers/anthropic).",
      "content_html": "<p>A story posted to Hacker News late Tuesday — <a href=\"https://news.ycombinator.com/item?id=47844269\">\"Anthropic says OpenClaw-style Claude CLI usage is allowed again\"</a> — reached <strong>471 points and 268 comments</strong> within hours, making it one of the most-discussed OpenClaw threads in recent memory. The post links directly to the <a href=\"https://docs.openclaw.ai/providers/anthropic\">OpenClaw Anthropic provider docs</a>, which document how OpenClaw routes requests through locally-installed Claude CLI sessions.</p><h2>What Changed</h2><p>For months, a gray area existed: Anthropic's Terms of Service had language that some interpreted as restricting automated or \"headless\" Claude usage through third-party tools. OpenClaw's Claude CLI integration — which lets you configure <code>claude-cli</code> or <code>claude-p</code> as a provider and route agent turns through your existing Claude subscription — sat in that ambiguous zone.</p><p>The HN thread confirms that Anthropic has updated its guidance, explicitly permitting OpenClaw-style usage. The OpenClaw docs page linked in the post describes the setup in detail, and the community response has been largely celebratory: users who had been hedging or self-censoring their Claude CLI configs can now operate with confidence.</p><p>Several comments in the thread note that this unlocks a meaningful cost path. Claude Max subscribers, in particular, can now run extended OpenClaw sessions against their subscription without hitting separate API billing — the same model that Claude Code users have been operating under.</p><h2>What the Docs Say</h2><p>The <a href=\"https://docs.openclaw.ai/providers/anthropic\">OpenClaw Anthropic provider page</a> covers three integration paths:</p><ul><li><strong><code>anthropic</code> (direct API)</strong> — Standard API key usage, billed per token</li><li><strong><code>claude-cli</code> / <code>claude-p</code></strong> — Routes through an installed Claude CLI binary using your existing session; requires Claude Code or Claude CLI to be installed and authenticated</li><li><strong><code>claude-max</code></strong> — Specifically for Claude Max subscribers who want to use subscription capacity for agent turns</li></ul><p>The third path is what the HN discussion centers on. The docs note that Anthropic has confirmed this usage pattern is within their acceptable use policy, citing the same framework that permits Claude Code's headless <code>-p</code> flag usage.</p><h2>Why It Matters for Self-Hosters</h2><p>This matters because OpenClaw's value proposition for many users is <em>not</em> paying for separate API access on top of subscriptions they already have. If you are paying for Claude Max to use Claude Code, routing your OpenClaw sessions through that same subscription is a natural extension.</p><p>The earlier ambiguity had a chilling effect. Posts across Reddit, HN, and Discord regularly included hedges like \"not sure if this is allowed\" or \"using this at my own risk.\" With explicit documentation from Anthropic, those qualifications go away — and adoption of the Claude CLI provider path is likely to accelerate.</p><h2>The 268-Comment Thread</h2><p>The discussion itself is worth reading. Beyond the main question of permission, commenters dig into:</p><ul><li><strong>What \"allowed\" actually means</strong> in practice for different account tiers</li><li><strong>How OpenClaw's request routing compares to Claude Code's <code>-p</code> path</strong> (they are architecturally similar)</li><li><strong>Whether Anthropic could reverse this position</strong> and what signals to watch for</li><li><strong>Comparisons to OpenBridge</strong>, a separate Show HN that surfaced on the same day (<a href=\"https://news.ycombinator.com/item?id=47845540\">#47845540</a>), which takes a more aggressive approach to web-session scraping that some commenters view as risker territory</li></ul><p>The consensus in the thread is that OpenClaw's approach — working through the official CLI binary with an authenticated session — is meaningfully different from browser-session scraping, and that Anthropic's guidance reflects that distinction.</p><h2>Setting Up the Claude CLI Provider</h2><p>If you want to try this, the setup is straightforward:</p><p>``<code>bash<br /><h1>Install Claude CLI if you haven't</h1><br />npm install -g @anthropic-ai/claude-code</p><h1>In your OpenClaw config, add:</h1>\n<h1>providers:</h1>\n<h1>- id: claude-cli</h1>\n<h1>type: claude-cli</h1>\n</code>``<p>Full configuration options, including how to set a specific Claude binary path and configure timeout behavior, are in the <a href=\"https://docs.openclaw.ai/providers/anthropic\">OpenClaw Anthropic provider docs</a>.</p>",
      "date_published": "2026-04-21T20:00:00.000Z",
      "date_modified": "2026-04-21T20:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-21-anthropic-claude-cli-allowed.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-21-release-2026420/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-21-release-2026420/",
      "title": "OpenClaw 2026.4.20: Kimi K2.6 Default, OOM Guards, Mattermost Streaming",
      "summary": "OpenClaw 2026.4.20 ships with Moonshot Kimi K2.6 as the new default, session backlog OOM protection, live Mattermost draft streaming, and BlueBubbles group system prompts.",
      "content_text": "OpenClaw 2026.4.20 landed at 19:19 UTC today — a broad quality release that touches model defaults, session resilience, channel integrations, and cron internals. The morning build already shipped [three security fixes](/posts/openclaw-2026-4-21-security-triple-patch) and a [setup wizard polish pass](/posts/openclaw-2026-4-21-setup-wizard-ux); this post covers everything else in the stable release.\n\n## Moonshot Kimi K2.6 Is Now the Default\n\nThe biggest UX change in this release is a default model swap. Both PRs [#69477](https://github.com/openclaw/openclaw/pull/69477) (surface routing) and [#68816](https://github.com/openclaw/openclaw/pull/68816) (thinking config) land together to make **Moonshot Kimi K2.6** the default for the bundled Moonshot integration — including setup, web search, and media-understanding surfaces. Kimi K2.5 stays available for anyone who needs backward compatibility.\n\nAlongside the routing change, the release adds support for `thinking.keep = \"all\"` on K2.6 specifically, while stripping unsupported thinking flags for other Moonshot models and for requests where a pinned `tool_choice` disables reasoning entirely. The result is a cleaner reasoning experience out of the box without requiring manual config.\n\nPR [#67605](https://github.com/openclaw/openclaw/pull/67605) completes the picture by adding **tiered model pricing** support so cached catalogs and configured models can carry per-tier cost data. Bundled Kimi K2.6 and K2.5 cost estimates now flow into token-usage reports — useful for anyone tracking spend across sessions.\n\n## Session Backlog OOM Protection\n\nPR [#69404](https://github.com/openclaw/openclaw/pull/69404) from [@bobrenze-bot](https://github.com/bobrenze-bot) addresses a real-world failure mode: gateways that run heavy cron or executor workloads could accumulate massive session stores over time, eventually triggering out-of-memory crashes before the write path had a chance to prune them.\n\nThe fix enforces the built-in entry cap and age prune **by default** (not just when explicitly configured) and adds a load-time prune that catches oversized stores on startup. Users who have been running long-lived gateways with many automated sessions should see more predictable memory footprints starting with this release.\n\n## Cron State Split: jobs-state.json\n\nPR [#63105](https://github.com/openclaw/openclaw/pull/63105) from [@Feelw00](https://github.com/Feelw00) is a quiet but useful architectural change: runtime execution state is now split into a separate **`jobs-state.json`** file, leaving `jobs.json` stable and appropriate for git-tracking.\n\nPreviously, committing your cron job definitions to version control meant dealing with runtime state noise mixed into the same file. The split keeps job definitions clean and declarative while letting the gateway write execution state wherever it likes.\n\n## Mattermost Gets Live Draft Streaming\n\nPR [#47838](https://github.com/openclaw/openclaw/pull/47838) from [@ninjaa](https://github.com/ninjaa) brings OpenClaw's streaming reply model to Mattermost. Thinking blocks, tool activity, and partial reply text now flow into a single **draft preview post** that finalizes in place when the turn completes safely.\n\nThis matches the pattern that Discord and Slack users have had for a while — watching the agent \"type\" in real time rather than waiting for a completed message to appear. For Mattermost deployments used in team workflows (especially engineering bots that run tool-heavy tasks), this is a meaningful UX improvement.\n\n## BlueBubbles: Per-Group System Prompts\n\nPR [#69198](https://github.com/openclaw/openclaw/pull/69198) from [@omarshahine](https://github.com/omarshahine) closes [#60665](https://github.com/openclaw/openclaw/issues/60665) by forwarding per-group `systemPrompt` config into inbound `GroupSystemPrompt` context on every turn.\n\nIn practice this means BlueBubbles users can configure **different behavioral instructions per chat group** — for example, different reply styles for a family group versus a work team — with wildcard `\"*\"` support as a fallback for groups not explicitly named. The config follows the existing `requireMention` pattern so it should feel natural for anyone already using per-channel settings.\n\n## Agents/Compaction Notices\n\nPR [#67830](https://github.com/openclaw/openclaw/pull/67830) from [@feniix](https://github.com/feniix) adds opt-in start and completion notices during context compaction. Long sessions that trigger background compaction now surface a visible signal rather than silently compacting mid-conversation. Useful for debugging and for users who want to understand when their context window is being managed.\n\n## Plugin/Task Detached Runtime Contract\n\nPR [#68915](https://github.com/openclaw/openclaw/pull/68915) from [@mbelinky](https://github.com/mbelinky) formalizes the extension point for plugin-owned task lifecycle. Plugins can now register as detached runtime owners — responsible for their own task cancellation and state transitions — without reaching into core task internals. This is the kind of boundary that makes third-party extensions more stable across releases.\n\n## Performance: Log Sanitization\n\nPR [#67205](https://github.com/openclaw/openclaw/pull/67205) from [@bulutmuf](https://github.com/bulutmuf) replaces the iterative character-stripping loop in `sanitizeForLog()` with a single regex pass. The functional behavior is unchanged, but high-volume deployments that log aggressively will see measurably lower overhead on the logging path.\n\n## Getting This Release\n\n```bash\nnpm install -g openclaw@latest\n# or update an existing install\nopenclaw update\n```\n\nThe full changelog is available at [github.com/openclaw/openclaw/releases/tag/v2026.4.20](https://github.com/openclaw/openclaw/releases/tag/v2026.4.20).",
      "content_html": "<p>OpenClaw 2026.4.20 landed at 19:19 UTC today — a broad quality release that touches model defaults, session resilience, channel integrations, and cron internals. The morning build already shipped <a href=\"/posts/openclaw-2026-4-21-security-triple-patch\">three security fixes</a> and a <a href=\"/posts/openclaw-2026-4-21-setup-wizard-ux\">setup wizard polish pass</a>; this post covers everything else in the stable release.</p><h2>Moonshot Kimi K2.6 Is Now the Default</h2><p>The biggest UX change in this release is a default model swap. Both PRs <a href=\"https://github.com/openclaw/openclaw/pull/69477\">#69477</a> (surface routing) and <a href=\"https://github.com/openclaw/openclaw/pull/68816\">#68816</a> (thinking config) land together to make <strong>Moonshot Kimi K2.6</strong> the default for the bundled Moonshot integration — including setup, web search, and media-understanding surfaces. Kimi K2.5 stays available for anyone who needs backward compatibility.</p><p>Alongside the routing change, the release adds support for <code>thinking.keep = \"all\"</code> on K2.6 specifically, while stripping unsupported thinking flags for other Moonshot models and for requests where a pinned <code>tool_choice</code> disables reasoning entirely. The result is a cleaner reasoning experience out of the box without requiring manual config.</p><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/67605\">#67605</a> completes the picture by adding <strong>tiered model pricing</strong> support so cached catalogs and configured models can carry per-tier cost data. Bundled Kimi K2.6 and K2.5 cost estimates now flow into token-usage reports — useful for anyone tracking spend across sessions.</p><h2>Session Backlog OOM Protection</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/69404\">#69404</a> from <a href=\"https://github.com/bobrenze-bot\">@bobrenze-bot</a> addresses a real-world failure mode: gateways that run heavy cron or executor workloads could accumulate massive session stores over time, eventually triggering out-of-memory crashes before the write path had a chance to prune them.</p><p>The fix enforces the built-in entry cap and age prune <strong>by default</strong> (not just when explicitly configured) and adds a load-time prune that catches oversized stores on startup. Users who have been running long-lived gateways with many automated sessions should see more predictable memory footprints starting with this release.</p><h2>Cron State Split: jobs-state.json</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/63105\">#63105</a> from <a href=\"https://github.com/Feelw00\">@Feelw00</a> is a quiet but useful architectural change: runtime execution state is now split into a separate <strong><code>jobs-state.json</code></strong> file, leaving <code>jobs.json</code> stable and appropriate for git-tracking.</p><p>Previously, committing your cron job definitions to version control meant dealing with runtime state noise mixed into the same file. The split keeps job definitions clean and declarative while letting the gateway write execution state wherever it likes.</p><h2>Mattermost Gets Live Draft Streaming</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/47838\">#47838</a> from <a href=\"https://github.com/ninjaa\">@ninjaa</a> brings OpenClaw's streaming reply model to Mattermost. Thinking blocks, tool activity, and partial reply text now flow into a single <strong>draft preview post</strong> that finalizes in place when the turn completes safely.</p><p>This matches the pattern that Discord and Slack users have had for a while — watching the agent \"type\" in real time rather than waiting for a completed message to appear. For Mattermost deployments used in team workflows (especially engineering bots that run tool-heavy tasks), this is a meaningful UX improvement.</p><h2>BlueBubbles: Per-Group System Prompts</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/69198\">#69198</a> from <a href=\"https://github.com/omarshahine\">@omarshahine</a> closes <a href=\"https://github.com/openclaw/openclaw/issues/60665\">#60665</a> by forwarding per-group <code>systemPrompt</code> config into inbound <code>GroupSystemPrompt</code> context on every turn.</p><p>In practice this means BlueBubbles users can configure <strong>different behavioral instructions per chat group</strong> — for example, different reply styles for a family group versus a work team — with wildcard <code>\"*\"</code> support as a fallback for groups not explicitly named. The config follows the existing <code>requireMention</code> pattern so it should feel natural for anyone already using per-channel settings.</p><h2>Agents/Compaction Notices</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/67830\">#67830</a> from <a href=\"https://github.com/feniix\">@feniix</a> adds opt-in start and completion notices during context compaction. Long sessions that trigger background compaction now surface a visible signal rather than silently compacting mid-conversation. Useful for debugging and for users who want to understand when their context window is being managed.</p><h2>Plugin/Task Detached Runtime Contract</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/68915\">#68915</a> from <a href=\"https://github.com/mbelinky\">@mbelinky</a> formalizes the extension point for plugin-owned task lifecycle. Plugins can now register as detached runtime owners — responsible for their own task cancellation and state transitions — without reaching into core task internals. This is the kind of boundary that makes third-party extensions more stable across releases.</p><h2>Performance: Log Sanitization</h2><p>PR <a href=\"https://github.com/openclaw/openclaw/pull/67205\">#67205</a> from <a href=\"https://github.com/bulutmuf\">@bulutmuf</a> replaces the iterative character-stripping loop in <code>sanitizeForLog()</code> with a single regex pass. The functional behavior is unchanged, but high-volume deployments that log aggressively will see measurably lower overhead on the logging path.</p><h2>Getting This Release</h2><p>``<code>bash<br />npm install -g openclaw@latest<br /><h1>or update an existing install</h1><br />openclaw update<br /></code>``</p><p>The full changelog is available at <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.20\">github.com/openclaw/openclaw/releases/tag/v2026.4.20</a>.</p>",
      "date_published": "2026-04-21T19:30:00.000Z",
      "date_modified": "2026-04-21T19:30:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-21-release-2026420.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-21-setup-wizard-ux/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-21-setup-wizard-ux/",
      "title": "OpenClaw Setup Wizard Gets Clearer Security Warnings and Searchable Selects",
      "summary": "PR #69553 polishes the OpenClaw onboarding experience with a structured security disclaimer, yellow warning banner, loading spinners on model catalog fetches, and searchable model selection.",
      "content_text": "First impressions matter — and for a tool like OpenClaw, the setup wizard is often where new users decide whether they trust what they are about to configure. [PR #69553](https://github.com/openclaw/openclaw/pull/69553), merged today by [@Patrick-Erichsen](https://github.com/Patrick-Erichsen), is a focused UX pass that makes onboarding cleaner, more informative, and less likely to leave users guessing at a blank screen.\n\n## A Security Disclaimer That Actually Gets Read\n\nThe original onboarding flow included a security disclaimer, but it was presented in a way that was easy to skim past. The update restructures it into a scannable layout with clear headings and — most visibly — a **yellow warning banner** that anchors the key message before users proceed.\n\nThis matters more than it might sound. OpenClaw is a self-hosted AI gateway with access to credentials, external services, and (for many users) personal communications. A first-run disclaimer that users actually absorb reduces the chance of misconfiguration surprises down the line. The new format takes cues from how security-sensitive CLI tools present warnings: structured, visible, unavoidable.\n\n## Loading Spinners During Model Catalog Fetches\n\nAnyone who has run through OpenClaw setup on a slower connection has probably stared at a blank prompt wondering whether something broke. The new PR adds `prompter.progress(...)` loading spinners around both `loadModelCatalog` calls in `model-picker.ts`.\n\nThe fix is minimal — a `try/finally` block ensures spinners always stop even if the fetch fails — but the user experience difference is significant. Instead of apparent hangs, users now see an active indicator. Combined with the API key placeholder text added to credential prompts, the flow feels substantially more polished.\n\n## Searchable Model Selection\n\nFor users with access to many model providers, the model picker previously rendered as a flat list. PR #69553 wires in `searchable: true` via a new `WizardSelectParams` field, activating Clack's autocomplete behavior on the model selection prompt.\n\nThe implementation mirrors the existing `multiselect` pattern in the codebase, keeping it consistent. Users can now type to filter the list — a small change that becomes genuinely useful when you have 30+ models across multiple providers configured.\n\n## What Did Not Change\n\nThe Greptile review gave this PR a confidence score of 5/5, noting that all changes are strictly additive and UX-scoped with no logic or data-flow modifications. The one minor flag was a CHANGELOG wording discrepancy — the entry mentioned channel-setup spinners that were not actually in the diff. The implementation is otherwise clean.\n\n---\n\nIf you have been running OpenClaw for a while, these changes will not affect you directly. But if you help others get started — recommending OpenClaw to teammates, writing guides, or maintaining deployment scripts — the improved first-run experience is worth knowing about. Less friction at setup means fewer questions later.\n\nThe full diff is on [GitHub](https://github.com/openclaw/openclaw/pull/69553).",
      "content_html": "<p>First impressions matter — and for a tool like OpenClaw, the setup wizard is often where new users decide whether they trust what they are about to configure. <a href=\"https://github.com/openclaw/openclaw/pull/69553\">PR #69553</a>, merged today by <a href=\"https://github.com/Patrick-Erichsen\">@Patrick-Erichsen</a>, is a focused UX pass that makes onboarding cleaner, more informative, and less likely to leave users guessing at a blank screen.</p><h2>A Security Disclaimer That Actually Gets Read</h2><p>The original onboarding flow included a security disclaimer, but it was presented in a way that was easy to skim past. The update restructures it into a scannable layout with clear headings and — most visibly — a <strong>yellow warning banner</strong> that anchors the key message before users proceed.</p><p>This matters more than it might sound. OpenClaw is a self-hosted AI gateway with access to credentials, external services, and (for many users) personal communications. A first-run disclaimer that users actually absorb reduces the chance of misconfiguration surprises down the line. The new format takes cues from how security-sensitive CLI tools present warnings: structured, visible, unavoidable.</p><h2>Loading Spinners During Model Catalog Fetches</h2><p>Anyone who has run through OpenClaw setup on a slower connection has probably stared at a blank prompt wondering whether something broke. The new PR adds <code>prompter.progress(...)</code> loading spinners around both <code>loadModelCatalog</code> calls in <code>model-picker.ts</code>.</p><p>The fix is minimal — a <code>try/finally</code> block ensures spinners always stop even if the fetch fails — but the user experience difference is significant. Instead of apparent hangs, users now see an active indicator. Combined with the API key placeholder text added to credential prompts, the flow feels substantially more polished.</p><h2>Searchable Model Selection</h2><p>For users with access to many model providers, the model picker previously rendered as a flat list. PR #69553 wires in <code>searchable: true</code> via a new <code>WizardSelectParams</code> field, activating Clack's autocomplete behavior on the model selection prompt.</p><p>The implementation mirrors the existing <code>multiselect</code> pattern in the codebase, keeping it consistent. Users can now type to filter the list — a small change that becomes genuinely useful when you have 30+ models across multiple providers configured.</p><h2>What Did Not Change</h2><p>The Greptile review gave this PR a confidence score of 5/5, noting that all changes are strictly additive and UX-scoped with no logic or data-flow modifications. The one minor flag was a CHANGELOG wording discrepancy — the entry mentioned channel-setup spinners that were not actually in the diff. The implementation is otherwise clean.</p><p>---</p><p>If you have been running OpenClaw for a while, these changes will not affect you directly. But if you help others get started — recommending OpenClaw to teammates, writing guides, or maintaining deployment scripts — the improved first-run experience is worth knowing about. Less friction at setup means fewer questions later.</p><p>The full diff is on <a href=\"https://github.com/openclaw/openclaw/pull/69553\">GitHub</a>.</p>",
      "date_published": "2026-04-21T08:05:00.000Z",
      "date_modified": "2026-04-21T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-21-setup-wizard-ux.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-21-security-triple-patch/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-21-security-triple-patch/",
      "title": "OpenClaw Closes Three Security Gaps: Cron, MCP Stdio, and Media Upload",
      "summary": "OpenClaw merged three security PRs on April 21st, patching a cron message-tool bypass, an MCP stdio env injection flaw, and an SSRF gap in media upload paths.",
      "content_text": "Tuesday morning brought a flurry of security-focused merges to the OpenClaw main branch. Three separate PRs — each targeting a distinct attack surface — landed within hours of each other, continuing a pattern of proactive hardening that has picked up pace across recent releases.\n\nHere is what changed and why it matters.\n\n## Fix 1: Cron Isolated-Agent Message Tool Bypass (PR #69587)\n\n**Merged by:** [@obviyus](https://github.com/obviyus)\n\nThis is the highest-severity fix of the three. The automated cron system runs isolated agent turns to deliver scheduled outputs. Before this patch, `runCronIsolatedAgentTurn` had a policy function — `resolveCronToolPolicy` — that *force-enabled* the `message` tool for most delivery modes and simultaneously dropped the `requireExplicitMessageTarget` safety rail.\n\nThe combined effect was dangerous for unattended workloads:\n\n- Cron jobs running with `delivery.mode: none` (no intended outbound messaging) could still invoke `message` to send content elsewhere\n- With `requireExplicitMessageTarget: false`, the agent could resolve a target from ambient session context — including the \"last\" channel from an unrelated main session\n- This created a **cross-session messaging risk**: a cron job triggered in one context could inadvertently (or via a prompt-injected payload) send messages into a completely different user's channel\n\nThe fix reintroduces a restrictive default: the `message` tool is no longer force-enabled in cron contexts unless delivery is explicitly requested and a verified target is resolved. The `forceMessageTool: true` flag has been removed in favor of conditional enablement that only activates when `deliveryRequested && deliveryMode !== \"webhook\"`.\n\n**Who is affected:** Anyone running cron jobs with `delivery.mode: none` or `announce` who processes untrusted input in those jobs. Update recommended.\n\n## Fix 2: MCP Stdio Empty-Env Injection Block (PR #69540)\n\n**Merged by:** [@drobison00](https://github.com/drobison00)\n\nOpenClaw's MCP stdio transport already filtered out known dangerous environment variable overrides like `NODE_OPTIONS`, `LD_PRELOAD`, and similar host-injection vectors. The problem was what happened *after* filtering: when every supplied env key was blocked, `toMcpEnvRecord()` returned an explicit empty object `{}` rather than `undefined`.\n\nIn Node.js process spawning, `env: {}` does not mean \"inherit parent env with no overrides\" — it means \"spawn with no environment at all.\" That strips `PATH`, `HOME`, SSL certificate paths, proxy settings, and anything else the MCP server process would normally inherit. The consequences range from subtle misbehavior (wrong config paths, missing certificates) to hard startup failures (no `PATH` means `node` or `python` cannot be found).\n\nThe patch changes `toMcpEnvRecord()` to return `undefined` when all keys are filtered out, allowing the child process to inherit `process.env` naturally. For configs that provide *some* safe env keys, those are now merged on top of parent env rather than replacing it entirely.\n\n**Who is affected:** Users who configure MCP servers with custom `env` blocks that happen to contain only blocked keys (e.g., someone who accidentally included `NODE_OPTIONS` as their only env var). Edge case, but worth knowing.\n\n## Fix 3: SSRF Guard on Media Upload URL Paths (PR #69595)\n\n**Merged by:** [@pgondhi987](https://github.com/pgondhi987)\n\nThe ChatGPT/Codex connector's `uploadC2CMedia` and `uploadGroupMedia` functions now run supplied URLs through `assertDirectUploadUrlAllowed` before making any outbound request. This adds two layers of protection:\n\n1. **HTTPS enforcement** — plain HTTP upload URLs are rejected outright\n2. **Hostname policy check** — the target hostname is validated via `resolvePinnedHostnameWithPolicy`, blocking requests to internal addresses, loopback ranges, and other SSRF-prone targets\n\nWithout this guard, a crafted media URL could have directed the upload function to fetch from internal infrastructure — a classic server-side request forgery vector in apps that accept user-supplied URLs.\n\nThis continues a broader SSRF hardening pass that has touched several media and webhook paths in recent releases.\n\n## The Bigger Picture\n\nThree security PRs in a single morning is not coincidental. The OpenClaw team and its security-review tooling (the \"Aisle\" bot visible in these PRs) appear to be running a systematic sweep of boundary conditions in agent runtime, MCP transport, and connector code. The attack surface for an AI gateway — one that manages multi-session state, spawns child processes, and talks to external services — is large, and these fixes show that attention.\n\nIf you are running a publicly-accessible OpenClaw instance or one that processes untrusted agent inputs, tracking the security changelog closely is worthwhile. The [GitHub releases page](https://github.com/openclaw/openclaw/releases) and the [merged PRs](https://github.com/openclaw/openclaw/pulls?q=is%3Amerged&sort=updated) are the fastest signals.",
      "content_html": "<p>Tuesday morning brought a flurry of security-focused merges to the OpenClaw main branch. Three separate PRs — each targeting a distinct attack surface — landed within hours of each other, continuing a pattern of proactive hardening that has picked up pace across recent releases.</p><p>Here is what changed and why it matters.</p><h2>Fix 1: Cron Isolated-Agent Message Tool Bypass (PR #69587)</h2><p><strong>Merged by:</strong> <a href=\"https://github.com/obviyus\">@obviyus</a></p><p>This is the highest-severity fix of the three. The automated cron system runs isolated agent turns to deliver scheduled outputs. Before this patch, <code>runCronIsolatedAgentTurn</code> had a policy function — <code>resolveCronToolPolicy</code> — that <em>force-enabled</em> the <code>message</code> tool for most delivery modes and simultaneously dropped the <code>requireExplicitMessageTarget</code> safety rail.</p><p>The combined effect was dangerous for unattended workloads:</p><ul><li>Cron jobs running with <code>delivery.mode: none</code> (no intended outbound messaging) could still invoke <code>message</code> to send content elsewhere</li><li>With <code>requireExplicitMessageTarget: false</code>, the agent could resolve a target from ambient session context — including the \"last\" channel from an unrelated main session</li><li>This created a <strong>cross-session messaging risk</strong>: a cron job triggered in one context could inadvertently (or via a prompt-injected payload) send messages into a completely different user's channel</li></ul><p>The fix reintroduces a restrictive default: the <code>message</code> tool is no longer force-enabled in cron contexts unless delivery is explicitly requested and a verified target is resolved. The <code>forceMessageTool: true</code> flag has been removed in favor of conditional enablement that only activates when <code>deliveryRequested && deliveryMode !== \"webhook\"</code>.</p><p><strong>Who is affected:</strong> Anyone running cron jobs with <code>delivery.mode: none</code> or <code>announce</code> who processes untrusted input in those jobs. Update recommended.</p><h2>Fix 2: MCP Stdio Empty-Env Injection Block (PR #69540)</h2><p><strong>Merged by:</strong> <a href=\"https://github.com/drobison00\">@drobison00</a></p><p>OpenClaw's MCP stdio transport already filtered out known dangerous environment variable overrides like <code>NODE_OPTIONS</code>, <code>LD_PRELOAD</code>, and similar host-injection vectors. The problem was what happened <em>after</em> filtering: when every supplied env key was blocked, <code>toMcpEnvRecord()</code> returned an explicit empty object <code>{}</code> rather than <code>undefined</code>.</p><p>In Node.js process spawning, <code>env: {}</code> does not mean \"inherit parent env with no overrides\" — it means \"spawn with no environment at all.\" That strips <code>PATH</code>, <code>HOME</code>, SSL certificate paths, proxy settings, and anything else the MCP server process would normally inherit. The consequences range from subtle misbehavior (wrong config paths, missing certificates) to hard startup failures (no <code>PATH</code> means <code>node</code> or <code>python</code> cannot be found).</p><p>The patch changes <code>toMcpEnvRecord()</code> to return <code>undefined</code> when all keys are filtered out, allowing the child process to inherit <code>process.env</code> naturally. For configs that provide <em>some</em> safe env keys, those are now merged on top of parent env rather than replacing it entirely.</p><p><strong>Who is affected:</strong> Users who configure MCP servers with custom <code>env</code> blocks that happen to contain only blocked keys (e.g., someone who accidentally included <code>NODE_OPTIONS</code> as their only env var). Edge case, but worth knowing.</p><h2>Fix 3: SSRF Guard on Media Upload URL Paths (PR #69595)</h2><p><strong>Merged by:</strong> <a href=\"https://github.com/pgondhi987\">@pgondhi987</a></p><p>The ChatGPT/Codex connector's <code>uploadC2CMedia</code> and <code>uploadGroupMedia</code> functions now run supplied URLs through <code>assertDirectUploadUrlAllowed</code> before making any outbound request. This adds two layers of protection:</p><ol><li><strong>HTTPS enforcement</strong> — plain HTTP upload URLs are rejected outright</li><li><strong>Hostname policy check</strong> — the target hostname is validated via <code>resolvePinnedHostnameWithPolicy</code>, blocking requests to internal addresses, loopback ranges, and other SSRF-prone targets</li></ol><p>Without this guard, a crafted media URL could have directed the upload function to fetch from internal infrastructure — a classic server-side request forgery vector in apps that accept user-supplied URLs.</p><p>This continues a broader SSRF hardening pass that has touched several media and webhook paths in recent releases.</p><h2>The Bigger Picture</h2><p>Three security PRs in a single morning is not coincidental. The OpenClaw team and its security-review tooling (the \"Aisle\" bot visible in these PRs) appear to be running a systematic sweep of boundary conditions in agent runtime, MCP transport, and connector code. The attack surface for an AI gateway — one that manages multi-session state, spawns child processes, and talks to external services — is large, and these fixes show that attention.</p><p>If you are running a publicly-accessible OpenClaw instance or one that processes untrusted agent inputs, tracking the security changelog closely is worthwhile. The <a href=\"https://github.com/openclaw/openclaw/releases\">GitHub releases page</a> and the <a href=\"https://github.com/openclaw/openclaw/pulls?q=is%3Amerged&sort=updated\">merged PRs</a> are the fastest signals.</p>",
      "date_published": "2026-04-21T08:00:00.000Z",
      "date_modified": "2026-04-21T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-21-security-triple-patch.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-20-monday-community-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-20-monday-community-roundup/",
      "title": "OpenClaw Community Roundup: Monday, April 20, 2026",
      "summary": "From a CNBC feature on AI agent inefficiency to OpenClaw-inspired open-source tools, Monday brought a fresh wave of ecosystem signal worth tracking.",
      "content_text": "Beyond the viral Hacker News security debate that dominated Monday's conversation, a handful of other OpenClaw-adjacent stories surfaced across the community today. Here is a quick scan of what crossed the feeds.\n\n## CNBC Spotlights AI Agent Inefficiency — OpenClaw in the Frame\n\nCNBC published a piece titled *\"Silicon Valley's AI agent hiccups: Wasted tokens and 'chaotic' systems\"* ([cnbc.com](https://www.cnbc.com/2026/04/19/siiicon-valley-ai-agent-openclaw-problems.html)) picking up real-world friction reports from teams running AI agents at scale. OpenClaw is referenced in the piece as a widely deployed example platform — putting it in the center of a mainstream technology media conversation about whether current agent architectures are production-ready.\n\nIt landed on Hacker News on Monday with [story #47834159](https://news.ycombinator.com/item?id=47834159). The headline frames the problem around token waste and coordination overhead in multi-agent systems — familiar territory for anyone who has watched a long OpenClaw session spiral into a context-compaction loop.\n\n## Show HN: Comrade — a Security-Focused AI Workspace Inspired by OpenClaw\n\nA new open-source project called **Comrade** launched on Hacker News today ([story #47839520](https://news.ycombinator.com/item?id=47839520), 5 points). The author describes it as:\n\n> \"An open-source AI workspace for teams focused on security. It provides a premium interface for AI-powered workflows, built with transparency, extensibility, and local-first principles.\"\n\nThe submission explicitly credits OpenClaw's success as the inspiration. The project lives at [github.com/LaurentiuGabriel/comrade](https://github.com/LaurentiuGabriel/comrade). It is early-stage, but the fact that OpenClaw is increasingly the reference point when security-focused builders design agent tools is itself worth noting.\n\n## Show HN: The Trawl CLI — Mining OpenClaw Session Logs for Insight\n\nA lighter tool that got a quiet Show HN post today: **The Trawl CLI** ([story #47835009](https://news.ycombinator.com/item?id=47835009)), built to comb through AI harness session logs for memorable or instructive moments. OpenClaw is listed as one of the target log sources alongside Claude Code and Codex — with the author noting plans to add full OpenClaw session trawling in a future version.\n\nThe pitch: rather than losing the good (and hilariously bad) moments buried in hundreds of agent turns, surface them automatically. A few examples from the Show HN post capture the flavor — parallel agents returning simultaneously to report \"Perfect!\" like \"eager minions reporting to a Bond villain,\" and Claude parallelizing itself into a context window it could not compress.\n\nGitHub: [github.com/The-Daily-Claude/the-daily-claude](https://github.com/The-Daily-Claude/the-daily-claude)\n\n## \"OpenClaw Is Toast\" — An Overstated HN Headline\n\nOne low-signal entry worth flagging: a Twitter link posted to HN under the title *\"OpenClaw is toast. OpenHuman just landed\"* ([story #47839564](https://news.ycombinator.com/item?id=47839564)) collected 3 points and a handful of dismissive replies before fading. The linked tweet does not appear to contain substantive technical claims. Treat this one as noise for now — worth watching only if OpenHuman materializes into something with a real GitHub presence.\n\n## What Monday Adds Up To\n\nTaken together, today's ecosystem activity points to a few trends:\n\n- **OpenClaw is increasingly the reference point** for new AI agent tooling — both as something to build on top of and as a contrast case for alternative architectures.\n- **Mainstream media is paying attention** to AI agent friction, and OpenClaw is named in those conversations.\n- **The community is actively building around the edges** — log miners, security wrappers, alternative gateways. This is what healthy open-source ecosystem growth looks like.\n\nMonday was a heavy news day for OpenClaw. The viral Hacker News security debate (covered [in our separate post](/posts/openclaw-2026-4-20-hacker-news-security-debate)) was the lead story, but the surrounding signal confirms this is a platform the broader tech community is watching closely.",
      "content_html": "<p>Beyond the viral Hacker News security debate that dominated Monday's conversation, a handful of other OpenClaw-adjacent stories surfaced across the community today. Here is a quick scan of what crossed the feeds.</p><h2>CNBC Spotlights AI Agent Inefficiency — OpenClaw in the Frame</h2><p>CNBC published a piece titled <em>\"Silicon Valley's AI agent hiccups: Wasted tokens and 'chaotic' systems\"</em> (<a href=\"https://www.cnbc.com/2026/04/19/siiicon-valley-ai-agent-openclaw-problems.html\">cnbc.com</a>) picking up real-world friction reports from teams running AI agents at scale. OpenClaw is referenced in the piece as a widely deployed example platform — putting it in the center of a mainstream technology media conversation about whether current agent architectures are production-ready.</p><p>It landed on Hacker News on Monday with <a href=\"https://news.ycombinator.com/item?id=47834159\">story #47834159</a>. The headline frames the problem around token waste and coordination overhead in multi-agent systems — familiar territory for anyone who has watched a long OpenClaw session spiral into a context-compaction loop.</p><h2>Show HN: Comrade — a Security-Focused AI Workspace Inspired by OpenClaw</h2><p>A new open-source project called <strong>Comrade</strong> launched on Hacker News today (<a href=\"https://news.ycombinator.com/item?id=47839520\">story #47839520</a>, 5 points). The author describes it as:</p><p>> \"An open-source AI workspace for teams focused on security. It provides a premium interface for AI-powered workflows, built with transparency, extensibility, and local-first principles.\"</p><p>The submission explicitly credits OpenClaw's success as the inspiration. The project lives at <a href=\"https://github.com/LaurentiuGabriel/comrade\">github.com/LaurentiuGabriel/comrade</a>. It is early-stage, but the fact that OpenClaw is increasingly the reference point when security-focused builders design agent tools is itself worth noting.</p><h2>Show HN: The Trawl CLI — Mining OpenClaw Session Logs for Insight</h2><p>A lighter tool that got a quiet Show HN post today: <strong>The Trawl CLI</strong> (<a href=\"https://news.ycombinator.com/item?id=47835009\">story #47835009</a>), built to comb through AI harness session logs for memorable or instructive moments. OpenClaw is listed as one of the target log sources alongside Claude Code and Codex — with the author noting plans to add full OpenClaw session trawling in a future version.</p><p>The pitch: rather than losing the good (and hilariously bad) moments buried in hundreds of agent turns, surface them automatically. A few examples from the Show HN post capture the flavor — parallel agents returning simultaneously to report \"Perfect!\" like \"eager minions reporting to a Bond villain,\" and Claude parallelizing itself into a context window it could not compress.</p><p>GitHub: <a href=\"https://github.com/The-Daily-Claude/the-daily-claude\">github.com/The-Daily-Claude/the-daily-claude</a></p><h2>\"OpenClaw Is Toast\" — An Overstated HN Headline</h2><p>One low-signal entry worth flagging: a Twitter link posted to HN under the title <em>\"OpenClaw is toast. OpenHuman just landed\"</em> (<a href=\"https://news.ycombinator.com/item?id=47839564\">story #47839564</a>) collected 3 points and a handful of dismissive replies before fading. The linked tweet does not appear to contain substantive technical claims. Treat this one as noise for now — worth watching only if OpenHuman materializes into something with a real GitHub presence.</p><h2>What Monday Adds Up To</h2><p>Taken together, today's ecosystem activity points to a few trends:</p><ul><li><strong>OpenClaw is increasingly the reference point</strong> for new AI agent tooling — both as something to build on top of and as a contrast case for alternative architectures.</li><li><strong>Mainstream media is paying attention</strong> to AI agent friction, and OpenClaw is named in those conversations.</li><li><strong>The community is actively building around the edges</strong> — log miners, security wrappers, alternative gateways. This is what healthy open-source ecosystem growth looks like.</li></ul><p>Monday was a heavy news day for OpenClaw. The viral Hacker News security debate (covered <a href=\"/posts/openclaw-2026-4-20-hacker-news-security-debate\">in our separate post</a>) was the lead story, but the surrounding signal confirms this is a platform the broader tech community is watching closely.</p>",
      "date_published": "2026-04-20T23:10:00.000Z",
      "date_modified": "2026-04-20T23:10:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-20-monday-community-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-20-hacker-news-security-debate/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-20-hacker-news-security-debate/",
      "title": "OpenClaw Security Model Draws 262-Point Hacker News Debate",
      "summary": "A flyingpenguin.com post comparing OpenClaw's gateway sandbox to MS-DOS-era security hit Hacker News with 262 points and 294 comments on Monday.",
      "content_text": "A single blog post published Monday on [flyingpenguin.com](https://www.flyingpenguin.com/build-an-openclaw-free-secure-always-on-local-ai-agent/) ignited one of the most engaged OpenClaw community conversations in recent memory. Posted to Hacker News under the title *\"OpenClaw isn't fooling me. I remember MS-DOS,\"* it climbed to 262 points with 294 comments by Monday evening UTC — easily among the most-discussed OpenClaw threads this month.\n\n## The Core Argument: MS-DOS All Over Again\n\nThe author — who is building an alternative agent gateway called [Wirken](https://wirken.ai) — draws a sharp parallel between the current state of AI agent security and the pre-Unix era of MS-DOS. In that world, any program could \"peek and poke the kernel, hook interrupts, write anywhere on disk.\" The fix, history showed, was not a better shell or a more careful wrapper. It was a fundamentally different architecture: process separation, virtual memory, ACLs, and privilege rings.\n\nThe argument applied to OpenClaw is that many AI agent gateways are making the same foundational error — sandboxing the **whole agent** in a container, rather than enforcing permissions at the **tool layer**:\n\n> \"Agent gateways feel like we are racing backwards into the MS-DOS era. When you look at gateways out there they can hand the model an exec tool and trust it. One process, one token, with the LLM holding the line.\"\n\n## The NVIDIA NemoClaw Tutorial as a Case Study\n\nThe post uses NVIDIA's published [NemoClaw + OpenClaw tutorial](https://developer.nvidia.com/blog/build-a-secure-always-on-local-ai-agent-with-nvidia-nemoclaw-and-openclaw/) — which walks through deploying OpenClaw with NemoClaw on a DGX Spark — as evidence that the whole-agent-sandbox approach forces awkward compromises at every step:\n\n- **Bind Ollama to 0.0.0.0** — because the sandboxed agent cannot reach a loopback interface across its own network namespace.\n- **Pair via the chat channel** — because there is no separate identity plane for secure key exchange.\n- **Approve connections at the network boundary** — because the tool layer itself has no concept of permissions.\n\nThe author's framing: \"Each of those is a compromise; response to a constraint. The constraint is worth revisiting.\"\n\n## What Wirken Does Differently\n\nWirken runs the gateway as a host process. Each channel gets its own Ed25519 identity. The vault runs out-of-process. Inference stays on loopback. Shell exec runs inside a hardened Docker container with capabilities dropped (`cap_drop ALL`), a read-only rootfs, 64MB tmpfs at `/tmp`, and no network access.\n\nThe post includes hash-chained audit logs from a live session showing the approach in action: a `curl` command denied at the tool layer before it ever touches a network boundary, and a `sh` compound command confirmed to be running against a read-only filesystem with an isolated tmpfs:\n\n```\n[ 4] assistant_tool_calls\n     call: exec({\"command\":\"curl https://httpbin.org/get\"})\n[ 5] permission_denied\n     action_key='shell:curl' tier=tier3\n[ 6] tool_result\n     tool=exec success=False\n     output: Permission denied: 'exec' requires tier3 approval.\n```\n\nThe audit trail is hash-chained, not just logged — each turn's attestation covers the leaf hashes of all prior events in the session.\n\n## What the HN Thread Is Actually Debating\n\nWith 294 comments, this is not a one-note pile-on. The thread branches in several directions:\n\n- Whether container-level sandboxing is good enough for most self-hosted OpenClaw deployments\n- The practicality of per-tool enforcement vs. per-agent sandboxing, and what each trades off in usability\n- Historical comparisons to how Unix/Linux enforced privilege separation, and whether that analogy even holds for AI agents\n- Skepticism toward Wirken's own claims — the author is selling an alternative product, and HN noticed\n\nThe MS-DOS framing clearly resonated with an audience that remembers what happened the last time the industry sprinted ahead of its security architecture. Whether or not you agree with the conclusion, the engagement numbers suggest this is the kind of question the OpenClaw community is ready to take seriously.\n\n## Why This Matters for OpenClaw Users\n\nOpenClaw's exec tool, gateway bearer token model, and channel-scoped permissions have been discussed in security-focused corners of the community before. This post puts the critique in a historical frame that is harder to dismiss than a standard bug report or CVE.\n\nIf you are running OpenClaw in a multi-user environment, on a shared server, or in any context where the exec tool is enabled and credential isolation matters — this thread is worth reading end-to-end.\n\nThe broader takeaway may simply be that the community is ready for a more structured conversation about what \"secure by default\" actually means for a self-hosted AI agent platform. OpenClaw's security posture has improved significantly across recent releases, but the fundamental architectural question the flyingpenguin post raises — sandbox boundary vs. tool-layer enforcement — is not one the release notes have addressed head-on.\n\n**[Read the full flyingpenguin article →](https://www.flyingpenguin.com/build-an-openclaw-free-secure-always-on-local-ai-agent/)**\n\n**[Join the Hacker News discussion →](https://news.ycombinator.com/item?id=47831437)**",
      "content_html": "<p>A single blog post published Monday on <a href=\"https://www.flyingpenguin.com/build-an-openclaw-free-secure-always-on-local-ai-agent/\">flyingpenguin.com</a> ignited one of the most engaged OpenClaw community conversations in recent memory. Posted to Hacker News under the title <em>\"OpenClaw isn't fooling me. I remember MS-DOS,\"</em> it climbed to 262 points with 294 comments by Monday evening UTC — easily among the most-discussed OpenClaw threads this month.</p><h2>The Core Argument: MS-DOS All Over Again</h2><p>The author — who is building an alternative agent gateway called <a href=\"https://wirken.ai\">Wirken</a> — draws a sharp parallel between the current state of AI agent security and the pre-Unix era of MS-DOS. In that world, any program could \"peek and poke the kernel, hook interrupts, write anywhere on disk.\" The fix, history showed, was not a better shell or a more careful wrapper. It was a fundamentally different architecture: process separation, virtual memory, ACLs, and privilege rings.</p><p>The argument applied to OpenClaw is that many AI agent gateways are making the same foundational error — sandboxing the <strong>whole agent</strong> in a container, rather than enforcing permissions at the <strong>tool layer</strong>:</p><p>> \"Agent gateways feel like we are racing backwards into the MS-DOS era. When you look at gateways out there they can hand the model an exec tool and trust it. One process, one token, with the LLM holding the line.\"</p><h2>The NVIDIA NemoClaw Tutorial as a Case Study</h2><p>The post uses NVIDIA's published <a href=\"https://developer.nvidia.com/blog/build-a-secure-always-on-local-ai-agent-with-nvidia-nemoclaw-and-openclaw/\">NemoClaw + OpenClaw tutorial</a> — which walks through deploying OpenClaw with NemoClaw on a DGX Spark — as evidence that the whole-agent-sandbox approach forces awkward compromises at every step:</p><ul><li><strong>Bind Ollama to 0.0.0.0</strong> — because the sandboxed agent cannot reach a loopback interface across its own network namespace.</li><li><strong>Pair via the chat channel</strong> — because there is no separate identity plane for secure key exchange.</li><li><strong>Approve connections at the network boundary</strong> — because the tool layer itself has no concept of permissions.</li></ul><p>The author's framing: \"Each of those is a compromise; response to a constraint. The constraint is worth revisiting.\"</p><h2>What Wirken Does Differently</h2><p>Wirken runs the gateway as a host process. Each channel gets its own Ed25519 identity. The vault runs out-of-process. Inference stays on loopback. Shell exec runs inside a hardened Docker container with capabilities dropped (<code>cap_drop ALL</code>), a read-only rootfs, 64MB tmpfs at <code>/tmp</code>, and no network access.</p><p>The post includes hash-chained audit logs from a live session showing the approach in action: a <code>curl</code> command denied at the tool layer before it ever touches a network boundary, and a <code>sh</code> compound command confirmed to be running against a read-only filesystem with an isolated tmpfs:</p><p>``<code><br />[ 4] assistant_tool_calls<br />     call: exec({\"command\":\"curl https://httpbin.org/get\"})<br />[ 5] permission_denied<br />     action_key='shell:curl' tier=tier3<br />[ 6] tool_result<br />     tool=exec success=False<br />     output: Permission denied: 'exec' requires tier3 approval.<br /></code>``</p><p>The audit trail is hash-chained, not just logged — each turn's attestation covers the leaf hashes of all prior events in the session.</p><h2>What the HN Thread Is Actually Debating</h2><p>With 294 comments, this is not a one-note pile-on. The thread branches in several directions:</p><ul><li>Whether container-level sandboxing is good enough for most self-hosted OpenClaw deployments</li><li>The practicality of per-tool enforcement vs. per-agent sandboxing, and what each trades off in usability</li><li>Historical comparisons to how Unix/Linux enforced privilege separation, and whether that analogy even holds for AI agents</li><li>Skepticism toward Wirken's own claims — the author is selling an alternative product, and HN noticed</li></ul><p>The MS-DOS framing clearly resonated with an audience that remembers what happened the last time the industry sprinted ahead of its security architecture. Whether or not you agree with the conclusion, the engagement numbers suggest this is the kind of question the OpenClaw community is ready to take seriously.</p><h2>Why This Matters for OpenClaw Users</h2><p>OpenClaw's exec tool, gateway bearer token model, and channel-scoped permissions have been discussed in security-focused corners of the community before. This post puts the critique in a historical frame that is harder to dismiss than a standard bug report or CVE.</p><p>If you are running OpenClaw in a multi-user environment, on a shared server, or in any context where the exec tool is enabled and credential isolation matters — this thread is worth reading end-to-end.</p><p>The broader takeaway may simply be that the community is ready for a more structured conversation about what \"secure by default\" actually means for a self-hosted AI agent platform. OpenClaw's security posture has improved significantly across recent releases, but the fundamental architectural question the flyingpenguin post raises — sandbox boundary vs. tool-layer enforcement — is not one the release notes have addressed head-on.</p><p><strong><a href=\"https://www.flyingpenguin.com/build-an-openclaw-free-secure-always-on-local-ai-agent/\">Read the full flyingpenguin article →</a></strong></p><p><strong><a href=\"https://news.ycombinator.com/item?id=47831437\">Join the Hacker News discussion →</a></strong></p>",
      "date_published": "2026-04-20T23:00:00.000Z",
      "date_modified": "2026-04-20T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-20-hacker-news-security-debate.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-20-gateway-pairing-polish/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-20-gateway-pairing-polish/",
      "title": "OpenClaw Gateway Status and Pairing UX Get a Major Cleanup",
      "summary": "Six PRs merged to OpenClaw main on April 20 sharpen gateway capability reporting, device pairing guidance, and channel send reliability.",
      "content_text": "This morning's wave of pull requests landing in OpenClaw's main branch tells a consistent story: the team is methodically cleaning up the rough edges in gateway status reporting, device pairing UX, and channel send reliability. No single change is earth-shattering — but together, they represent a meaningful step forward in day-to-day usability.\n\nHere's what landed on April 20.\n\n## Gateway Probe: Capability vs. Reachability\n\n[PR #69215](https://github.com/openclaw/openclaw/pull/69215) — **Split gateway probe capability from reachability** — is the headline improvement for `openclaw gateway status` users.\n\nPreviously, the gateway probe bundled two distinct signals into one: whether the gateway is reachable, and what it's capable of doing (read-only vs. read-write). Conflating these made it harder to diagnose subtle auth problems — you'd get a \"gateway OK\" result even when you only had limited access.\n\nThe new implementation introduces a `GatewayProbeCapability` type and a dedicated `resolveGatewayProbeCapability` function. Status output now clearly separates connectivity from capability, so you can tell at a glance whether your gateway is reachable *and* whether you have the access level you expect.\n\n## Better Pairing Error Messages During Reconnects\n\n[PR #69221](https://github.com/openclaw/openclaw/pull/69221) — **Explain pairing scope upgrades during reconnects** — makes the reconnect experience less confusing when a scope upgrade is required.\n\nWhen a device reconnects to a gateway that now requires a higher permission level, OpenClaw previously surfaced a somewhat cryptic error. This PR adds clear explanatory text describing why a scope upgrade is happening and what the user needs to do. Combined with [PR #69227](https://github.com/openclaw/openclaw/pull/69227) — **Fix pairing-required recovery details** — the error recovery path during pairing failures is now much cleaner and more actionable.\n\n## `openclaw doctor` Now Detects Pairing Auth Drift\n\n[PR #69210](https://github.com/openclaw/openclaw/pull/69210) — **Surface device pairing auth drift in doctor** — adds a new health check to `openclaw doctor` that notices when a paired device's current permissions no longer match what the gateway expects.\n\nThis is particularly useful for long-running setups where permissions have evolved over time — paired devices that worked fine months ago may have drifted out of sync without any obvious signal. Doctor now surfaces this proactively, pointing you toward the fix before it becomes a real problem.\n\n## Slack Send Path: Tolerates Unresolved SecretRefs\n\n[PR #68954](https://github.com/openclaw/openclaw/pull/68954) by [@openperf](https://github.com/openperf) — **Tolerate unresolved channel SecretRef on outbound send path** — fixes a frustrating edge case where Slack sends would fail if a channel's credentials were configured via `SecretRef` and hadn't resolved yet at send time.\n\nThe fix introduces a tolerant mode for the outbound path, so transient SecretRef resolution delays don't block message delivery. The strict mode required for inbound auth is preserved.\n\n## Telegram: Numeric IDs Only in Setup\n\n[PR #69191](https://github.com/openclaw/openclaw/pull/69191) — **Require numeric allowFrom ids in setup** — simplifies Telegram onboarding by removing the `@username` → numeric ID resolution path entirely.\n\nThe underlying Bot API lookup was never reliably supported for DM users. Rather than paper over a broken feature, the PR removes it entirely and updates docs to be explicit: `allowFrom` takes numeric sender IDs only. It's a cleaner contract, and the documentation now reflects reality instead of a best-effort approximation.\n\n## Still Heading to a Release\n\nAll of these are merged to `main` but not yet in a numbered release. The most recent stable is [v2026.4.15](https://github.com/openclaw/openclaw/releases/tag/v2026.4.15). If you're on beta or building from source, these improvements are available now. Watch the [releases page](https://github.com/openclaw/openclaw/releases) for the next tag.",
      "content_html": "<p>This morning's wave of pull requests landing in OpenClaw's main branch tells a consistent story: the team is methodically cleaning up the rough edges in gateway status reporting, device pairing UX, and channel send reliability. No single change is earth-shattering — but together, they represent a meaningful step forward in day-to-day usability.</p><p>Here's what landed on April 20.</p><h2>Gateway Probe: Capability vs. Reachability</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/69215\">PR #69215</a> — <strong>Split gateway probe capability from reachability</strong> — is the headline improvement for <code>openclaw gateway status</code> users.</p><p>Previously, the gateway probe bundled two distinct signals into one: whether the gateway is reachable, and what it's capable of doing (read-only vs. read-write). Conflating these made it harder to diagnose subtle auth problems — you'd get a \"gateway OK\" result even when you only had limited access.</p><p>The new implementation introduces a <code>GatewayProbeCapability</code> type and a dedicated <code>resolveGatewayProbeCapability</code> function. Status output now clearly separates connectivity from capability, so you can tell at a glance whether your gateway is reachable <em>and</em> whether you have the access level you expect.</p><h2>Better Pairing Error Messages During Reconnects</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/69221\">PR #69221</a> — <strong>Explain pairing scope upgrades during reconnects</strong> — makes the reconnect experience less confusing when a scope upgrade is required.</p><p>When a device reconnects to a gateway that now requires a higher permission level, OpenClaw previously surfaced a somewhat cryptic error. This PR adds clear explanatory text describing why a scope upgrade is happening and what the user needs to do. Combined with <a href=\"https://github.com/openclaw/openclaw/pull/69227\">PR #69227</a> — <strong>Fix pairing-required recovery details</strong> — the error recovery path during pairing failures is now much cleaner and more actionable.</p><h2><code>openclaw doctor</code> Now Detects Pairing Auth Drift</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/69210\">PR #69210</a> — <strong>Surface device pairing auth drift in doctor</strong> — adds a new health check to <code>openclaw doctor</code> that notices when a paired device's current permissions no longer match what the gateway expects.</p><p>This is particularly useful for long-running setups where permissions have evolved over time — paired devices that worked fine months ago may have drifted out of sync without any obvious signal. Doctor now surfaces this proactively, pointing you toward the fix before it becomes a real problem.</p><h2>Slack Send Path: Tolerates Unresolved SecretRefs</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/68954\">PR #68954</a> by <a href=\"https://github.com/openperf\">@openperf</a> — <strong>Tolerate unresolved channel SecretRef on outbound send path</strong> — fixes a frustrating edge case where Slack sends would fail if a channel's credentials were configured via <code>SecretRef</code> and hadn't resolved yet at send time.</p><p>The fix introduces a tolerant mode for the outbound path, so transient SecretRef resolution delays don't block message delivery. The strict mode required for inbound auth is preserved.</p><h2>Telegram: Numeric IDs Only in Setup</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/69191\">PR #69191</a> — <strong>Require numeric allowFrom ids in setup</strong> — simplifies Telegram onboarding by removing the <code>@username</code> → numeric ID resolution path entirely.</p><p>The underlying Bot API lookup was never reliably supported for DM users. Rather than paper over a broken feature, the PR removes it entirely and updates docs to be explicit: <code>allowFrom</code> takes numeric sender IDs only. It's a cleaner contract, and the documentation now reflects reality instead of a best-effort approximation.</p><h2>Still Heading to a Release</h2><p>All of these are merged to <code>main</code> but not yet in a numbered release. The most recent stable is <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.15\">v2026.4.15</a>. If you're on beta or building from source, these improvements are available now. Watch the <a href=\"https://github.com/openclaw/openclaw/releases\">releases page</a> for the next tag.</p>",
      "date_published": "2026-04-20T08:05:00.000Z",
      "date_modified": "2026-04-20T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-20-gateway-pairing-polish.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-20-copilot-claude-opus-default/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-20-copilot-claude-opus-default/",
      "title": "OpenClaw Defaults GitHub Copilot Onboarding to Claude Opus 4.6",
      "summary": "OpenClaw now bootstraps GitHub Copilot users to Claude Opus 4.6 by default, replacing GPT-4o as the out-of-the-box model for new setups.",
      "content_text": "A small change with real-world impact landed in OpenClaw's main branch this morning: GitHub Copilot users who go through onboarding will now default to **Claude Opus 4.6** rather than GPT-4o.\n\n[PR #69207](https://github.com/openclaw/openclaw/pull/69207), merged by [@obviyus](https://github.com/obviyus) on April 20, swaps the default model for the `github-copilot` provider in OpenClaw's plugin entry point, the bundled defaults list, and the associated contract tests — a clean, well-scoped change that affects every fresh Copilot integration going forward.\n\n## Why This Matters\n\nGitHub Copilot's provider in OpenClaw routes requests through Copilot's OpenAI-compatible endpoint. For a long time, GPT-4o was the sensible default: fast, capable, and broadly supported. But as Anthropic's Claude family has matured — and as OpenClaw has moved aggressively to align defaults with the strongest available models — the calculation has shifted.\n\nClaude Opus 4.6 brings deeper reasoning and longer context handling to the table, and has already become the preferred model for Anthropic-native OpenClaw setups. Extending that default to Copilot users means new users get the better model without having to know to ask for it.\n\nThis follows a broader pattern in recent OpenClaw releases: the v2026.4.15 stable release updated default Anthropic selections, opus aliases, and Claude CLI defaults to **Claude Opus 4.7**, and the bundled image understanding provider moved to Opus 4.7 as well. The Copilot default change is the next step in that alignment — the Copilot path catches up to where the rest of the ecosystem already sits.\n\n## What Changes for You\n\nIf you have an existing GitHub Copilot setup in OpenClaw, **nothing changes automatically**. The default only kicks in for new onboarding flows — your existing configuration is preserved.\n\nIf you're setting up OpenClaw with GitHub Copilot for the first time after this lands in a release, you'll get Claude Opus 4.6 out of the box. If you prefer GPT-4o or another model, you can override it in your config:\n\n```json\n{\n  \"agents\": {\n    \"defaults\": {\n      \"model\": \"github-copilot/gpt-4o\"\n    }\n  }\n}\n```\n\n## Still Flowing to Main\n\nThis change is merged to the `main` branch as of this morning but has not yet shipped in a numbered release. The last stable release is [v2026.4.15](https://github.com/openclaw/openclaw/releases/tag/v2026.4.15), and beta builds have been rolling through daily since then. Expect this to land in the next release tag.\n\nFor those running on beta or building from source, it's live now.\n\n## The Bigger Picture\n\nOpenClaw's approach to defaults has shifted noticeably in 2026: rather than picking a cautious middle-of-the-road model, the team has been consistently moving defaults toward the most capable option available. That's good for users who just want the best result, and easy to override for those who have specific needs.\n\nThe GitHub Copilot default switch is a small but deliberate continuation of that philosophy.",
      "content_html": "<p>A small change with real-world impact landed in OpenClaw's main branch this morning: GitHub Copilot users who go through onboarding will now default to <strong>Claude Opus 4.6</strong> rather than GPT-4o.</p><p><a href=\"https://github.com/openclaw/openclaw/pull/69207\">PR #69207</a>, merged by <a href=\"https://github.com/obviyus\">@obviyus</a> on April 20, swaps the default model for the <code>github-copilot</code> provider in OpenClaw's plugin entry point, the bundled defaults list, and the associated contract tests — a clean, well-scoped change that affects every fresh Copilot integration going forward.</p><h2>Why This Matters</h2><p>GitHub Copilot's provider in OpenClaw routes requests through Copilot's OpenAI-compatible endpoint. For a long time, GPT-4o was the sensible default: fast, capable, and broadly supported. But as Anthropic's Claude family has matured — and as OpenClaw has moved aggressively to align defaults with the strongest available models — the calculation has shifted.</p><p>Claude Opus 4.6 brings deeper reasoning and longer context handling to the table, and has already become the preferred model for Anthropic-native OpenClaw setups. Extending that default to Copilot users means new users get the better model without having to know to ask for it.</p><p>This follows a broader pattern in recent OpenClaw releases: the v2026.4.15 stable release updated default Anthropic selections, opus aliases, and Claude CLI defaults to <strong>Claude Opus 4.7</strong>, and the bundled image understanding provider moved to Opus 4.7 as well. The Copilot default change is the next step in that alignment — the Copilot path catches up to where the rest of the ecosystem already sits.</p><h2>What Changes for You</h2><p>If you have an existing GitHub Copilot setup in OpenClaw, <strong>nothing changes automatically</strong>. The default only kicks in for new onboarding flows — your existing configuration is preserved.</p><p>If you're setting up OpenClaw with GitHub Copilot for the first time after this lands in a release, you'll get Claude Opus 4.6 out of the box. If you prefer GPT-4o or another model, you can override it in your config:</p><p>``<code>json<br />{<br />  \"agents\": {<br />    \"defaults\": {<br />      \"model\": \"github-copilot/gpt-4o\"<br />    }<br />  }<br />}<br /></code>`<code></p><h2>Still Flowing to Main</h2><p>This change is merged to the </code>main` branch as of this morning but has not yet shipped in a numbered release. The last stable release is <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.15\">v2026.4.15</a>, and beta builds have been rolling through daily since then. Expect this to land in the next release tag.</p><p>For those running on beta or building from source, it's live now.</p><h2>The Bigger Picture</h2><p>OpenClaw's approach to defaults has shifted noticeably in 2026: rather than picking a cautious middle-of-the-road model, the team has been consistently moving defaults toward the most capable option available. That's good for users who just want the best result, and easy to override for those who have specific needs.</p><p>The GitHub Copilot default switch is a small but deliberate continuation of that philosophy.</p>",
      "date_published": "2026-04-20T08:00:00.000Z",
      "date_modified": "2026-04-20T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-20-copilot-claude-opus-default.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-openclawdex-macos-coding-agent-ui/",
      "url": "https://openclawchronicles.com/posts/openclaw-openclawdex-macos-coding-agent-ui/",
      "title": "OpenClawdex: A Native macOS UI for Claude Code and Codex on OpenClaw",
      "summary": "OpenClawdex is a free, MIT-licensed macOS app that lets you run Claude Code and OpenAI Codex agents side by side, using your existing CLI auth — no API keys needed.",
      "content_text": "If you run OpenClaw on a Mac and regularly switch between Claude Code and OpenAI Codex, a new open-source project called **OpenClawdex** is worth five minutes of your time. It is a native macOS desktop application — MIT licensed, free, built by [alekseyrozh](https://github.com/alekseyrozh) — that wraps both coding agents in a single interface with the kind of platform-native polish that most AI tooling skips entirely.\n\nThe project [surfaced on Hacker News](https://news.ycombinator.com/item?id=47823501) today and the GitHub repo is already detailed enough to get started immediately.\n\n---\n\n## What It Does\n\nOpenClawdex spawns Claude Code and OpenAI Codex as subprocesses and bridges their output to a React UI running inside an Electron shell. The experience is intentionally minimal: no built-in diff sidebar, no custom file viewer. Instead, clicking any file path or diff link jumps straight into VS Code, Cursor, or whichever editor you have configured.\n\nThe core feature list:\n\n- **No separate login** — Uses your existing `claude` and `codex` CLI authentication. Your Claude Max subscription and ChatGPT/Codex plan both work out of the box without pasting API keys or completing an OAuth flow.\n- **Two agents, one UI** — Run Claude Code and OpenAI Codex side by side. Switch model and reasoning effort per thread independently.\n- **Parallel threads** — Spawn as many concurrent agent sessions as you want. Each runs in its own subprocess.\n- **Project organization** — Group threads by project, support multiple folders per project, drag-and-drop threads between projects.\n- **Persistent history** — Threads survive restarts. Codex history is rebuilt from `~/.codex/sessions` rollouts; Claude history comes through the Agent SDK.\n- **Interactive prompts** — Inline cards for tool approvals, plan approvals, and `AskUserQuestion` requests from agents.\n- **Permission modes** — Switch between `ask`, `plan`, `accept-edits`, or `bypass-permissions` per thread.\n- **Native macOS feel** — Vibrancy sidebar, hidden-inset title bar, traffic lights, dark theme with blue accent.\n\n---\n\n## How It Works Under the Hood\n\nThe architecture is a straightforward pnpm monorepo:\n\n```\napps/\n  web/      React + Vite + Tailwind v4 frontend\n  desktop/  Electron shell + CLI agent integration\npackages/\n  shared/   Zod schemas for IPC messages\n```\n\nThe Electron main process spawns `claude` via the Agent SDK (with `--output-format stream-json`) and `codex` via its app-server JSON-RPC interface. Both agents output over IPC to the React frontend. It is macOS-only for now, but the author notes the architecture can be extended to other platforms.\n\n---\n\n## Installing It\n\nDownload the latest `.dmg` from the [Releases page](https://github.com/alekseyrozh/openclawdex/releases), drag it to Applications, and launch. You will need at least one CLI agent installed and authenticated:\n\n- **Claude Code**: `npm install -g @anthropic-ai/claude-code` then `claude auth login`\n- **OpenAI Codex**: `npm install -g @openai/codex` then `codex login`\n\nThe model picker greys out whichever provider is not available, so having just one installed is fine.\n\n---\n\n## Why This Matters for the OpenClaw Ecosystem\n\nOpenClawdex is not an OpenClaw feature — it is a standalone tool built by a community developer. But its existence is a meaningful signal. The project explicitly calls out OpenClaw in its HN launch post (\"I wanted a lightweight UI... for my OpenClaw setup\"), and the feature set maps cleanly onto how OpenClaw users already think about managing agent sessions.\n\nThird-party native tooling built around OpenClaw workflows is still relatively rare. When it does appear — and when it ships with this level of polish on day one — it is worth paying attention.\n\n**Links:**\n- [GitHub: alekseyrozh/openclawdex](https://github.com/alekseyrozh/openclawdex)\n- [Show HN discussion](https://news.ycombinator.com/item?id=47823501)",
      "content_html": "<p>If you run OpenClaw on a Mac and regularly switch between Claude Code and OpenAI Codex, a new open-source project called <strong>OpenClawdex</strong> is worth five minutes of your time. It is a native macOS desktop application — MIT licensed, free, built by <a href=\"https://github.com/alekseyrozh\">alekseyrozh</a> — that wraps both coding agents in a single interface with the kind of platform-native polish that most AI tooling skips entirely.</p><p>The project <a href=\"https://news.ycombinator.com/item?id=47823501\">surfaced on Hacker News</a> today and the GitHub repo is already detailed enough to get started immediately.</p><p>---</p><h2>What It Does</h2><p>OpenClawdex spawns Claude Code and OpenAI Codex as subprocesses and bridges their output to a React UI running inside an Electron shell. The experience is intentionally minimal: no built-in diff sidebar, no custom file viewer. Instead, clicking any file path or diff link jumps straight into VS Code, Cursor, or whichever editor you have configured.</p><p>The core feature list:</p><ul><li><strong>No separate login</strong> — Uses your existing <code>claude</code> and <code>codex</code> CLI authentication. Your Claude Max subscription and ChatGPT/Codex plan both work out of the box without pasting API keys or completing an OAuth flow.</li><li><strong>Two agents, one UI</strong> — Run Claude Code and OpenAI Codex side by side. Switch model and reasoning effort per thread independently.</li><li><strong>Parallel threads</strong> — Spawn as many concurrent agent sessions as you want. Each runs in its own subprocess.</li><li><strong>Project organization</strong> — Group threads by project, support multiple folders per project, drag-and-drop threads between projects.</li><li><strong>Persistent history</strong> — Threads survive restarts. Codex history is rebuilt from <code>~/.codex/sessions</code> rollouts; Claude history comes through the Agent SDK.</li><li><strong>Interactive prompts</strong> — Inline cards for tool approvals, plan approvals, and <code>AskUserQuestion</code> requests from agents.</li><li><strong>Permission modes</strong> — Switch between <code>ask</code>, <code>plan</code>, <code>accept-edits</code>, or <code>bypass-permissions</code> per thread.</li><li><strong>Native macOS feel</strong> — Vibrancy sidebar, hidden-inset title bar, traffic lights, dark theme with blue accent.</li></ul><p>---</p><h2>How It Works Under the Hood</h2><p>The architecture is a straightforward pnpm monorepo:</p><p>``<code><br />apps/<br />  web/      React + Vite + Tailwind v4 frontend<br />  desktop/  Electron shell + CLI agent integration<br />packages/<br />  shared/   Zod schemas for IPC messages<br /></code>`<code></p><p>The Electron main process spawns </code>claude<code> via the Agent SDK (with </code>--output-format stream-json<code>) and </code>codex<code> via its app-server JSON-RPC interface. Both agents output over IPC to the React frontend. It is macOS-only for now, but the author notes the architecture can be extended to other platforms.</p><p>---</p><h2>Installing It</h2><p>Download the latest </code>.dmg<code> from the <a href=\"https://github.com/alekseyrozh/openclawdex/releases\">Releases page</a>, drag it to Applications, and launch. You will need at least one CLI agent installed and authenticated:</p><ul><li><strong>Claude Code</strong>: </code>npm install -g @anthropic-ai/claude-code<code> then </code>claude auth login<code></li><li><strong>OpenAI Codex</strong>: </code>npm install -g @openai/codex<code> then </code>codex login`</li></ul><p>The model picker greys out whichever provider is not available, so having just one installed is fine.</p><p>---</p><h2>Why This Matters for the OpenClaw Ecosystem</h2><p>OpenClawdex is not an OpenClaw feature — it is a standalone tool built by a community developer. But its existence is a meaningful signal. The project explicitly calls out OpenClaw in its HN launch post (\"I wanted a lightweight UI... for my OpenClaw setup\"), and the feature set maps cleanly onto how OpenClaw users already think about managing agent sessions.</p><p>Third-party native tooling built around OpenClaw workflows is still relatively rare. When it does appear — and when it ships with this level of polish on day one — it is worth paying attention.</p><ul><li><strong>Links:</strong></li><li><a href=\"https://github.com/alekseyrozh/openclawdex\">GitHub: alekseyrozh/openclawdex</a></li><li><a href=\"https://news.ycombinator.com/item?id=47823501\">Show HN discussion</a></li></ul>",
      "date_published": "2026-04-19T23:05:00.000Z",
      "date_modified": "2026-04-19T23:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-openclawdex-macos-coding-agent-ui.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-19-sunday-hn-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-19-sunday-hn-roundup/",
      "title": "OpenClaw Creator Speaks, HN Responds: A Big Sunday for the Community",
      "summary": "Peter Steinberger shares a TedTalk on creating OpenClaw, 386 HN users answer who uses it, and OpenClawdex debuts as a native macOS coding agent UI.",
      "content_text": "Sunday April 19th turned into an unexpectedly active day for OpenClaw on Hacker News. Three separate threads are pulling attention at once, and together they paint a picture of a project that has quietly crossed a threshold from niche power-tool to something a much wider audience is seriously evaluating.\n\n---\n\n## Peter Steinberger Talks About Creating OpenClaw\n\nThe most notable item of the day: a YouTube video titled **\"I Created OpenClaw\"** attributed to Peter Steinberger — the person behind the project — was [shared on Hacker News](https://news.ycombinator.com/item?id=47826500) this afternoon. The submission itself is light on engagement so far (it landed Sunday evening), but the signal it sends is meaningful.\n\nFor a project that has largely spread through word of mouth, GitHub stars, and community tutorials, having the creator speak publicly in a long-form video format is a natural inflection point. It gives new users a place to understand the *why* behind OpenClaw's design decisions, not just the what. If you have been on the fence about investing time in learning the platform, a first-person founder account is usually the highest-quality onboarding material available.\n\nThe video is available on YouTube: [Watch here](https://www.youtube.com/watch?v=7rzYDM6vMtI).\n\n---\n\n## \"Ask HN: Who Is Using OpenClaw?\" — 386 Answers and Counting\n\nOne thread from earlier this week has become one of the more remarkable OpenClaw community documents in recent memory. The question — [\"Ask HN: Who is using OpenClaw?\"](https://news.ycombinator.com/item?id=47783940) — was posted by a skeptic who noted they did not personally use it despite feeling plugged into the AI world.\n\nThe answers that poured in tell a different story:\n\n- **Developers using it as a persistent multi-agent OS**, not just a chatbot interface\n- **Remote workers** running OpenClaw on home servers and accessing it from phones and tablets\n- **Teams** using it for email triage, calendar management, and code review on shared infrastructure\n- **Self-hosters** who appreciate that no data leaves their machine\n- **People in non-English-speaking countries** who find it easier to think in their native language and let OpenClaw handle the translation layer with AI\n\nAs of Sunday night, the thread sits at **336 points and 386 comments** — with activity continuing into April 19th. The range of use cases on display is wider than most OpenClaw coverage suggests, and it is worth reading if you want a realistic picture of who is actually running this software in 2026.\n\n---\n\n## OpenClawdex Launches: A Native macOS UI for Claude Code and Codex\n\nAlso surfacing on HN today: [OpenClawdex](https://github.com/alekseyrozh/openclawdex) — an open-source, MIT-licensed macOS desktop application that lets you orchestrate Claude Code and OpenAI Codex agents from a single native UI. The [Show HN post](https://news.ycombinator.com/item?id=47823501) earned early discussion and highlights a trend of third-party tooling building specifically around the OpenClaw ecosystem.\n\nOpenClawdex is covered in more detail in [our dedicated post](#), but the short version: it uses your existing CLI auth (no API keys, no OAuth), runs agents in parallel threads, and has the kind of native macOS polish — vibrancy sidebar, hidden title bar, traffic lights — that most cross-platform tools skip.\n\n---\n\n## What This Week Signals\n\nThree data points from one Sunday:\n\n1. The creator is telling the project's origin story publicly for the first time in video form.\n2. A candid community thread has pulled in nearly 400 first-person accounts of real-world OpenClaw usage.\n3. Third-party developers are shipping polished native tooling on top of the platform.\n\nNone of these are version releases. None are changelog entries. They are the kind of qualitative signals that show up before the quantitative metrics catch up. OpenClaw's adoption story is getting louder.",
      "content_html": "<p>Sunday April 19th turned into an unexpectedly active day for OpenClaw on Hacker News. Three separate threads are pulling attention at once, and together they paint a picture of a project that has quietly crossed a threshold from niche power-tool to something a much wider audience is seriously evaluating.</p><p>---</p><h2>Peter Steinberger Talks About Creating OpenClaw</h2><p>The most notable item of the day: a YouTube video titled <strong>\"I Created OpenClaw\"</strong> attributed to Peter Steinberger — the person behind the project — was <a href=\"https://news.ycombinator.com/item?id=47826500\">shared on Hacker News</a> this afternoon. The submission itself is light on engagement so far (it landed Sunday evening), but the signal it sends is meaningful.</p><p>For a project that has largely spread through word of mouth, GitHub stars, and community tutorials, having the creator speak publicly in a long-form video format is a natural inflection point. It gives new users a place to understand the <em>why</em> behind OpenClaw's design decisions, not just the what. If you have been on the fence about investing time in learning the platform, a first-person founder account is usually the highest-quality onboarding material available.</p><p>The video is available on YouTube: <a href=\"https://www.youtube.com/watch?v=7rzYDM6vMtI\">Watch here</a>.</p><p>---</p><h2>\"Ask HN: Who Is Using OpenClaw?\" — 386 Answers and Counting</h2><p>One thread from earlier this week has become one of the more remarkable OpenClaw community documents in recent memory. The question — <a href=\"https://news.ycombinator.com/item?id=47783940\">\"Ask HN: Who is using OpenClaw?\"</a> — was posted by a skeptic who noted they did not personally use it despite feeling plugged into the AI world.</p><p>The answers that poured in tell a different story:</p><ul><li><strong>Developers using it as a persistent multi-agent OS</strong>, not just a chatbot interface</li><li><strong>Remote workers</strong> running OpenClaw on home servers and accessing it from phones and tablets</li><li><strong>Teams</strong> using it for email triage, calendar management, and code review on shared infrastructure</li><li><strong>Self-hosters</strong> who appreciate that no data leaves their machine</li><li><strong>People in non-English-speaking countries</strong> who find it easier to think in their native language and let OpenClaw handle the translation layer with AI</li></ul><p>As of Sunday night, the thread sits at <strong>336 points and 386 comments</strong> — with activity continuing into April 19th. The range of use cases on display is wider than most OpenClaw coverage suggests, and it is worth reading if you want a realistic picture of who is actually running this software in 2026.</p><p>---</p><h2>OpenClawdex Launches: A Native macOS UI for Claude Code and Codex</h2><p>Also surfacing on HN today: <a href=\"https://github.com/alekseyrozh/openclawdex\">OpenClawdex</a> — an open-source, MIT-licensed macOS desktop application that lets you orchestrate Claude Code and OpenAI Codex agents from a single native UI. The <a href=\"https://news.ycombinator.com/item?id=47823501\">Show HN post</a> earned early discussion and highlights a trend of third-party tooling building specifically around the OpenClaw ecosystem.</p><p>OpenClawdex is covered in more detail in <a href=\"#\">our dedicated post</a>, but the short version: it uses your existing CLI auth (no API keys, no OAuth), runs agents in parallel threads, and has the kind of native macOS polish — vibrancy sidebar, hidden title bar, traffic lights — that most cross-platform tools skip.</p><p>---</p><h2>What This Week Signals</h2><p>Three data points from one Sunday:</p><ol><li>The creator is telling the project's origin story publicly for the first time in video form.</li><li>A candid community thread has pulled in nearly 400 first-person accounts of real-world OpenClaw usage.</li><li>Third-party developers are shipping polished native tooling on top of the platform.</li></ol><p>None of these are version releases. None are changelog entries. They are the kind of qualitative signals that show up before the quantitative metrics catch up. OpenClaw's adoption story is getting louder.</p>",
      "date_published": "2026-04-19T23:00:00.000Z",
      "date_modified": "2026-04-19T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-19-sunday-hn-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-19-usage-reporting-local-backends/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-19-usage-reporting-local-backends/",
      "title": "OpenClaw Fixes Token Usage Reporting for Local AI Backends",
      "summary": "A new OpenClaw beta ensures stream_options.include_usage is always sent, so Ollama, LM Studio, and custom OpenAI-compatible backends finally report real context usage.",
      "content_text": "One of the most common complaints from users running OpenClaw against local models — Ollama, LM Studio, vLLM, or any other OpenAI-compatible backend — has been broken token usage reporting. Context percentages showed as 0% in `/status`, and compaction logic couldn't make accurate decisions about when to summarize. A fix landed this morning in [v2026.4.19-beta.2](https://github.com/openclaw/openclaw/releases).\n\n## What Was Broken\n\nWhen OpenClaw makes a streaming completion request, it relies on the usage data returned at the end of the stream to track how many tokens are in the active context window. OpenAI's own infrastructure sends this automatically, but many local and custom OpenAI-compatible backends only include usage data when explicitly asked via `stream_options.include_usage: true` in the request payload.\n\nOpenClaw wasn't consistently sending this flag for all streaming requests. The result: backends that require the explicit ask would silently return no usage data, and the agent would show 0% context utilization — even when the context was nearly full. Worse, the compaction engine (which decides when to summarize long sessions) was flying blind on usage, potentially missing when a context window was filling up.\n\n## The Fix\n\n[PR #68746](https://github.com/openclaw/openclaw/pull/68746) (thanks [@kagura-agent](https://github.com/kagura-agent)) ensures `stream_options.include_usage` is always sent on streaming requests in the OpenAI-completions agent path. This is the transport path used by Ollama, LM Studio, OpenRouter, and any other server that speaks the OpenAI Chat Completions API.\n\nThe fix is unconditional on the streaming path — it doesn't try to guess whether your backend needs the flag. This means:\n\n- **Ollama users**: context usage will now appear correctly in `/status` after updating\n- **LM Studio / vLLM / LocalAI users**: same benefit — real token counts, not zeros\n- **OpenRouter users**: already worked for most models, but edge cases involving older proxy layers should now be covered\n- **Compaction**: the engine can now make accurate decisions about when to compact, reducing the risk of silent context overflow on long sessions with local models\n\n## Companion Fix for Status Persistence\n\nA related fix also in beta.2 ([#67695](https://github.com/openclaw/openclaw/pull/67695)) handles a different but complementary edge case: providers that return usage data on *most* replies but omit it on some (for example, certain tool-use responses or mid-stream partial chunks). Previously this would cause the displayed context percentage to drop back to 0% or \"unknown\" whenever a usage-omitting response came through.\n\nThe fix carries the last known token total forward in these cases, so `/status` shows a stable, non-flickering context percentage even across heterogeneous response streams.\n\n## Who Should Update\n\nIf you run OpenClaw against any local model backend or custom OpenAI-compatible endpoint and have ever seen 0% context usage in `/status`, this beta is worth testing:\n\n```bash\nnpm install -g openclaw@beta\n```\n\nFor users on the stable channel, this fix will land in the next stable release. Watch the [releases page](https://github.com/openclaw/openclaw/releases) for the stable tag.\n\n## Why This Matters Beyond the UI\n\nThe token usage display in `/status` isn't just cosmetic — it feeds into OpenClaw's automatic context management. Accurate usage numbers mean the agent knows when to compact, when to warn about approaching limits, and when to trigger model failover due to context pressure. Getting this right for local backends is especially important since those models often have smaller context windows than cloud providers, making accurate tracking even more critical.",
      "content_html": "<p>One of the most common complaints from users running OpenClaw against local models — Ollama, LM Studio, vLLM, or any other OpenAI-compatible backend — has been broken token usage reporting. Context percentages showed as 0% in <code>/status</code>, and compaction logic couldn't make accurate decisions about when to summarize. A fix landed this morning in <a href=\"https://github.com/openclaw/openclaw/releases\">v2026.4.19-beta.2</a>.</p><h2>What Was Broken</h2><p>When OpenClaw makes a streaming completion request, it relies on the usage data returned at the end of the stream to track how many tokens are in the active context window. OpenAI's own infrastructure sends this automatically, but many local and custom OpenAI-compatible backends only include usage data when explicitly asked via <code>stream_options.include_usage: true</code> in the request payload.</p><p>OpenClaw wasn't consistently sending this flag for all streaming requests. The result: backends that require the explicit ask would silently return no usage data, and the agent would show 0% context utilization — even when the context was nearly full. Worse, the compaction engine (which decides when to summarize long sessions) was flying blind on usage, potentially missing when a context window was filling up.</p><h2>The Fix</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/68746\">PR #68746</a> (thanks <a href=\"https://github.com/kagura-agent\">@kagura-agent</a>) ensures <code>stream_options.include_usage</code> is always sent on streaming requests in the OpenAI-completions agent path. This is the transport path used by Ollama, LM Studio, OpenRouter, and any other server that speaks the OpenAI Chat Completions API.</p><p>The fix is unconditional on the streaming path — it doesn't try to guess whether your backend needs the flag. This means:</p><ul><li><strong>Ollama users</strong>: context usage will now appear correctly in <code>/status</code> after updating</li><li><strong>LM Studio / vLLM / LocalAI users</strong>: same benefit — real token counts, not zeros</li><li><strong>OpenRouter users</strong>: already worked for most models, but edge cases involving older proxy layers should now be covered</li><li><strong>Compaction</strong>: the engine can now make accurate decisions about when to compact, reducing the risk of silent context overflow on long sessions with local models</li></ul><h2>Companion Fix for Status Persistence</h2><p>A related fix also in beta.2 (<a href=\"https://github.com/openclaw/openclaw/pull/67695\">#67695</a>) handles a different but complementary edge case: providers that return usage data on <em>most</em> replies but omit it on some (for example, certain tool-use responses or mid-stream partial chunks). Previously this would cause the displayed context percentage to drop back to 0% or \"unknown\" whenever a usage-omitting response came through.</p><p>The fix carries the last known token total forward in these cases, so <code>/status</code> shows a stable, non-flickering context percentage even across heterogeneous response streams.</p><h2>Who Should Update</h2><p>If you run OpenClaw against any local model backend or custom OpenAI-compatible endpoint and have ever seen 0% context usage in <code>/status</code>, this beta is worth testing:</p><p>``<code>bash<br />npm install -g openclaw@beta<br /></code>`<code></p><p>For users on the stable channel, this fix will land in the next stable release. Watch the <a href=\"https://github.com/openclaw/openclaw/releases\">releases page</a> for the stable tag.</p><h2>Why This Matters Beyond the UI</h2><p>The token usage display in </code>/status` isn't just cosmetic — it feeds into OpenClaw's automatic context management. Accurate usage numbers mean the agent knows when to compact, when to warn about approaching limits, and when to trigger model failover due to context pressure. Getting this right for local backends is especially important since those models often have smaller context windows than cloud providers, making accurate tracking even more critical.</p>",
      "date_published": "2026-04-19T08:05:00.000Z",
      "date_modified": "2026-04-19T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-19-usage-reporting-local-backends.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-19-nested-agent-fix/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-19-nested-agent-fix/",
      "title": "OpenClaw Beta Fixes Nested Agent Session Blocking",
      "summary": "Two new OpenClaw beta releases land on April 19th, squashing a session head-of-line block in nested agent lanes and restoring token usage visibility.",
      "content_text": "Two beta releases landed overnight for OpenClaw — [v2026.4.19-beta.1](https://github.com/openclaw/openclaw/releases) at 02:01 UTC and [v2026.4.19-beta.2](https://github.com/openclaw/openclaw/releases) at 05:55 UTC — delivering targeted fixes for anyone running multi-agent setups, local AI backends, or Codex-powered threads.\n\n## The Big One: Nested Agent Lane Blocking\n\nThe headline fix in beta.2 addresses a frustrating head-of-line blocking bug in nested agent lanes. Previously, a long-running nested agent task on one session could block **unrelated sessions across the entire gateway** — meaning a slow background agent job on Session A would freeze agents on Sessions B and C until it finished.\n\nThe fix ([#67785](https://github.com/openclaw/openclaw/pull/67785), thanks [@stainlu](https://github.com/stainlu)) scopes nested agent work per target session. Each session now has its own nested lane budget, so a busy session no longer starves others. If you run multiple concurrent agents — or just have one slow Codex thread running while trying to chat normally — this should make a noticeable difference.\n\n## Session Token Totals Preserved Across Providers\n\nAlso in beta.2: a fix ([#67695](https://github.com/openclaw/openclaw/pull/67695)) for providers that skip usage metadata on some responses. OpenClaw now carries forward the last known context usage instead of resetting to 0% or \"unknown\" — so `/status` and `openclaw sessions` keep showing meaningful token counts even when a provider omits usage in a specific reply.\n\n## Cross-Agent Channel Account Routing\n\nBeta.1 landed a fix ([#67508](https://github.com/openclaw/openclaw/pull/67508), thanks [@lukeboyett](https://github.com/lukeboyett) and [@gumadeiras](https://github.com/gumadeiras)) for cross-agent subagent spawns in shared rooms and multi-account setups. Child sessions were inheriting the caller's channel account rather than using the target agent's bound account — leading to messages being sent from the wrong account in shared workspaces or Discord servers with multiple bot accounts.\n\nThe fix routes cross-agent subagent spawns through the target agent's bound channel account, while still preserving peer and workspace/role-scoped bindings. Multi-account Discord or Slack setups should see cleaner message attribution after this lands in stable.\n\n## Codex Context Inflation Fix\n\nCodex users will appreciate the fix in beta.1 ([#64669](https://github.com/openclaw/openclaw/pull/64669), thanks [@cyrusaf](https://github.com/cyrusaf)): cumulative app-server token totals were being misread as fresh per-turn context usage, causing `/status` to report wildly inflated context percentages in long Codex threads. The session now correctly measures only what's in the active context window.\n\n## Telegram and Browser/CDP Improvements\n\nRounding out beta.1:\n\n- **Telegram/callbacks** ([#68588](https://github.com/openclaw/openclaw/pull/68588)): Stale pagination buttons on Telegram commands no longer wedge the update watermark, blocking newer updates from landing.\n- **Browser/CDP** ([#68207](https://github.com/openclaw/openclaw/pull/68207)): WSL-to-Windows Chrome endpoints no longer appear offline under strict SSRF defaults. Phase-specific diagnostics also now surface exactly which part of the CDP handshake failed.\n\n## Trying the Betas\n\nTo opt into the beta channel:\n\n```bash\nnpm install -g openclaw@beta\n```\n\nThese are pre-releases — run them on a staging gateway or secondary install if you're cautious. The fixes target real production pain points, but stable users should wait for the next tagged release.\n\nBoth betas are available now on the [OpenClaw GitHub releases page](https://github.com/openclaw/openclaw/releases).",
      "content_html": "<p>Two beta releases landed overnight for OpenClaw — <a href=\"https://github.com/openclaw/openclaw/releases\">v2026.4.19-beta.1</a> at 02:01 UTC and <a href=\"https://github.com/openclaw/openclaw/releases\">v2026.4.19-beta.2</a> at 05:55 UTC — delivering targeted fixes for anyone running multi-agent setups, local AI backends, or Codex-powered threads.</p><h2>The Big One: Nested Agent Lane Blocking</h2><p>The headline fix in beta.2 addresses a frustrating head-of-line blocking bug in nested agent lanes. Previously, a long-running nested agent task on one session could block <strong>unrelated sessions across the entire gateway</strong> — meaning a slow background agent job on Session A would freeze agents on Sessions B and C until it finished.</p><p>The fix (<a href=\"https://github.com/openclaw/openclaw/pull/67785\">#67785</a>, thanks <a href=\"https://github.com/stainlu\">@stainlu</a>) scopes nested agent work per target session. Each session now has its own nested lane budget, so a busy session no longer starves others. If you run multiple concurrent agents — or just have one slow Codex thread running while trying to chat normally — this should make a noticeable difference.</p><h2>Session Token Totals Preserved Across Providers</h2><p>Also in beta.2: a fix (<a href=\"https://github.com/openclaw/openclaw/pull/67695\">#67695</a>) for providers that skip usage metadata on some responses. OpenClaw now carries forward the last known context usage instead of resetting to 0% or \"unknown\" — so <code>/status</code> and <code>openclaw sessions</code> keep showing meaningful token counts even when a provider omits usage in a specific reply.</p><h2>Cross-Agent Channel Account Routing</h2><p>Beta.1 landed a fix (<a href=\"https://github.com/openclaw/openclaw/pull/67508\">#67508</a>, thanks <a href=\"https://github.com/lukeboyett\">@lukeboyett</a> and <a href=\"https://github.com/gumadeiras\">@gumadeiras</a>) for cross-agent subagent spawns in shared rooms and multi-account setups. Child sessions were inheriting the caller's channel account rather than using the target agent's bound account — leading to messages being sent from the wrong account in shared workspaces or Discord servers with multiple bot accounts.</p><p>The fix routes cross-agent subagent spawns through the target agent's bound channel account, while still preserving peer and workspace/role-scoped bindings. Multi-account Discord or Slack setups should see cleaner message attribution after this lands in stable.</p><h2>Codex Context Inflation Fix</h2><p>Codex users will appreciate the fix in beta.1 (<a href=\"https://github.com/openclaw/openclaw/pull/64669\">#64669</a>, thanks <a href=\"https://github.com/cyrusaf\">@cyrusaf</a>): cumulative app-server token totals were being misread as fresh per-turn context usage, causing <code>/status</code> to report wildly inflated context percentages in long Codex threads. The session now correctly measures only what's in the active context window.</p><h2>Telegram and Browser/CDP Improvements</h2><p>Rounding out beta.1:</p><ul><li><strong>Telegram/callbacks</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/68588\">#68588</a>): Stale pagination buttons on Telegram commands no longer wedge the update watermark, blocking newer updates from landing.</li><li><strong>Browser/CDP</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/68207\">#68207</a>): WSL-to-Windows Chrome endpoints no longer appear offline under strict SSRF defaults. Phase-specific diagnostics also now surface exactly which part of the CDP handshake failed.</li></ul><h2>Trying the Betas</h2><p>To opt into the beta channel:</p><p>``<code>bash<br />npm install -g openclaw@beta<br /></code>``</p><p>These are pre-releases — run them on a staging gateway or secondary install if you're cautious. The fixes target real production pain points, but stable users should wait for the next tagged release.</p><p>Both betas are available now on the <a href=\"https://github.com/openclaw/openclaw/releases\">OpenClaw GitHub releases page</a>.</p>",
      "date_published": "2026-04-19T08:00:00.000Z",
      "date_modified": "2026-04-19T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-19-nested-agent-fix.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-18-nilbox-zero-token-sandbox/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-18-nilbox-zero-token-sandbox/",
      "title": "Nilbox Brings Zero-Token Security to OpenClaw With a VM Sandbox",
      "summary": "Nilbox wraps OpenClaw in an isolated VM where real API tokens never enter the sandbox, eliminating key theft, data leakage, and runaway API bills.",
      "content_text": "Running OpenClaw locally means handing the agent real API keys — keys that live as environment variables accessible to every process on your machine, every npm package you've installed, and potentially any prompt injection that sneaks through. A new open-source project called **Nilbox** is trying to fix that at the architecture level.\n\n[Nilbox](https://nilbox.run) appeared on Hacker News on April 18th with the pitch: *\"Run OpenClaw without exposing your API tokens.\"* The approach is elegant: OpenClaw runs inside an isolated VM, but the VM never receives your real credentials. Instead, it gets a dummy placeholder token. A lightweight proxy on your host machine intercepts outbound API calls, swaps in the real token at the network layer, and forwards the request to the provider — all without the VM ever knowing a real key existed.\n\n## The Problem Nilbox Is Solving\n\nIf you've ever spun up OpenClaw on a shared machine, a laptop with dozens of npm packages installed, or a server with multiple services running, you've accepted a quiet risk: your API keys sit in plain text as environment variables. Any process with access to `process.env` (or a clever enough prompt injection) can read and exfiltrate them.\n\nThe standard advice — \"use a dedicated machine\" — is impractical for most people. Nilbox offers a different path.\n\n## Zero-Token Architecture\n\nThe core idea behind Nilbox is what they call **Zero Token Architecture**:\n\n1. **OpenClaw runs inside a VM** — a private sandbox on your existing PC, Mac, or Linux machine. No dedicated hardware required.\n2. **The VM gets a dummy token** — something like `ANTHROPIC_API_KEY=ANTHROPIC_API_KEY`. OpenClaw sees it as a valid-looking key and runs normally.\n3. **The host proxy intercepts and swaps** — when OpenClaw makes an API call, the nilbox proxy on the host intercepts the request, replaces the dummy token with your real credential, and forwards it to the cloud provider.\n4. **Zero attack surface** — even if the VM is fully compromised, there are no real credentials to steal.\n\nBeyond token security, Nilbox layers on additional controls:\n\n- **Directory-level access control**: OpenClaw can only read directories you explicitly allow. Your `~/.ssh`, `~/.env`, and `~/Documents` stay invisible unless you open them.\n- **Network allowlist**: Outbound traffic from the VM is blocked by default. You approve specific destinations (like `api.anthropic.com`). Everything else is silently dropped.\n- **Spending caps**: Set daily and monthly limits per provider. Once the cap is hit, Nilbox automatically blocks further requests — no more overnight bill shock.\n\n## Setup\n\nNilbox is open-source and described as a one-click install that works on macOS, Windows, and Linux. The project's landing page emphasizes that no admin privileges or terminal experience is required — the VM spins up from a single UI action.\n\nThe GitHub repository is at [github.com/rednakta/nilbox](https://github.com/rednakta/nilbox) (based on the HN author's handle; check the site for the official link).\n\n## Why This Matters\n\nThe security concerns Nilbox addresses aren't hypothetical. Prompt injection attacks against OpenClaw agents are an active research area, and the attack surface grows with each new plugin and channel integration you add. Keeping real credentials entirely outside the agent's execution environment is a sound defense-in-depth approach.\n\nThe project is still early — the HN post (3 points at time of writing) hasn't caught fire yet — but the architecture is interesting enough to watch. Similar zero-trust approaches have worked well in other agentic contexts (see: ArmorClaw's intent-assurance plugin), and \"sandbox the whole thing\" is a natural next step for users who want to give their OpenClaw agent access to sensitive systems without fully trusting every line of the agent's tool chain.\n\n## Try It\n\n- **Site**: [nilbox.run](https://nilbox.run)\n- **HN discussion**: [Show HN: Nilbox – Run OpenClaw without exposing your API tokens](https://news.ycombinator.com/item?id=47812193)\n\nIf API token security is a concern in your OpenClaw setup — especially if you're running the agent on a shared machine, giving it access to email or files, or using community-built skills from ClawHub — Nilbox is worth a look.",
      "content_html": "<p>Running OpenClaw locally means handing the agent real API keys — keys that live as environment variables accessible to every process on your machine, every npm package you've installed, and potentially any prompt injection that sneaks through. A new open-source project called <strong>Nilbox</strong> is trying to fix that at the architecture level.</p><p><a href=\"https://nilbox.run\">Nilbox</a> appeared on Hacker News on April 18th with the pitch: <em>\"Run OpenClaw without exposing your API tokens.\"</em> The approach is elegant: OpenClaw runs inside an isolated VM, but the VM never receives your real credentials. Instead, it gets a dummy placeholder token. A lightweight proxy on your host machine intercepts outbound API calls, swaps in the real token at the network layer, and forwards the request to the provider — all without the VM ever knowing a real key existed.</p><h2>The Problem Nilbox Is Solving</h2><p>If you've ever spun up OpenClaw on a shared machine, a laptop with dozens of npm packages installed, or a server with multiple services running, you've accepted a quiet risk: your API keys sit in plain text as environment variables. Any process with access to <code>process.env</code> (or a clever enough prompt injection) can read and exfiltrate them.</p><p>The standard advice — \"use a dedicated machine\" — is impractical for most people. Nilbox offers a different path.</p><h2>Zero-Token Architecture</h2><p>The core idea behind Nilbox is what they call <strong>Zero Token Architecture</strong>:</p><ol><li><strong>OpenClaw runs inside a VM</strong> — a private sandbox on your existing PC, Mac, or Linux machine. No dedicated hardware required.</li><li><strong>The VM gets a dummy token</strong> — something like <code>ANTHROPIC_API_KEY=ANTHROPIC_API_KEY</code>. OpenClaw sees it as a valid-looking key and runs normally.</li><li><strong>The host proxy intercepts and swaps</strong> — when OpenClaw makes an API call, the nilbox proxy on the host intercepts the request, replaces the dummy token with your real credential, and forwards it to the cloud provider.</li><li><strong>Zero attack surface</strong> — even if the VM is fully compromised, there are no real credentials to steal.</li></ol><p>Beyond token security, Nilbox layers on additional controls:</p><ul><li><strong>Directory-level access control</strong>: OpenClaw can only read directories you explicitly allow. Your <code>~/.ssh</code>, <code>~/.env</code>, and <code>~/Documents</code> stay invisible unless you open them.</li><li><strong>Network allowlist</strong>: Outbound traffic from the VM is blocked by default. You approve specific destinations (like <code>api.anthropic.com</code>). Everything else is silently dropped.</li><li><strong>Spending caps</strong>: Set daily and monthly limits per provider. Once the cap is hit, Nilbox automatically blocks further requests — no more overnight bill shock.</li></ul><h2>Setup</h2><p>Nilbox is open-source and described as a one-click install that works on macOS, Windows, and Linux. The project's landing page emphasizes that no admin privileges or terminal experience is required — the VM spins up from a single UI action.</p><p>The GitHub repository is at <a href=\"https://github.com/rednakta/nilbox\">github.com/rednakta/nilbox</a> (based on the HN author's handle; check the site for the official link).</p><h2>Why This Matters</h2><p>The security concerns Nilbox addresses aren't hypothetical. Prompt injection attacks against OpenClaw agents are an active research area, and the attack surface grows with each new plugin and channel integration you add. Keeping real credentials entirely outside the agent's execution environment is a sound defense-in-depth approach.</p><p>The project is still early — the HN post (3 points at time of writing) hasn't caught fire yet — but the architecture is interesting enough to watch. Similar zero-trust approaches have worked well in other agentic contexts (see: ArmorClaw's intent-assurance plugin), and \"sandbox the whole thing\" is a natural next step for users who want to give their OpenClaw agent access to sensitive systems without fully trusting every line of the agent's tool chain.</p><h2>Try It</h2><ul><li><strong>Site</strong>: <a href=\"https://nilbox.run\">nilbox.run</a></li><li><strong>HN discussion</strong>: <a href=\"https://news.ycombinator.com/item?id=47812193\">Show HN: Nilbox – Run OpenClaw without exposing your API tokens</a></li></ul><p>If API token security is a concern in your OpenClaw setup — especially if you're running the agent on a shared machine, giving it access to email or files, or using community-built skills from ClawHub — Nilbox is worth a look.</p>",
      "date_published": "2026-04-18T23:00:00.000Z",
      "date_modified": "2026-04-18T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-18-nilbox-zero-token-sandbox.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-18-telegram-abort-ghost-reply/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-18-telegram-abort-ghost-reply/",
      "title": "OpenClaw Patches Telegram Ghost Reply Bug After Session Aborts",
      "summary": "A race condition in OpenClaw's Telegram dispatcher could resurface old replies after a turn was aborted. PR #68100 seals the escape hatches with a per-session abort fence.",
      "content_text": "Telegram users running OpenClaw have occasionally seen a strange ghost: after hitting abort, the agent sends the old reply anyway — or leaves stale reactions pinned to a message that was already superseded. [PR #68100](https://github.com/openclaw/openclaw/pull/68100) by **rubencu**, merged April 18th, tracks down every escape hatch and seals them.\n\n## The Problem: A Race With Two Lanes\n\nTelegram routes normal message traffic and abort commands through **separate control lanes**. That's by design, but it creates a window where an abort can overtake a reply that's already in flight.\n\nThe failure sequence looks like this:\n\n1. Turn A starts — the agent begins composing a reply, possibly showing a preview\n2. Turn B arrives on Telegram's control lane and aborts the active run\n3. Telegram correctly displays ⚙️ *Agent was aborted.*\n4. Stale finalization work from Turn A is still in flight — and it completes anyway, sending the old answer or leaving stale reactions behind\n\nThe bug had four distinct escape hatches, all in `extensions/telegram/src/bot-message-dispatch.ts`:\n\n- **Pre-dispatch async work** could delay fence registration long enough for an abort to land and clear before the guard existed\n- **Queued draft-lane callbacks** could miss an abort that arrived while they were waiting\n- **Pre-dispatch setup errors** could exit before the `finally` cleanup, leaking abort-fence state for the session\n- **Superseded cleanup** could still call `stream.stop()`, which final-flushes hidden short partials into a brand-new stale preview\n\n## The Fix: A Per-Session Abort Generation Fence\n\nThe solution is a session-scoped generation counter keyed by `CommandTargetSessionKey` (falling back to `SessionKey`, then chat/thread). Here's the approach:\n\n- **Fence is registered before any awaited pre-dispatch work** — no more window where an abort can land before the guard is in place\n- **Only abort requests increment the generation** — normal replies don't interfere with each other\n- **Stale same-session work becomes a no-op** after supersession, covering queued callbacks, late previews, final delivery/edit paths, and fallback sends\n- **Supersession is re-checked after draining queued draft-lane work** so cleanup sees aborts that landed during the drain\n- **Pre-dispatch errors release the fence** instead of leaking per-session state on throw\n- A new **non-flushing `discard()` shutdown** prevents superseded hidden partials from materializing into stale previews\n\n## Test Coverage\n\nThe PR adds comprehensive coverage to `bot-message-dispatch.test.ts`:\n\n- Same-session abort suppresses stale old answer finalization\n- Different-session abort does **not** suppress the older answer (important isolation case)\n- Same-session abort on the control lane still supersedes via `CommandTargetSessionKey`\n- Aborts during async pre-dispatch work still fence the older reply\n- Aborts during queued draft-lane drain don't clear an already-visible preview\n- Hidden short partials are discarded, not flushed, after abort\n\nPlus new tests in `draft-stream.test.ts` covering the `discard()` behavior.\n\n## What Changes for You\n\nIf you use OpenClaw over Telegram, this fix:\n\n- Eliminates ghost replies appearing after you abort a turn\n- Clears stale \"thinking\" or reaction states properly\n- Keeps multi-session setups isolated — other sessions' aborts don't bleed over\n\nNo config changes required. The fix is scoped entirely to Telegram's dispatcher and does not touch shared reply or runtime contracts.\n\nTrack the release on [GitHub](https://github.com/openclaw/openclaw/releases) or pull from main if you need this immediately.",
      "content_html": "<p>Telegram users running OpenClaw have occasionally seen a strange ghost: after hitting abort, the agent sends the old reply anyway — or leaves stale reactions pinned to a message that was already superseded. <a href=\"https://github.com/openclaw/openclaw/pull/68100\">PR #68100</a> by <strong>rubencu</strong>, merged April 18th, tracks down every escape hatch and seals them.</p><h2>The Problem: A Race With Two Lanes</h2><p>Telegram routes normal message traffic and abort commands through <strong>separate control lanes</strong>. That's by design, but it creates a window where an abort can overtake a reply that's already in flight.</p><p>The failure sequence looks like this:</p><ol><li>Turn A starts — the agent begins composing a reply, possibly showing a preview</li><li>Turn B arrives on Telegram's control lane and aborts the active run</li><li>Telegram correctly displays ⚙️ <em>Agent was aborted.</em></li><li>Stale finalization work from Turn A is still in flight — and it completes anyway, sending the old answer or leaving stale reactions behind</li></ol><p>The bug had four distinct escape hatches, all in <code>extensions/telegram/src/bot-message-dispatch.ts</code>:</p><ul><li><strong>Pre-dispatch async work</strong> could delay fence registration long enough for an abort to land and clear before the guard existed</li><li><strong>Queued draft-lane callbacks</strong> could miss an abort that arrived while they were waiting</li><li><strong>Pre-dispatch setup errors</strong> could exit before the <code>finally</code> cleanup, leaking abort-fence state for the session</li><li><strong>Superseded cleanup</strong> could still call <code>stream.stop()</code>, which final-flushes hidden short partials into a brand-new stale preview</li></ul><h2>The Fix: A Per-Session Abort Generation Fence</h2><p>The solution is a session-scoped generation counter keyed by <code>CommandTargetSessionKey</code> (falling back to <code>SessionKey</code>, then chat/thread). Here's the approach:</p><ul><li><strong>Fence is registered before any awaited pre-dispatch work</strong> — no more window where an abort can land before the guard is in place</li><li><strong>Only abort requests increment the generation</strong> — normal replies don't interfere with each other</li><li><strong>Stale same-session work becomes a no-op</strong> after supersession, covering queued callbacks, late previews, final delivery/edit paths, and fallback sends</li><li><strong>Supersession is re-checked after draining queued draft-lane work</strong> so cleanup sees aborts that landed during the drain</li><li><strong>Pre-dispatch errors release the fence</strong> instead of leaking per-session state on throw</li><li>A new <strong>non-flushing <code>discard()</code> shutdown</strong> prevents superseded hidden partials from materializing into stale previews</li></ul><h2>Test Coverage</h2><p>The PR adds comprehensive coverage to <code>bot-message-dispatch.test.ts</code>:</p><ul><li>Same-session abort suppresses stale old answer finalization</li><li>Different-session abort does <strong>not</strong> suppress the older answer (important isolation case)</li><li>Same-session abort on the control lane still supersedes via <code>CommandTargetSessionKey</code></li><li>Aborts during async pre-dispatch work still fence the older reply</li><li>Aborts during queued draft-lane drain don't clear an already-visible preview</li><li>Hidden short partials are discarded, not flushed, after abort</li></ul><p>Plus new tests in <code>draft-stream.test.ts</code> covering the <code>discard()</code> behavior.</p><h2>What Changes for You</h2><p>If you use OpenClaw over Telegram, this fix:</p><ul><li>Eliminates ghost replies appearing after you abort a turn</li><li>Clears stale \"thinking\" or reaction states properly</li><li>Keeps multi-session setups isolated — other sessions' aborts don't bleed over</li></ul><p>No config changes required. The fix is scoped entirely to Telegram's dispatcher and does not touch shared reply or runtime contracts.</p><p>Track the release on <a href=\"https://github.com/openclaw/openclaw/releases\">GitHub</a> or pull from main if you need this immediately.</p>",
      "date_published": "2026-04-18T08:05:00.000Z",
      "date_modified": "2026-04-18T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-18-telegram-abort-ghost-reply.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-18-control-ui-mic-fix/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-18-control-ui-mic-fix/",
      "title": "OpenClaw Fixes Silent Mic Failure in the Control UI Chat",
      "summary": "A Permissions-Policy header was quietly blocking the Control UI mic button for all users. PR #68368 unlocks same-origin microphone access so voice input finally works.",
      "content_text": "If you've ever clicked the microphone button in OpenClaw's Control UI chat and watched it silently reset — no error, no recording, nothing — you weren't imagining things. A years-old Permissions-Policy header was blocking browser microphone access for the page itself.\n\n[PR #68368](https://github.com/openclaw/openclaw/pull/68368) by contributor **visionik**, merged April 18th, finally closes [issue #51085](https://github.com/openclaw/openclaw/issues/51085) and restores working voice input to the dashboard.\n\n## What Was Broken\n\nOpenClaw's gateway sets a suite of security headers on its HTTP responses, including a `Permissions-Policy` header. The problem: that header included `microphone=()`, which is the most restrictive setting possible. It tells the browser to deny microphone access to **every** origin — including the page itself.\n\nWhen the Control UI's `speech.ts` tried to start the browser's Web Speech API, it hit an immediate policy wall:\n\n```\nPermissions policy violation: microphone is not allowed in this document\n```\n\nThe mic button would silently reset. No feedback, no fallback. Users running voice-based workflows were left in the dark.\n\n## The Fix\n\nThe solution is surgical. Rather than opening up microphone access broadly, the new policy sets `microphone=(self)` — the same-origin allowlist. The gateway's own web interface can now use the Web Speech API, while third-party frames remain fully blocked.\n\nCamera and geolocation stay at their existing deny-all settings. The change is narrowly scoped to the one capability that OpenClaw's Control UI actually needs.\n\n```\nBefore: Permissions-Policy: microphone=()\nAfter:  Permissions-Policy: microphone=(self)\n```\n\n## What You Get\n\n- **Voice input works again** in the Control UI chat panel\n- No browser warnings, no silent resets\n- Third-party iframes remain blocked from accessing your microphone\n- Full test coverage added: unit tests assert the new same-origin policy, fuzz tests cover invariants across all HTTP response helpers\n\nThe PR also ships a comprehensive test suite for `src/gateway/http-common.ts` — 33 unit tests plus 13 fuzz-style property tests — bringing that module to 100% line, branch, function, and statement coverage.\n\n## How to Get It\n\nThis fix is in-flight toward the next OpenClaw release. Track it on the [GitHub releases page](https://github.com/openclaw/openclaw/releases). If you're building from main, it's in now.\n\nIf voice input matters to your workflow, this one's worth pulling early.",
      "content_html": "<p>If you've ever clicked the microphone button in OpenClaw's Control UI chat and watched it silently reset — no error, no recording, nothing — you weren't imagining things. A years-old Permissions-Policy header was blocking browser microphone access for the page itself.</p><p><a href=\"https://github.com/openclaw/openclaw/pull/68368\">PR #68368</a> by contributor <strong>visionik</strong>, merged April 18th, finally closes <a href=\"https://github.com/openclaw/openclaw/issues/51085\">issue #51085</a> and restores working voice input to the dashboard.</p><h2>What Was Broken</h2><p>OpenClaw's gateway sets a suite of security headers on its HTTP responses, including a <code>Permissions-Policy</code> header. The problem: that header included <code>microphone=()</code>, which is the most restrictive setting possible. It tells the browser to deny microphone access to <strong>every</strong> origin — including the page itself.</p><p>When the Control UI's <code>speech.ts</code> tried to start the browser's Web Speech API, it hit an immediate policy wall:</p><p>``<code><br />Permissions policy violation: microphone is not allowed in this document<br /></code>`<code></p><p>The mic button would silently reset. No feedback, no fallback. Users running voice-based workflows were left in the dark.</p><h2>The Fix</h2><p>The solution is surgical. Rather than opening up microphone access broadly, the new policy sets </code>microphone=(self)<code> — the same-origin allowlist. The gateway's own web interface can now use the Web Speech API, while third-party frames remain fully blocked.</p><p>Camera and geolocation stay at their existing deny-all settings. The change is narrowly scoped to the one capability that OpenClaw's Control UI actually needs.</p><p></code>`<code><br />Before: Permissions-Policy: microphone=()<br />After:  Permissions-Policy: microphone=(self)<br /></code>`<code></p><h2>What You Get</h2><ul><li><strong>Voice input works again</strong> in the Control UI chat panel</li><li>No browser warnings, no silent resets</li><li>Third-party iframes remain blocked from accessing your microphone</li><li>Full test coverage added: unit tests assert the new same-origin policy, fuzz tests cover invariants across all HTTP response helpers</li></ul><p>The PR also ships a comprehensive test suite for </code>src/gateway/http-common.ts` — 33 unit tests plus 13 fuzz-style property tests — bringing that module to 100% line, branch, function, and statement coverage.</p><h2>How to Get It</h2><p>This fix is in-flight toward the next OpenClaw release. Track it on the <a href=\"https://github.com/openclaw/openclaw/releases\">GitHub releases page</a>. If you're building from main, it's in now.</p><p>If voice input matters to your workflow, this one's worth pulling early.</p>",
      "date_published": "2026-04-18T08:00:00.000Z",
      "date_modified": "2026-04-18T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-18-control-ui-mic-fix.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-17-engineering-managers-hate-openclaw/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-17-engineering-managers-hate-openclaw/",
      "title": "Why Engineering Managers Are Going to Hate OpenClaw",
      "summary": "A viral newsletter argues OpenClaw's proactive heartbeat feature is launching a new agentic AI wave — one that could hit dev teams harder than the last ChatGPT hype cycle.",
      "content_text": "A Substack post making the rounds today from software engineering newsletter [manager.dev](https://newsletter.manager.dev) makes a bold prediction: the rise of OpenClaw's proactive \"heartbeat\" feature is about to cause the same kind of organizational chaos that the first wave of ChatGPT integrations did — except this time, the blast radius is bigger.\n\nThe piece, titled *\"Engineering Managers are going to hate OpenClaw\"*, was written by Zaide Anton and drew immediate attention on Hacker News after landing on April 17. Its central argument is worth unpacking.\n\n## OpenClaw Just Passed React on GitHub\n\nAnton opens with a striking data point: OpenClaw has surpassed React to become the 8th most-starred GitHub project, sitting at over 350,000 stars. It's the fastest-growing open-source project in GitHub history, built by Austrian developer Peter Steinberger — who connected a messaging app, an LLM, and a terminal, then assumed Google or OpenAI would replicate it within weeks. They didn't.\n\nThat rapid rise is one reason this piece is resonating. OpenClaw is no longer a niche power-user tool. It's becoming a platform.\n\n## The Three Things That Made OpenClaw Go Viral\n\nAnton breaks down what separates OpenClaw from a simple Claude Code setup: memory (plain Markdown files written to your filesystem), channels (interact via Slack, iMessage, WhatsApp, Telegram), and the heartbeat.\n\nThe **heartbeat** is the crux. Every 30 minutes, an OpenClaw agent wakes up, checks for things that need doing, and proactively sends you messages. It can monitor Gmail, watch deployments, summarize Slack, file expenses. It's reactive automation made accessible — and that's exactly what makes it dangerous in the wrong hands.\n\n> \"With prompting, you are much less careful,\" Anton writes. \"A chatbot that gives wrong answers is embarrassing, but an agent that acts on wrong assumptions is like a bomb.\"\n\n## A Cautionary History\n\nThe piece draws a direct parallel to the 2023 chatbot hype wave: companies that bolted ChatGPT onto products their users never asked to talk to. The Chevrolet bot that sold a car for $1. The supermarket bot suggesting poisonous recipes. Snapchat's 1-star review spike.\n\nAnton's concern is that the \"agentic wave\" will follow the same pattern — CPOs pushing for OpenClaw-like features because the board read some hype tweets, without engineering managers in the room early enough to scope the risk. The difference now is that agents don't just say wrong things; they *do* wrong things.\n\n## What This Means in Practice\n\nThe piece includes some vivid examples of how agents could go sideways at scale: a Notion agent that reorganizes your workspace overnight \"because it decided your folder structure was too messy,\" or a McDonald's agent that orders food before you open the app. These aren't hypotheticals designed to scare — they're extrapolations from real patterns already emerging in early OpenClaw deployments.\n\nAt the same time, Anton acknowledges genuinely compelling use cases. [Linear's agent](https://linear.app/changelog/2026-03-24-introducing-linear-agent), for example, is shifting issue tracking from a UI people click through to a database agents operate against. If Salesforce becomes a backend that OpenClaw queries rather than a product users log into, entire product categories may be disrupted.\n\n## The Recommendation\n\nAnton's advice for engineering managers is measured: don't dismiss this as hype. Set aside two hours to actually run OpenClaw, NanoClaw, or PaperClip — not because you need to become an expert, but because your PM is already thinking about it and \"having at least some early experience on the consumer side can help you a lot in upcoming conversations.\"\n\nIt's a pragmatic take: neither \"ban it\" nor \"ship it everywhere.\" Understand it before your org gets a requirement that was designed without you.\n\n## Why This Piece Matters for OpenClaw's Trajectory\n\nThe significance of this article is less about the arguments it makes and more about where it's appearing. Manager.dev is read by engineering leads at mid-to-large companies. When newsletters aimed at technical management start writing about OpenClaw — not to explain what it is, but to warn about *how to handle pressure to deploy it* — that signals the tool has crossed from enthusiast project to enterprise consideration.\n\nThat's a different kind of moment than a GitHub star milestone. It's the indicator that decisions about OpenClaw adoption are moving up the org chart, with or without input from the people who understand what's actually involved.\n\nYou can read the full piece at [newsletter.manager.dev](https://newsletter.manager.dev).",
      "content_html": "<p>A Substack post making the rounds today from software engineering newsletter <a href=\"https://newsletter.manager.dev\">manager.dev</a> makes a bold prediction: the rise of OpenClaw's proactive \"heartbeat\" feature is about to cause the same kind of organizational chaos that the first wave of ChatGPT integrations did — except this time, the blast radius is bigger.</p><p>The piece, titled <em>\"Engineering Managers are going to hate OpenClaw\"</em>, was written by Zaide Anton and drew immediate attention on Hacker News after landing on April 17. Its central argument is worth unpacking.</p><h2>OpenClaw Just Passed React on GitHub</h2><p>Anton opens with a striking data point: OpenClaw has surpassed React to become the 8th most-starred GitHub project, sitting at over 350,000 stars. It's the fastest-growing open-source project in GitHub history, built by Austrian developer Peter Steinberger — who connected a messaging app, an LLM, and a terminal, then assumed Google or OpenAI would replicate it within weeks. They didn't.</p><p>That rapid rise is one reason this piece is resonating. OpenClaw is no longer a niche power-user tool. It's becoming a platform.</p><h2>The Three Things That Made OpenClaw Go Viral</h2><p>Anton breaks down what separates OpenClaw from a simple Claude Code setup: memory (plain Markdown files written to your filesystem), channels (interact via Slack, iMessage, WhatsApp, Telegram), and the heartbeat.</p><p>The <strong>heartbeat</strong> is the crux. Every 30 minutes, an OpenClaw agent wakes up, checks for things that need doing, and proactively sends you messages. It can monitor Gmail, watch deployments, summarize Slack, file expenses. It's reactive automation made accessible — and that's exactly what makes it dangerous in the wrong hands.</p><p>> \"With prompting, you are much less careful,\" Anton writes. \"A chatbot that gives wrong answers is embarrassing, but an agent that acts on wrong assumptions is like a bomb.\"</p><h2>A Cautionary History</h2><p>The piece draws a direct parallel to the 2023 chatbot hype wave: companies that bolted ChatGPT onto products their users never asked to talk to. The Chevrolet bot that sold a car for $1. The supermarket bot suggesting poisonous recipes. Snapchat's 1-star review spike.</p><p>Anton's concern is that the \"agentic wave\" will follow the same pattern — CPOs pushing for OpenClaw-like features because the board read some hype tweets, without engineering managers in the room early enough to scope the risk. The difference now is that agents don't just say wrong things; they <em>do</em> wrong things.</p><h2>What This Means in Practice</h2><p>The piece includes some vivid examples of how agents could go sideways at scale: a Notion agent that reorganizes your workspace overnight \"because it decided your folder structure was too messy,\" or a McDonald's agent that orders food before you open the app. These aren't hypotheticals designed to scare — they're extrapolations from real patterns already emerging in early OpenClaw deployments.</p><p>At the same time, Anton acknowledges genuinely compelling use cases. <a href=\"https://linear.app/changelog/2026-03-24-introducing-linear-agent\">Linear's agent</a>, for example, is shifting issue tracking from a UI people click through to a database agents operate against. If Salesforce becomes a backend that OpenClaw queries rather than a product users log into, entire product categories may be disrupted.</p><h2>The Recommendation</h2><p>Anton's advice for engineering managers is measured: don't dismiss this as hype. Set aside two hours to actually run OpenClaw, NanoClaw, or PaperClip — not because you need to become an expert, but because your PM is already thinking about it and \"having at least some early experience on the consumer side can help you a lot in upcoming conversations.\"</p><p>It's a pragmatic take: neither \"ban it\" nor \"ship it everywhere.\" Understand it before your org gets a requirement that was designed without you.</p><h2>Why This Piece Matters for OpenClaw's Trajectory</h2><p>The significance of this article is less about the arguments it makes and more about where it's appearing. Manager.dev is read by engineering leads at mid-to-large companies. When newsletters aimed at technical management start writing about OpenClaw — not to explain what it is, but to warn about <em>how to handle pressure to deploy it</em> — that signals the tool has crossed from enthusiast project to enterprise consideration.</p><p>That's a different kind of moment than a GitHub star milestone. It's the indicator that decisions about OpenClaw adoption are moving up the org chart, with or without input from the people who understand what's actually involved.</p><p>You can read the full piece at <a href=\"https://newsletter.manager.dev\">newsletter.manager.dev</a>.</p>",
      "date_published": "2026-04-17T23:00:00.000Z",
      "date_modified": "2026-04-17T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-17-engineering-managers-hate-openclaw.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-17-macos-screen-snapshot/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-17-macos-screen-snapshot/",
      "title": "OpenClaw macOS Node Gains Screen Snapshot Capability",
      "summary": "OpenClaw's macOS node now supports a screen.snapshot command, letting AI agents capture display content directly to enable new visual automation workflows.",
      "content_text": "The OpenClaw macOS node has a new trick: it can now take a screenshot of your display on demand. [PR #67954](https://github.com/openclaw/openclaw/pull/67954), contributed by BunsDev, adds a `screen.snapshot` command to the macOS node's IPC bridge — giving paired agents direct visual access to what's on screen.\n\n## What screen.snapshot Does\n\nWhen a macOS device is paired to an OpenClaw gateway, agents can invoke `screen.snapshot` to capture the contents of a connected monitor. The command is implemented in Swift using Apple's ScreenCaptureKit framework, the same technology powering macOS's built-in screen recording. Captured frames are encoded (JPEG or PNG), base64-encoded, and returned over the IPC bridge to the gateway where the agent can inspect or act on them.\n\nThe command accepts a handful of parameters:\n\n- **`maxWidth`** — resize the capture to fit within a maximum pixel width, reducing payload size for large displays\n- **`format`** — `jpeg` (default, compressed) or `png` (lossless, larger)\n- **Display selector** — target a specific monitor on multi-display setups\n\n## Use Cases\n\nThe addition unlocks a range of visual automation patterns:\n\n**Visual context for agents** — instead of describing what's on screen in a chat message, you can send a screenshot directly. Ask the agent \"what's wrong with this error dialog?\" and it has the full visual context.\n\n**Automated UI verification** — an agent running a deployment pipeline can snapshot the screen to confirm that a build artifact launched correctly or that a dashboard is showing expected values.\n\n**Remote monitoring** — check what's displayed on a paired Mac without physically accessing it, useful for headless or shared workstations.\n\n**Pairing with the browser tool** — combine screen snapshots with OpenClaw's built-in browser automation to build workflows that mix native macOS UI context with web-level interactions.\n\n## Security Considerations\n\nThe Aisle Security analysis flagged three medium-severity issues: OS error strings from ScreenCaptureKit being forwarded verbatim to remote callers (CWE-209), the `screen.snapshot` command proceeding on malformed params due to a silent decode fallback (CWE-20), and the potential for oversized PNG captures to cause excessive memory use (CWE-400).\n\nThese were noted and documented during review. The PR was merged with the expectation that targeted hardening follows in a subsequent iteration. In the meantime, users should ensure their gateway's device authentication is properly configured — the `screen.snapshot` command is only accessible to clients authorized to invoke node commands, so a properly locked-down gateway contains the exposure.\n\n## How to Use It\n\nYou'll need a paired macOS node running the latest OpenClaw macOS app. Once paired, the `screen.snapshot` tool becomes available in agent sessions connected to that node. The [OpenClaw nodes documentation](https://docs.openclaw.ai/nodes) covers the pairing flow.\n\nThe feature is available on the current `main` branch and will ship as part of the next numbered release. It rounds out the macOS node's growing set of device-level capabilities — alongside camera access, clipboard integration, and local app control — making paired Mac setups considerably more powerful as agent execution environments.",
      "content_html": "<p>The OpenClaw macOS node has a new trick: it can now take a screenshot of your display on demand. <a href=\"https://github.com/openclaw/openclaw/pull/67954\">PR #67954</a>, contributed by BunsDev, adds a <code>screen.snapshot</code> command to the macOS node's IPC bridge — giving paired agents direct visual access to what's on screen.</p><h2>What screen.snapshot Does</h2><p>When a macOS device is paired to an OpenClaw gateway, agents can invoke <code>screen.snapshot</code> to capture the contents of a connected monitor. The command is implemented in Swift using Apple's ScreenCaptureKit framework, the same technology powering macOS's built-in screen recording. Captured frames are encoded (JPEG or PNG), base64-encoded, and returned over the IPC bridge to the gateway where the agent can inspect or act on them.</p><p>The command accepts a handful of parameters:</p><ul><li><strong><code>maxWidth</code></strong> — resize the capture to fit within a maximum pixel width, reducing payload size for large displays</li><li><strong><code>format</code></strong> — <code>jpeg</code> (default, compressed) or <code>png</code> (lossless, larger)</li><li><strong>Display selector</strong> — target a specific monitor on multi-display setups</li></ul><h2>Use Cases</h2><p>The addition unlocks a range of visual automation patterns:</p><p><strong>Visual context for agents</strong> — instead of describing what's on screen in a chat message, you can send a screenshot directly. Ask the agent \"what's wrong with this error dialog?\" and it has the full visual context.</p><p><strong>Automated UI verification</strong> — an agent running a deployment pipeline can snapshot the screen to confirm that a build artifact launched correctly or that a dashboard is showing expected values.</p><p><strong>Remote monitoring</strong> — check what's displayed on a paired Mac without physically accessing it, useful for headless or shared workstations.</p><p><strong>Pairing with the browser tool</strong> — combine screen snapshots with OpenClaw's built-in browser automation to build workflows that mix native macOS UI context with web-level interactions.</p><h2>Security Considerations</h2><p>The Aisle Security analysis flagged three medium-severity issues: OS error strings from ScreenCaptureKit being forwarded verbatim to remote callers (CWE-209), the <code>screen.snapshot</code> command proceeding on malformed params due to a silent decode fallback (CWE-20), and the potential for oversized PNG captures to cause excessive memory use (CWE-400).</p><p>These were noted and documented during review. The PR was merged with the expectation that targeted hardening follows in a subsequent iteration. In the meantime, users should ensure their gateway's device authentication is properly configured — the <code>screen.snapshot</code> command is only accessible to clients authorized to invoke node commands, so a properly locked-down gateway contains the exposure.</p><h2>How to Use It</h2><p>You'll need a paired macOS node running the latest OpenClaw macOS app. Once paired, the <code>screen.snapshot</code> tool becomes available in agent sessions connected to that node. The <a href=\"https://docs.openclaw.ai/nodes\">OpenClaw nodes documentation</a> covers the pairing flow.</p><p>The feature is available on the current <code>main</code> branch and will ship as part of the next numbered release. It rounds out the macOS node's growing set of device-level capabilities — alongside camera access, clipboard integration, and local app control — making paired Mac setups considerably more powerful as agent execution environments.</p>",
      "date_published": "2026-04-17T08:05:00.000Z",
      "date_modified": "2026-04-17T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-17-macos-screen-snapshot.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-17-oauth-multi-agent-refresh-fix/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-17-oauth-multi-agent-refresh-fix/",
      "title": "OpenClaw Fixes OAuth Token Refresh Race in Multi-Agent Setups",
      "summary": "A new cross-agent file lock in OpenClaw serializes OAuth token refreshes, eliminating the refresh_token_reused storms that plagued large Codex deployments.",
      "content_text": "Multi-agent OpenClaw deployments — particularly those running large pools of Codex agents against a shared GitHub Copilot OAuth profile — have long been plagued by a subtle but disruptive failure mode. A new fix merged today puts an end to it.\n\n[PR #67876](https://github.com/openclaw/openclaw/pull/67876), contributed by visionik and co-authored with HeroSizy, lands a cross-agent OAuth refresh serialization layer that resolves the long-tracked issue [#26322](https://github.com/openclaw/openclaw/issues/26322).\n\n## The Problem: Refresh Token Storms\n\nWhen a shared OAuth token expires and multiple agents hold it simultaneously, every agent races to refresh it at the same time. Providers like GitHub rotate the refresh token on each successful call, which means only the *first* agent's refresh wins. Every other agent receives a `refresh_token_reused` HTTP 401 — and cascades into model fallback, even though fresh credentials are now available.\n\nFor anyone running 10–20 Codex agents sharing a single Copilot profile, this created a token-expiry storm roughly every 12 hours: a burst of cascading fallback errors that required manual recovery.\n\n## The Fix: Three Layers of Serialization\n\nThe solution stacks three distinct serialization mechanisms:\n\n**1. Cross-process file lock**  \nA new lock path at `$STATE_DIR/locks/oauth-refresh/sha256-<hex>` ensures that agents across separate OS processes queue up on a single coordination point before touching a shared profile's token. The lock key now includes both the provider name and the profile ID (NUL-separated to prevent concatenation collisions), so distinct providers never needlessly block each other.\n\n**2. In-process Promise queue**  \nWithin a single OpenClaw process, a keyed Promise chain prevents concurrent async calls from slipping past the file lock simultaneously — a scenario that's easy to hit in the async JS runtime when many tool calls land at once.\n\n**3. Credential mirroring with identity validation**  \nAfter a successful refresh, the fresh credential is mirrored back into the main-agent store. Peers that acquire the lock afterward skip their own HTTP refresh and *adopt* the already-fresh credential instead. This collapses N serialized refreshes into **1 real refresh + (N-1) cheap adoptions**.\n\nA new `isSafeToCopyOAuthIdentity` gate guards the mirror and adoption paths: it allows credential copies only when there is no positive identity mismatch *and* the incoming credential carries at least as much identity evidence (`accountId`, `email`) as the existing entry. This prevents a misconfigured sub-agent from overwriting the main store with foreign-account tokens — closing a CWE-284-class authorization issue that Aisle flagged during review.\n\n## Lock Timeout Safety\n\nThe file lock carries a 3-minute stale timeout (`stale = 180,000ms`). A hard `OAUTH_REFRESH_CALL_TIMEOUT_MS = 120,000ms` cap on the underlying HTTP call guarantees the invariant: every legitimate refresh completes before the lock can be reclaimed by a waiting peer. The two constants are explicitly tested to enforce the `call_timeout < stale` relationship.\n\n## Who Is Affected?\n\nThis fix is most impactful for:\n\n- **Multi-agent Codex setups** — 10+ `openai-codex` agents sharing a GitHub Copilot OAuth profile\n- **Team deployments** — isolated agent sessions that share a single provider account\n- **Automated pipelines** — long-running agent pools with periodic token expiration\n\nSingle-agent installs will see no behavioral change. The lock overhead is negligible for the common case.\n\n## Test Coverage\n\nThe PR ships 80+ new tests across 11 files, including seeded-RNG fuzz suites covering ~4,500 adversarial inputs. The headline test — `oauth.concurrent-20-agents.test.ts` — fires 20 agents simultaneously against a single profile and asserts that exactly one HTTP refresh call fires while all 20 agents receive the same fresh token. Lock path safety is validated against 2,700+ adversarial profile ID inputs.\n\nThe fix is available in the current main branch and will ship in the next OpenClaw release.",
      "content_html": "<p>Multi-agent OpenClaw deployments — particularly those running large pools of Codex agents against a shared GitHub Copilot OAuth profile — have long been plagued by a subtle but disruptive failure mode. A new fix merged today puts an end to it.</p><p><a href=\"https://github.com/openclaw/openclaw/pull/67876\">PR #67876</a>, contributed by visionik and co-authored with HeroSizy, lands a cross-agent OAuth refresh serialization layer that resolves the long-tracked issue <a href=\"https://github.com/openclaw/openclaw/issues/26322\">#26322</a>.</p><h2>The Problem: Refresh Token Storms</h2><p>When a shared OAuth token expires and multiple agents hold it simultaneously, every agent races to refresh it at the same time. Providers like GitHub rotate the refresh token on each successful call, which means only the <em>first</em> agent's refresh wins. Every other agent receives a <code>refresh_token_reused</code> HTTP 401 — and cascades into model fallback, even though fresh credentials are now available.</p><p>For anyone running 10–20 Codex agents sharing a single Copilot profile, this created a token-expiry storm roughly every 12 hours: a burst of cascading fallback errors that required manual recovery.</p><h2>The Fix: Three Layers of Serialization</h2><p>The solution stacks three distinct serialization mechanisms:</p><p><strong>1. Cross-process file lock</strong>  <br />A new lock path at <code>$STATE_DIR/locks/oauth-refresh/sha256-<hex></code> ensures that agents across separate OS processes queue up on a single coordination point before touching a shared profile's token. The lock key now includes both the provider name and the profile ID (NUL-separated to prevent concatenation collisions), so distinct providers never needlessly block each other.</p><p><strong>2. In-process Promise queue</strong>  <br />Within a single OpenClaw process, a keyed Promise chain prevents concurrent async calls from slipping past the file lock simultaneously — a scenario that's easy to hit in the async JS runtime when many tool calls land at once.</p><p><strong>3. Credential mirroring with identity validation</strong>  <br />After a successful refresh, the fresh credential is mirrored back into the main-agent store. Peers that acquire the lock afterward skip their own HTTP refresh and <em>adopt</em> the already-fresh credential instead. This collapses N serialized refreshes into <strong>1 real refresh + (N-1) cheap adoptions</strong>.</p><p>A new <code>isSafeToCopyOAuthIdentity</code> gate guards the mirror and adoption paths: it allows credential copies only when there is no positive identity mismatch <em>and</em> the incoming credential carries at least as much identity evidence (<code>accountId</code>, <code>email</code>) as the existing entry. This prevents a misconfigured sub-agent from overwriting the main store with foreign-account tokens — closing a CWE-284-class authorization issue that Aisle flagged during review.</p><h2>Lock Timeout Safety</h2><p>The file lock carries a 3-minute stale timeout (<code>stale = 180,000ms</code>). A hard <code>OAUTH_REFRESH_CALL_TIMEOUT_MS = 120,000ms</code> cap on the underlying HTTP call guarantees the invariant: every legitimate refresh completes before the lock can be reclaimed by a waiting peer. The two constants are explicitly tested to enforce the <code>call_timeout < stale</code> relationship.</p><h2>Who Is Affected?</h2><p>This fix is most impactful for:</p><ul><li><strong>Multi-agent Codex setups</strong> — 10+ <code>openai-codex</code> agents sharing a GitHub Copilot OAuth profile</li><li><strong>Team deployments</strong> — isolated agent sessions that share a single provider account</li><li><strong>Automated pipelines</strong> — long-running agent pools with periodic token expiration</li></ul><p>Single-agent installs will see no behavioral change. The lock overhead is negligible for the common case.</p><h2>Test Coverage</h2><p>The PR ships 80+ new tests across 11 files, including seeded-RNG fuzz suites covering ~4,500 adversarial inputs. The headline test — <code>oauth.concurrent-20-agents.test.ts</code> — fires 20 agents simultaneously against a single profile and asserts that exactly one HTTP refresh call fires while all 20 agents receive the same fresh token. Lock path safety is validated against 2,700+ adversarial profile ID inputs.</p><p>The fix is available in the current main branch and will ship in the next OpenClaw release.</p>",
      "date_published": "2026-04-17T08:00:00.000Z",
      "date_modified": "2026-04-17T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-17-oauth-multi-agent-refresh-fix.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-16-ecosystem-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-16-ecosystem-roundup/",
      "title": "OpenClaw Ecosystem Roundup: OpenTalon, Agent Hubs, and April Momentum",
      "summary": "OpenTalon debuts as a rival on Hacker News, agent-hub unifies multi-machine agent access, and the OpenClaw community keeps building in April 2026.",
      "content_text": "April is turning into a busy month for the OpenClaw community. Beyond the core releases, a cluster of new tools and discussions surfaced this week that paint a clear picture: the ecosystem is growing up fast.\n\n## OpenTalon: A \"Robust Alternative\" Appears on Hacker News\n\nA [Show HN post](https://news.ycombinator.com/item?id=47789664) this morning introduced **OpenTalon** ([github.com/opentalon/opentalon](https://github.com/opentalon/opentalon)), billed as a \"robust alternative to OpenClaw.\" The project is early — three points and a handful of comments — but the framing is notable. Rather than positioning itself as a wrapper or extension, OpenTalon is pitching a ground-up replacement.\n\nThis is exactly the kind of competitive signal that marks a maturing open-source category. OpenClaw's architecture has become the reference point that new agent runtimes define themselves against, for better or worse. Projects that frame themselves as alternatives implicitly validate that the original solved something worth solving.\n\nWhether OpenTalon builds meaningful traction is worth watching over the coming weeks.\n\n## agent-hub: One Interface for All Your Agents\n\nAnother HN debut today: **agent-hub** ([github.com/Potarix/agent-hub](https://github.com/Potarix/agent-hub)), from [@YoungGato](https://news.ycombinator.com/item?id=47799990). The premise is simple and useful — a single open-source interface for talking to agents running locally or on remote machines, with explicit support for Claude Code, Codex, Hermes, and OpenClaw.\n\nThe pain point it addresses is real. If you run agents across multiple machines, context-switching between them is friction-heavy. Existing orchestration tools like Conductor feel tied to specific workflows (Git-based, coding-centric), leaving multi-agent setups without a clean hub. Agent-hub is a weekend project, rough around the edges, but the author has ambitions including a mobile companion app.\n\nIt also reinforces a broader trend: OpenClaw is increasingly being listed alongside Claude Code and Codex as a first-class agent runtime in third-party tool descriptions.\n\n## Eustella: Building European AI With OpenClaw in Mind\n\nA [brief HN submission](https://news.ycombinator.com/item?id=47789423) introduced **Eustella** ([eustella.com](https://eustella.com)), pitched as a \"ChatGPT for Europeans\" built with OpenClaw architecture as a reference. Details are sparse, but the signal matters: international builders are treating OpenClaw's design as something worth consciously emulating — not just installing.\n\n## The \"Ask HN: Who Is Using OpenClaw?\" Thread Is Still Going\n\nIf you missed it: yesterday's [Ask HN thread](https://news.ycombinator.com/item?id=47783940) hit 318 points and over 360 comments — among the most engaged OpenClaw discussions on Hacker News to date. The starter comment was skeptical (\"I don't use it personally...\") which seems to have triggered a wave of people sharing real setups.\n\nUse cases that came up repeatedly: home automation, personal email triage, coding assistance, custom Slack bots, and personal CRM. Worth a scroll to get a ground-level view of where real-world deployments actually live in 2026.\n\n## What This Week's Ecosystem Activity Signals\n\nThe pattern emerging from this week is consistent: OpenClaw is moving from \"thing developers install\" to \"architecture that other projects are defined by.\"\n\n- **OpenTalon** frames itself against OpenClaw\n- **Mercury** (a16z-backed agent orchestration platform) listed it alongside Claude Code in their HN pitch last week\n- **ArmorClaw** built a cryptographic intent-assurance plugin on top of it\n- **agent-hub** lists it as a first-class supported runtime\n\nThat kind of gravity comes from genuine adoption. It's not manufactured.\n\nThe next few weeks should be interesting as Gemini TTS from today's release starts reaching users and more builders test the new security hardening in production environments.",
      "content_html": "<p>April is turning into a busy month for the OpenClaw community. Beyond the core releases, a cluster of new tools and discussions surfaced this week that paint a clear picture: the ecosystem is growing up fast.</p><h2>OpenTalon: A \"Robust Alternative\" Appears on Hacker News</h2><p>A <a href=\"https://news.ycombinator.com/item?id=47789664\">Show HN post</a> this morning introduced <strong>OpenTalon</strong> (<a href=\"https://github.com/opentalon/opentalon\">github.com/opentalon/opentalon</a>), billed as a \"robust alternative to OpenClaw.\" The project is early — three points and a handful of comments — but the framing is notable. Rather than positioning itself as a wrapper or extension, OpenTalon is pitching a ground-up replacement.</p><p>This is exactly the kind of competitive signal that marks a maturing open-source category. OpenClaw's architecture has become the reference point that new agent runtimes define themselves against, for better or worse. Projects that frame themselves as alternatives implicitly validate that the original solved something worth solving.</p><p>Whether OpenTalon builds meaningful traction is worth watching over the coming weeks.</p><h2>agent-hub: One Interface for All Your Agents</h2><p>Another HN debut today: <strong>agent-hub</strong> (<a href=\"https://github.com/Potarix/agent-hub\">github.com/Potarix/agent-hub</a>), from <a href=\"https://news.ycombinator.com/item?id=47799990\">@YoungGato</a>. The premise is simple and useful — a single open-source interface for talking to agents running locally or on remote machines, with explicit support for Claude Code, Codex, Hermes, and OpenClaw.</p><p>The pain point it addresses is real. If you run agents across multiple machines, context-switching between them is friction-heavy. Existing orchestration tools like Conductor feel tied to specific workflows (Git-based, coding-centric), leaving multi-agent setups without a clean hub. Agent-hub is a weekend project, rough around the edges, but the author has ambitions including a mobile companion app.</p><p>It also reinforces a broader trend: OpenClaw is increasingly being listed alongside Claude Code and Codex as a first-class agent runtime in third-party tool descriptions.</p><h2>Eustella: Building European AI With OpenClaw in Mind</h2><p>A <a href=\"https://news.ycombinator.com/item?id=47789423\">brief HN submission</a> introduced <strong>Eustella</strong> (<a href=\"https://eustella.com\">eustella.com</a>), pitched as a \"ChatGPT for Europeans\" built with OpenClaw architecture as a reference. Details are sparse, but the signal matters: international builders are treating OpenClaw's design as something worth consciously emulating — not just installing.</p><h2>The \"Ask HN: Who Is Using OpenClaw?\" Thread Is Still Going</h2><p>If you missed it: yesterday's <a href=\"https://news.ycombinator.com/item?id=47783940\">Ask HN thread</a> hit 318 points and over 360 comments — among the most engaged OpenClaw discussions on Hacker News to date. The starter comment was skeptical (\"I don't use it personally...\") which seems to have triggered a wave of people sharing real setups.</p><p>Use cases that came up repeatedly: home automation, personal email triage, coding assistance, custom Slack bots, and personal CRM. Worth a scroll to get a ground-level view of where real-world deployments actually live in 2026.</p><h2>What This Week's Ecosystem Activity Signals</h2><p>The pattern emerging from this week is consistent: OpenClaw is moving from \"thing developers install\" to \"architecture that other projects are defined by.\"</p><ul><li><strong>OpenTalon</strong> frames itself against OpenClaw</li><li><strong>Mercury</strong> (a16z-backed agent orchestration platform) listed it alongside Claude Code in their HN pitch last week</li><li><strong>ArmorClaw</strong> built a cryptographic intent-assurance plugin on top of it</li><li><strong>agent-hub</strong> lists it as a first-class supported runtime</li></ul><p>That kind of gravity comes from genuine adoption. It's not manufactured.</p><p>The next few weeks should be interesting as Gemini TTS from today's release starts reaching users and more builders test the new security hardening in production environments.</p>",
      "date_published": "2026-04-16T23:00:00.000Z",
      "date_modified": "2026-04-16T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-16-ecosystem-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-16-gemini-tts-security-fix/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-16-gemini-tts-security-fix/",
      "title": "OpenClaw v2026.4.16 Brings Gemini TTS and Security Hardening",
      "summary": "OpenClaw v2026.4.16-beta.1 ships Google Gemini text-to-speech, Claude Opus 4.7 defaults, and a fix blocking tool name injection via client definitions.",
      "content_text": "OpenClaw's April 16th beta release lands with a notable new channel capability, a model default bump, and a security fix that operators running untrusted environments should pay attention to.\n\n## Gemini Text-to-Speech Is Now Bundled\n\nThe headlining addition is Google Gemini TTS support, contributed by [@barronlroth](https://github.com/barronlroth) in [#67515](https://github.com/openclaw/openclaw/pull/67515). The bundled Google plugin now includes a full TTS provider with:\n\n- **WAV reply output** for standard voice responses\n- **PCM telephony output** for voice-call integration pipelines\n- Voice selection and provider registration\n- Full setup docs and guidance\n\nIf you're running OpenClaw as a voice assistant or integrating it into telephony workflows, this opens up Gemini's TTS quality as a first-class option alongside existing providers. The PR wires it directly into the bundled Google plugin — no extra package required, just configure your Gemini credentials and select the voice.\n\n## Claude Opus 4.7 Is the New Anthropic Default\n\nDefault model selections for Anthropic integrations — including Claude CLI defaults and bundled image understanding — have been updated to Claude Opus 4.7. Opus aliases resolve correctly to the new version. If you have prompt-tuned setups relying on a previous default model behavior, it's worth a test run after updating.\n\n## Gateway Security: Tool Name Collision Prevention\n\nA quieter but meaningful security fix landed in [#67303](https://github.com/openclaw/openclaw/pull/67303). The gateway now:\n\n1. **Anchors MEDIA: passthrough trust** to the exact raw names of registered built-in tools for the current run only\n2. **Rejects client tool definitions** whose names normalize-collide with any built-in or with another client-supplied tool in the same request\n\nBoth JSON and SSE paths return `400 invalid_request_error` on collision. Previously, a client-supplied tool with a name that normalized to match a built-in could inherit that built-in's local-media trust level. That escape route is now closed.\n\nThis matters most in multi-tenant setups or environments where MCP tool definitions arrive from third-party or untrusted sources.\n\n## BlueBubbles and Node 22+ Image Attachments Fixed\n\nUsers running BlueBubbles on Node 22+ were hitting broken inbound image attachment downloads. The fix ([#67510](https://github.com/openclaw/openclaw/pull/67510)) strips incompatible bundled-undici dispatchers from the non-SSRF fetch path, adds event-type-aware dedup keys so attachment follow-ups aren't rejected as duplicates, and adds a retry pass against the BB API when the initial webhook arrives with an empty array. Fixes [#64105](https://github.com/openclaw/openclaw/issues/64105), [#61861](https://github.com/openclaw/openclaw/issues/61861), and [#65430](https://github.com/openclaw/openclaw/issues/65430).\n\n## Other Notable Fixes\n\n- **CLI/update** ([#66959](https://github.com/openclaw/openclaw/pull/66959)): Stale packaged dist chunks are pruned after npm upgrades, and downgrade/verify inventory checks are now compat-safe — fixing global upgrades that failed with stale chunk imports.\n- **OpenAI Codex models** ([#67635](https://github.com/openclaw/openclaw/pull/67635)): Legacy `openai-codex` rows with missing API metadata or stale `https://chatgpt.com/backend-api/v1` references now self-heal to the canonical Codex transport, stopping requests from routing through broken HTML/Cloudflare paths.\n- **Agents/skills**: Available skills entries are now sorted by name after merging sources, so `skills.load.extraDirs` ordering no longer shifts prompt-cache prefixes across restarts ([#64198](https://github.com/openclaw/openclaw/pull/64198)).\n\n## How to Update\n\n```bash\nnpm install -g openclaw\nopenclaw --version\n```\n\nFull release notes are on the [OpenClaw GitHub releases page](https://github.com/openclaw/openclaw/releases).",
      "content_html": "<p>OpenClaw's April 16th beta release lands with a notable new channel capability, a model default bump, and a security fix that operators running untrusted environments should pay attention to.</p><h2>Gemini Text-to-Speech Is Now Bundled</h2><p>The headlining addition is Google Gemini TTS support, contributed by <a href=\"https://github.com/barronlroth\">@barronlroth</a> in <a href=\"https://github.com/openclaw/openclaw/pull/67515\">#67515</a>. The bundled Google plugin now includes a full TTS provider with:</p><ul><li><strong>WAV reply output</strong> for standard voice responses</li><li><strong>PCM telephony output</strong> for voice-call integration pipelines</li><li>Voice selection and provider registration</li><li>Full setup docs and guidance</li></ul><p>If you're running OpenClaw as a voice assistant or integrating it into telephony workflows, this opens up Gemini's TTS quality as a first-class option alongside existing providers. The PR wires it directly into the bundled Google plugin — no extra package required, just configure your Gemini credentials and select the voice.</p><h2>Claude Opus 4.7 Is the New Anthropic Default</h2><p>Default model selections for Anthropic integrations — including Claude CLI defaults and bundled image understanding — have been updated to Claude Opus 4.7. Opus aliases resolve correctly to the new version. If you have prompt-tuned setups relying on a previous default model behavior, it's worth a test run after updating.</p><h2>Gateway Security: Tool Name Collision Prevention</h2><p>A quieter but meaningful security fix landed in <a href=\"https://github.com/openclaw/openclaw/pull/67303\">#67303</a>. The gateway now:</p><ol><li><strong>Anchors MEDIA: passthrough trust</strong> to the exact raw names of registered built-in tools for the current run only</li><li><strong>Rejects client tool definitions</strong> whose names normalize-collide with any built-in or with another client-supplied tool in the same request</li></ol><p>Both JSON and SSE paths return <code>400 invalid_request_error</code> on collision. Previously, a client-supplied tool with a name that normalized to match a built-in could inherit that built-in's local-media trust level. That escape route is now closed.</p><p>This matters most in multi-tenant setups or environments where MCP tool definitions arrive from third-party or untrusted sources.</p><h2>BlueBubbles and Node 22+ Image Attachments Fixed</h2><p>Users running BlueBubbles on Node 22+ were hitting broken inbound image attachment downloads. The fix (<a href=\"https://github.com/openclaw/openclaw/pull/67510\">#67510</a>) strips incompatible bundled-undici dispatchers from the non-SSRF fetch path, adds event-type-aware dedup keys so attachment follow-ups aren't rejected as duplicates, and adds a retry pass against the BB API when the initial webhook arrives with an empty array. Fixes <a href=\"https://github.com/openclaw/openclaw/issues/64105\">#64105</a>, <a href=\"https://github.com/openclaw/openclaw/issues/61861\">#61861</a>, and <a href=\"https://github.com/openclaw/openclaw/issues/65430\">#65430</a>.</p><h2>Other Notable Fixes</h2><ul><li><strong>CLI/update</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/66959\">#66959</a>): Stale packaged dist chunks are pruned after npm upgrades, and downgrade/verify inventory checks are now compat-safe — fixing global upgrades that failed with stale chunk imports.</li><li><strong>OpenAI Codex models</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/67635\">#67635</a>): Legacy <code>openai-codex</code> rows with missing API metadata or stale <code>https://chatgpt.com/backend-api/v1</code> references now self-heal to the canonical Codex transport, stopping requests from routing through broken HTML/Cloudflare paths.</li><li><strong>Agents/skills</strong>: Available skills entries are now sorted by name after merging sources, so <code>skills.load.extraDirs</code> ordering no longer shifts prompt-cache prefixes across restarts (<a href=\"https://github.com/openclaw/openclaw/pull/64198\">#64198</a>).</li></ul><h2>How to Update</h2><p>``<code>bash<br />npm install -g openclaw<br />openclaw --version<br /></code>``</p><p>Full release notes are on the <a href=\"https://github.com/openclaw/openclaw/releases\">OpenClaw GitHub releases page</a>.</p>",
      "date_published": "2026-04-16T23:00:00.000Z",
      "date_modified": "2026-04-16T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-16-gemini-tts-security-fix.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-16-cli-transcripts-ollama-fix/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-16-cli-transcripts-ollama-fix/",
      "title": "OpenClaw CLI Transcript Persistence and Ollama Provider Fix Ship Today",
      "summary": "Two PRs merged overnight bring CLI agent session history to OpenClaw and fix a frustrating Ollama 404 error caused by an un-stripped provider prefix.",
      "content_text": "Two pull requests merged in the early hours of April 16 bring a meaningful new capability and a well-targeted bug fix to OpenClaw. Together they improve the experience for anyone running CLI-backed agents like Codex or Claude Code through the gateway, and for anyone using Ollama as a local model provider.\n\n## CLI Agent Turns Now Persist to Session Transcripts\n\n[PR #67490](https://github.com/openclaw/openclaw/pull/67490) by [@obviyus](https://github.com/obviyus) adds `persistCliTurnTranscript()` to the attempt execution layer. When a CLI-backed agent (one where `result.meta.executionTrace?.runner === \"cli\"`) completes a turn, OpenClaw now writes both the user prompt and the assistant reply into the session transcript via `SessionManager.appendMessage`.\n\n### What This Unlocks\n\nBefore this change, conversations with CLI agents like Codex effectively vanished after each session — OpenClaw had no durable record of what was asked or answered. With transcripts enabled, you get:\n\n- **Session recall**: the gateway can reference earlier CLI turns for context\n- **Dreaming and memory ingestion**: CLI sessions become eligible for OpenClaw's memory consolidation pipeline\n- **Audit trails**: useful for team setups where multiple people share a gateway\n\n### Security Considerations\n\nAn automated security review flagged three medium-severity concerns worth knowing about:\n\n1. **Unbounded payload concatenation** — if a CLI provider returns an extremely large output, the current implementation concatenates everything in memory before writing. The reviewer recommends capping the total character budget (e.g. 50,000 chars) before persistence.\n\n2. **Untrusted metadata in transcripts** — provider, model, and usage data come from the CLI agent's own `agentMeta` output, which means a misbehaving agent could spoof billing entries. Recommended fix: sanitize and clamp token counts; derive provider from config rather than agent output.\n\n3. **No secret redaction** — CLI agents can read local files and print credentials. The PR doesn't scrub transcript content before writing to disk. A follow-up opt-in flag and secret-scrubbing pass is recommended.\n\nNone of these are show-stoppers for most setups, but they are worth tracking for production deployments with sensitive environments. Watch the [PR thread](https://github.com/openclaw/openclaw/pull/67490) for follow-up hardening.\n\n## Ollama Model IDs No Longer Cause 404 Errors\n\n[PR #67457](https://github.com/openclaw/openclaw/pull/67457) by [@suboss87](https://github.com/suboss87) fixes a quiet but frustrating bug in the Ollama chat request path.\n\nWhen OpenClaw is configured to use an Ollama model — either via setup or by setting the primary model to `ollama/<model-name>` — the model ID was passed directly to the Ollama API without stripping the `ollama/` prefix. The Ollama API does not understand the prefixed format, so every request returned a 404.\n\n```\n# Before: sent to Ollama API as-is\nollama/llama3.2\n\n# After: prefix stripped before the request\nllama3.2\n```\n\nInterestingly, the embedding path (`normalizeEmbeddingModel` at line 100) already handled this correctly. Only the chat stream path was affected. The fix brings the chat path into alignment with the embedding path.\n\nThis closes [issue #67435](https://github.com/openclaw/openclaw/issues/67435) and should resolve 404 failures that appeared silently even with a correctly configured Ollama endpoint.\n\n## How to Get These Changes\n\nBoth fixes are in the `main` branch and will land in the next tagged release. Monitor the [releases page](https://github.com/openclaw/openclaw/releases) for the next beta or stable build. Once released:\n\n```bash\nnpm install -g openclaw@latest\nopenclaw gateway restart\n```\n\nIf you're running Ollama and hitting 404s today, these are likely your culprit — the fix is confirmed merged.",
      "content_html": "<p>Two pull requests merged in the early hours of April 16 bring a meaningful new capability and a well-targeted bug fix to OpenClaw. Together they improve the experience for anyone running CLI-backed agents like Codex or Claude Code through the gateway, and for anyone using Ollama as a local model provider.</p><h2>CLI Agent Turns Now Persist to Session Transcripts</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/67490\">PR #67490</a> by <a href=\"https://github.com/obviyus\">@obviyus</a> adds <code>persistCliTurnTranscript()</code> to the attempt execution layer. When a CLI-backed agent (one where <code>result.meta.executionTrace?.runner === \"cli\"</code>) completes a turn, OpenClaw now writes both the user prompt and the assistant reply into the session transcript via <code>SessionManager.appendMessage</code>.</p><h3>What This Unlocks</h3><p>Before this change, conversations with CLI agents like Codex effectively vanished after each session — OpenClaw had no durable record of what was asked or answered. With transcripts enabled, you get:</p><ul><li><strong>Session recall</strong>: the gateway can reference earlier CLI turns for context</li><li><strong>Dreaming and memory ingestion</strong>: CLI sessions become eligible for OpenClaw's memory consolidation pipeline</li><li><strong>Audit trails</strong>: useful for team setups where multiple people share a gateway</li></ul><h3>Security Considerations</h3><p>An automated security review flagged three medium-severity concerns worth knowing about:</p><ol><li><strong>Unbounded payload concatenation</strong> — if a CLI provider returns an extremely large output, the current implementation concatenates everything in memory before writing. The reviewer recommends capping the total character budget (e.g. 50,000 chars) before persistence.</li></ol><ol><li><strong>Untrusted metadata in transcripts</strong> — provider, model, and usage data come from the CLI agent's own <code>agentMeta</code> output, which means a misbehaving agent could spoof billing entries. Recommended fix: sanitize and clamp token counts; derive provider from config rather than agent output.</li></ol><ol><li><strong>No secret redaction</strong> — CLI agents can read local files and print credentials. The PR doesn't scrub transcript content before writing to disk. A follow-up opt-in flag and secret-scrubbing pass is recommended.</li></ol><p>None of these are show-stoppers for most setups, but they are worth tracking for production deployments with sensitive environments. Watch the <a href=\"https://github.com/openclaw/openclaw/pull/67490\">PR thread</a> for follow-up hardening.</p><h2>Ollama Model IDs No Longer Cause 404 Errors</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/67457\">PR #67457</a> by <a href=\"https://github.com/suboss87\">@suboss87</a> fixes a quiet but frustrating bug in the Ollama chat request path.</p><p>When OpenClaw is configured to use an Ollama model — either via setup or by setting the primary model to <code>ollama/<model-name></code> — the model ID was passed directly to the Ollama API without stripping the <code>ollama/</code> prefix. The Ollama API does not understand the prefixed format, so every request returned a 404.</p><p>``<code><br /><h1>Before: sent to Ollama API as-is</h1><br />ollama/llama3.2</p><h1>After: prefix stripped before the request</h1>\nllama3.2\n</code>`<code><p>Interestingly, the embedding path (</code>normalizeEmbeddingModel<code> at line 100) already handled this correctly. Only the chat stream path was affected. The fix brings the chat path into alignment with the embedding path.</p><p>This closes <a href=\"https://github.com/openclaw/openclaw/issues/67435\">issue #67435</a> and should resolve 404 failures that appeared silently even with a correctly configured Ollama endpoint.</p><h2>How to Get These Changes</h2><p>Both fixes are in the </code>main<code> branch and will land in the next tagged release. Monitor the <a href=\"https://github.com/openclaw/openclaw/releases\">releases page</a> for the next beta or stable build. Once released:</p><p></code>`<code>bash<br />npm install -g openclaw@latest<br />openclaw gateway restart<br /></code>``</p><p>If you're running Ollama and hitting 404s today, these are likely your culprit — the fix is confirmed merged.</p>",
      "date_published": "2026-04-16T08:05:00.000Z",
      "date_modified": "2026-04-16T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-16-cli-transcripts-ollama-fix.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-16-msteams-security-hardening/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-16-msteams-security-hardening/",
      "title": "OpenClaw Patches Four Microsoft Teams Security Vulnerabilities",
      "summary": "A newly merged PR hardens the MS Teams extension against OData injection, SSRF, shell injection, and arbitrary role escalation — all in one sweep.",
      "content_text": "OpenClaw's Microsoft Teams integration received a targeted security hardening pass today with the merge of [PR #65841](https://github.com/openclaw/openclaw/pull/65841) by [@steipete](https://github.com/steipete). The change closes four distinct vulnerabilities in the Teams extension, covering injection, server-side request forgery, and privilege escalation vectors — all verified correct with a 5/5 confidence rating from Greptile's automated review.\n\n## What Was Fixed\n\n### OData User-ID Injection\n\nThe first fix addresses how user identifiers were passed into OData query strings inside the Teams extension. Without proper sanitization, a crafted user ID could manipulate the query structure and potentially leak or corrupt directory lookups. The patch ensures IDs are escaped before they reach the API layer.\n\n### Arbitrary Conversation-Member Role Values\n\nThe second vulnerability allowed unvalidated role values to be submitted when modifying conversation members. In practice this meant an attacker with appropriate channel access could attempt to escalate privileges by supplying unexpected role strings. The fix validates roles against an allowlist before they are forwarded to the Graph API.\n\n### SSRF via Private-IP DNS Bypass in Attachment Fetches\n\nThe third — and arguably most impactful — fix targets server-side request forgery in Teams attachment downloads. The extension fetches media attachments on behalf of users, and the previous implementation could be coerced into resolving and connecting to private IP ranges by supplying an attachment URL that DNS-resolved to an internal address.\n\nThe patch introduces a `resolveFn` parameter throughout `shared.ts`, defaulting to Node's built-in `lookup`, and validates the resolved IP against a blocklist before any HTTP connection is made. This ensures the SSRF guard runs on every fetch path, including redirects.\n\n```\nresolveAndValidateIP(initialHost, resolveFn)\n```\n\nThe fix mirrors the approach already taken in `attachments.test.ts` and brings `bot-framework.ts` into alignment.\n\n### Shell Injection in Delegated OAuth URL Opener\n\nThe fourth fix hardens the delegated OAuth flow used when Teams prompts for sign-in. The previous implementation passed the callback URL to a shell opener without sufficient escaping, creating a shell injection risk if the URL contained metacharacters. The updated code sanitizes the URL before it reaches the shell layer.\n\n## Why It Matters\n\nMicrosoft Teams is one of OpenClaw's most widely deployed enterprise channels, often running in environments where the gateway has access to internal networks and sensitive API credentials. SSRF vulnerabilities in this context are particularly dangerous because a gateway sitting inside a corporate perimeter can reach internal services that external attackers cannot.\n\nThe Greptile review flagged two minor follow-up items — dead `if (resolveFn)` guards in `shared.ts` that are now always truthy, and missing `resolveFn` mocks in `bot-framework.test.ts` that could cause test failures in air-gapped CI environments. Neither affects runtime security, but both are worth cleaning up in a follow-on PR.\n\n## Updating Your Installation\n\nIf you are running OpenClaw with the Microsoft Teams channel enabled, upgrade as soon as the next release ships. You can track when these fixes land in a tagged release on the [GitHub releases page](https://github.com/openclaw/openclaw/releases). No configuration changes are required — the fixes are internal to the attachment and auth flows.\n\n```bash\nnpm install -g openclaw@latest\nopenclaw gateway restart\n```\n\nThese fixes were contributed by [@steipete](https://github.com/steipete) and are consistent with OpenClaw's recent trend of AI-assisted security hardening across channel extensions.",
      "content_html": "<p>OpenClaw's Microsoft Teams integration received a targeted security hardening pass today with the merge of <a href=\"https://github.com/openclaw/openclaw/pull/65841\">PR #65841</a> by <a href=\"https://github.com/steipete\">@steipete</a>. The change closes four distinct vulnerabilities in the Teams extension, covering injection, server-side request forgery, and privilege escalation vectors — all verified correct with a 5/5 confidence rating from Greptile's automated review.</p><h2>What Was Fixed</h2><h3>OData User-ID Injection</h3><p>The first fix addresses how user identifiers were passed into OData query strings inside the Teams extension. Without proper sanitization, a crafted user ID could manipulate the query structure and potentially leak or corrupt directory lookups. The patch ensures IDs are escaped before they reach the API layer.</p><h3>Arbitrary Conversation-Member Role Values</h3><p>The second vulnerability allowed unvalidated role values to be submitted when modifying conversation members. In practice this meant an attacker with appropriate channel access could attempt to escalate privileges by supplying unexpected role strings. The fix validates roles against an allowlist before they are forwarded to the Graph API.</p><h3>SSRF via Private-IP DNS Bypass in Attachment Fetches</h3><p>The third — and arguably most impactful — fix targets server-side request forgery in Teams attachment downloads. The extension fetches media attachments on behalf of users, and the previous implementation could be coerced into resolving and connecting to private IP ranges by supplying an attachment URL that DNS-resolved to an internal address.</p><p>The patch introduces a <code>resolveFn</code> parameter throughout <code>shared.ts</code>, defaulting to Node's built-in <code>lookup</code>, and validates the resolved IP against a blocklist before any HTTP connection is made. This ensures the SSRF guard runs on every fetch path, including redirects.</p><p>``<code><br />resolveAndValidateIP(initialHost, resolveFn)<br /></code>`<code></p><p>The fix mirrors the approach already taken in </code>attachments.test.ts<code> and brings </code>bot-framework.ts<code> into alignment.</p><h3>Shell Injection in Delegated OAuth URL Opener</h3><p>The fourth fix hardens the delegated OAuth flow used when Teams prompts for sign-in. The previous implementation passed the callback URL to a shell opener without sufficient escaping, creating a shell injection risk if the URL contained metacharacters. The updated code sanitizes the URL before it reaches the shell layer.</p><h2>Why It Matters</h2><p>Microsoft Teams is one of OpenClaw's most widely deployed enterprise channels, often running in environments where the gateway has access to internal networks and sensitive API credentials. SSRF vulnerabilities in this context are particularly dangerous because a gateway sitting inside a corporate perimeter can reach internal services that external attackers cannot.</p><p>The Greptile review flagged two minor follow-up items — dead </code>if (resolveFn)<code> guards in </code>shared.ts<code> that are now always truthy, and missing </code>resolveFn<code> mocks in </code>bot-framework.test.ts<code> that could cause test failures in air-gapped CI environments. Neither affects runtime security, but both are worth cleaning up in a follow-on PR.</p><h2>Updating Your Installation</h2><p>If you are running OpenClaw with the Microsoft Teams channel enabled, upgrade as soon as the next release ships. You can track when these fixes land in a tagged release on the <a href=\"https://github.com/openclaw/openclaw/releases\">GitHub releases page</a>. No configuration changes are required — the fixes are internal to the attachment and auth flows.</p><p></code>`<code>bash<br />npm install -g openclaw@latest<br />openclaw gateway restart<br /></code>``</p><p>These fixes were contributed by <a href=\"https://github.com/steipete\">@steipete</a> and are consistent with OpenClaw's recent trend of AI-assisted security hardening across channel extensions.</p>",
      "date_published": "2026-04-16T08:00:00.000Z",
      "date_modified": "2026-04-16T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-16-msteams-security-hardening.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-15-community-roundup/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-15-community-roundup/",
      "title": "OpenClaw Community Roundup: Anthropic Drama, HN Front Page, ArmorClaw",
      "summary": "Anthropic briefly banned OpenClaw's creator this week, an HN thread asking who uses OpenClaw hit 138 points, and ArmorClaw launched intent assurance for agents.",
      "content_text": "A busy week in the OpenClaw ecosystem. Peter Steinberger's Anthropic account got suspended and reinstated in a matter of hours, an \"Ask HN: Who is using OpenClaw?\" thread hit Hacker News front page with 172 comments, and a new community-built plugin called ArmorClaw launched on Product Hunt. Here is the breakdown.\n\n## Anthropic Briefly Bans OpenClaw's Creator — Then Reinstates Him\n\n[TechCrunch reported](https://techcrunch.com/2026/04/10/anthropic-temporarily-banned-openclaws-creator-from-accessing-claude/) that Steinberger's Anthropic account was suspended last week over \"suspicious activity.\" The ban followed Anthropic's earlier move to stop Claude subscriptions from covering third-party harnesses like OpenClaw — what Steinberger publicly called the \"claw tax.\"\n\nThe suspension lasted only a few hours. After Steinberger posted about it on X, it went viral, and an Anthropic engineer stepped in to clarify that Anthropic had \"never banned anyone for using OpenClaw\" and offered to help. The account was reinstated.\n\nSteinberger's read on the situation was pointed: when asked why he was using Claude at all given his job at OpenAI, he explained he uses it purely for testing — to make sure OpenClaw updates don't break Claude users. \"You need to separate two things. My work at the OpenClaw Foundation where we want to make OpenClaw work great for *any* model provider, and my job at OpenAI to help them with future product strategy.\"\n\nThe pricing change that preceded all this — Anthropic requiring API-based consumption billing for OpenClaw usage rather than covering it through subscriptions — appears to remain in effect. The ban incident was a separate, quickly-resolved issue on top of it.\n\n## Ask HN: \"Who Is Using OpenClaw?\" Hits Front Page With 172 Comments\n\nA [Hacker News thread](https://news.ycombinator.com/item?id=47783940) posted today asked simply: \"Who is using OpenClaw?\" The original poster admitted they didn't use it personally and neither did anyone in their circle, \"even though I feel like I'm super plugged into the AI world.\"\n\nThe thread hit Hacker News front page with 138 points and 172 comments as of this writing — an unusually active discussion for a tool-adoption question. The range of responses covered everything from power users describing complex multi-agent setups to skeptics questioning whether personal AI agents are genuinely useful yet.\n\nIt is a rare look at the gap between OpenClaw's developer mindshare and its actual adoption curve. For a tool that dominates GitHub trending and developer chatter, the HN community's candid responses are worth a read if you are trying to understand where the product actually sits in the market.\n\n## ArmorClaw: Intent Assurance for OpenClaw Agents\n\nA new plugin called **ArmorClaw** launched this week via [Show HN](https://news.ycombinator.com/item?id=47774344) (14 points) and [Product Hunt](https://claw.armoriq.ai/). It describes itself as \"intent assurance\" — the idea being that OpenClaw grants agents the ability to act, but doesn't verify that those actions match what you actually asked for.\n\nArmorClaw inserts itself at the reasoning layer: it captures an agent's declared intent before execution, evaluates it against configurable policies, and blocks any actions outside that plan before they run. The example the developer gives: if you ask an agent to email your dad, it should only need the email tool — if it also tries to read your calendar, ArmorClaw rejects it.\n\nThe plugin is open source ([GitHub](https://github.com/armoriq/armorclaw)) and installs with a single command into an existing OpenClaw setup. Free tier supports up to 5 agents and 30 intent calls per day. The Pro tier ($20/month) adds unlimited agents, custom YAML policy support, and 90-day audit logs. A vulnerability scanner for OpenClaw skill endpoints is listed as coming soon.\n\nThe developer noted they run OpenClaw agents with access to email, calendar, and files themselves, and built ArmorClaw out of genuine concern about unintended autonomous behavior — not a theoretical one. It is the kind of tool that tends to find an audience once people have had a few \"my agent did something I didn't ask it to\" moments.\n\n---\n\n*Also surfaced today: a Show HN for [Springdrift](https://github.com/seamus-brady/springdrift), a persistent auditable BEAM-based agent runtime that explicitly positions itself as doing \"everything OpenClaw can do\" with additional safety metacognition. Worth watching for those interested in the broader personal-agent safety landscape.*",
      "content_html": "<p>A busy week in the OpenClaw ecosystem. Peter Steinberger's Anthropic account got suspended and reinstated in a matter of hours, an \"Ask HN: Who is using OpenClaw?\" thread hit Hacker News front page with 172 comments, and a new community-built plugin called ArmorClaw launched on Product Hunt. Here is the breakdown.</p><h2>Anthropic Briefly Bans OpenClaw's Creator — Then Reinstates Him</h2><p><a href=\"https://techcrunch.com/2026/04/10/anthropic-temporarily-banned-openclaws-creator-from-accessing-claude/\">TechCrunch reported</a> that Steinberger's Anthropic account was suspended last week over \"suspicious activity.\" The ban followed Anthropic's earlier move to stop Claude subscriptions from covering third-party harnesses like OpenClaw — what Steinberger publicly called the \"claw tax.\"</p><p>The suspension lasted only a few hours. After Steinberger posted about it on X, it went viral, and an Anthropic engineer stepped in to clarify that Anthropic had \"never banned anyone for using OpenClaw\" and offered to help. The account was reinstated.</p><p>Steinberger's read on the situation was pointed: when asked why he was using Claude at all given his job at OpenAI, he explained he uses it purely for testing — to make sure OpenClaw updates don't break Claude users. \"You need to separate two things. My work at the OpenClaw Foundation where we want to make OpenClaw work great for <em>any</em> model provider, and my job at OpenAI to help them with future product strategy.\"</p><p>The pricing change that preceded all this — Anthropic requiring API-based consumption billing for OpenClaw usage rather than covering it through subscriptions — appears to remain in effect. The ban incident was a separate, quickly-resolved issue on top of it.</p><h2>Ask HN: \"Who Is Using OpenClaw?\" Hits Front Page With 172 Comments</h2><p>A <a href=\"https://news.ycombinator.com/item?id=47783940\">Hacker News thread</a> posted today asked simply: \"Who is using OpenClaw?\" The original poster admitted they didn't use it personally and neither did anyone in their circle, \"even though I feel like I'm super plugged into the AI world.\"</p><p>The thread hit Hacker News front page with 138 points and 172 comments as of this writing — an unusually active discussion for a tool-adoption question. The range of responses covered everything from power users describing complex multi-agent setups to skeptics questioning whether personal AI agents are genuinely useful yet.</p><p>It is a rare look at the gap between OpenClaw's developer mindshare and its actual adoption curve. For a tool that dominates GitHub trending and developer chatter, the HN community's candid responses are worth a read if you are trying to understand where the product actually sits in the market.</p><h2>ArmorClaw: Intent Assurance for OpenClaw Agents</h2><p>A new plugin called <strong>ArmorClaw</strong> launched this week via <a href=\"https://news.ycombinator.com/item?id=47774344\">Show HN</a> (14 points) and <a href=\"https://claw.armoriq.ai/\">Product Hunt</a>. It describes itself as \"intent assurance\" — the idea being that OpenClaw grants agents the ability to act, but doesn't verify that those actions match what you actually asked for.</p><p>ArmorClaw inserts itself at the reasoning layer: it captures an agent's declared intent before execution, evaluates it against configurable policies, and blocks any actions outside that plan before they run. The example the developer gives: if you ask an agent to email your dad, it should only need the email tool — if it also tries to read your calendar, ArmorClaw rejects it.</p><p>The plugin is open source (<a href=\"https://github.com/armoriq/armorclaw\">GitHub</a>) and installs with a single command into an existing OpenClaw setup. Free tier supports up to 5 agents and 30 intent calls per day. The Pro tier ($20/month) adds unlimited agents, custom YAML policy support, and 90-day audit logs. A vulnerability scanner for OpenClaw skill endpoints is listed as coming soon.</p><p>The developer noted they run OpenClaw agents with access to email, calendar, and files themselves, and built ArmorClaw out of genuine concern about unintended autonomous behavior — not a theoretical one. It is the kind of tool that tends to find an audience once people have had a few \"my agent did something I didn't ask it to\" moments.</p><p>---</p><p><em>Also surfaced today: a Show HN for <a href=\"https://github.com/seamus-brady/springdrift\">Springdrift</a>, a persistent auditable BEAM-based agent runtime that explicitly positions itself as doing \"everything OpenClaw can do\" with additional safety metacognition. Worth watching for those interested in the broader personal-agent safety landscape.</em></p>",
      "date_published": "2026-04-15T23:02:00.000Z",
      "date_modified": "2026-04-15T23:02:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-15-community-roundup.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-15-beta-features/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-15-beta-features/",
      "title": "OpenClaw v2026.4.15 Beta: Cloud Memory, Copilot Search, Lean Local Models",
      "summary": "OpenClaw's latest beta adds LanceDB cloud storage, GitHub Copilot embedding support, a Control UI OAuth health card, and a slim mode for local models.",
      "content_text": "Alongside its security hardening, OpenClaw v2026.4.15-beta.1 delivers four meaningful capability additions. Cloud-backed memory indexes, GitHub Copilot as an embedding provider, an OAuth health card in Control UI, and a new lean mode for local model deployments — here is what each one does.\n\n## Control UI: OAuth Token Health at a Glance\n\n**PR [#66211](https://github.com/openclaw/openclaw/pull/66211)**\n\nA new **Model Auth status card** in the Control UI Overview shows OAuth token health and provider rate-limit pressure at a glance. It surfaces attention callouts when OAuth tokens are expiring or have already expired. The underlying `models.authStatus` gateway method strips credentials before sending and caches results for 60 seconds to avoid hammering providers on every refresh. Thanks to [@omarshahine](https://github.com/omarshahine).\n\nThis is a practical quality-of-life improvement for anyone running multiple model providers. Silent token expiry has historically been a silent failure mode — you'd only discover the problem when a request failed, not proactively.\n\n## Memory/LanceDB: Cloud Storage Support\n\n**PR [#63502](https://github.com/openclaw/openclaw/pull/63502)**\n\nThe `memory-lancedb` plugin can now store durable memory indexes on remote object storage instead of local disk only. This unlocks persistent, portable memory for cloud-hosted OpenClaw deployments where writing to local disk is impractical — think serverless environments, multi-instance setups, or stateless containers.\n\nThe implementation keeps the local-disk path intact as the default. Remote storage is opt-in via configuration. Thanks to [@rugvedS07](https://github.com/rugvedS07).\n\n## GitHub Copilot Embedding Provider\n\n**PR [#61718](https://github.com/openclaw/openclaw/pull/61718)**\n\nOpenClaw memory search can now use GitHub Copilot as an embedding provider. A dedicated Copilot embedding host helper is exposed for plugins to reuse the transport while honoring remote overrides, token refresh, and safer payload validation.\n\nFor developers already authenticated with GitHub Copilot through their IDE, this means memory-search embeddings can run through that same credential without managing a separate OpenAI or other embedding API key. Thanks to [@feiskyer](https://github.com/feiskyer) and [@vincentkoc](https://github.com/vincentkoc).\n\n## Experimental: localModelLean Mode\n\n**PR [#66495](https://github.com/openclaw/openclaw/pull/66495)**\n\nA new experimental flag, `agents.defaults.experimental.localModelLean: true`, drops heavyweight default tools — including browser, cron, and message — from the agent's tool set when running on local models. This reduces prompt size significantly for setups where a weaker local model is being used and the full tool surface is unnecessary overhead.\n\nThe flag leaves the normal path completely unchanged. It is opt-in and documented as experimental. Thanks to [@ImLukeF](https://github.com/ImLukeF).\n\nThis is a thoughtful addition. Local models often run on constrained hardware and have smaller context windows. Trimming 10–15 tool definitions from the system prompt can meaningfully improve response quality on models like Ollama's smaller variants.\n\n## Packaging: Leaner Builds\n\n**PR [#67099](https://github.com/openclaw/openclaw/pull/67099)**\n\nPlugin runtime dependencies are now localized to their owning extensions, and the published docs payload has been trimmed. Install and package-manager guardrails are tighter, so published builds stay leaner and the core package stops carrying extension-owned runtime baggage. Thanks to [@vincentkoc](https://github.com/vincentkoc).\n\n## Status\n\nAll of these changes are in **v2026.4.15-beta.1** — a pre-release. The stable `v2026.4.15` has not yet been tagged. See the [full pre-release changelog](https://github.com/openclaw/openclaw/releases/tag/v2026.4.15-beta.1) for the complete list of fixes also included in this build.",
      "content_html": "<p>Alongside its security hardening, OpenClaw v2026.4.15-beta.1 delivers four meaningful capability additions. Cloud-backed memory indexes, GitHub Copilot as an embedding provider, an OAuth health card in Control UI, and a new lean mode for local model deployments — here is what each one does.</p><h2>Control UI: OAuth Token Health at a Glance</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66211\">#66211</a></strong></p><p>A new <strong>Model Auth status card</strong> in the Control UI Overview shows OAuth token health and provider rate-limit pressure at a glance. It surfaces attention callouts when OAuth tokens are expiring or have already expired. The underlying <code>models.authStatus</code> gateway method strips credentials before sending and caches results for 60 seconds to avoid hammering providers on every refresh. Thanks to <a href=\"https://github.com/omarshahine\">@omarshahine</a>.</p><p>This is a practical quality-of-life improvement for anyone running multiple model providers. Silent token expiry has historically been a silent failure mode — you'd only discover the problem when a request failed, not proactively.</p><h2>Memory/LanceDB: Cloud Storage Support</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/63502\">#63502</a></strong></p><p>The <code>memory-lancedb</code> plugin can now store durable memory indexes on remote object storage instead of local disk only. This unlocks persistent, portable memory for cloud-hosted OpenClaw deployments where writing to local disk is impractical — think serverless environments, multi-instance setups, or stateless containers.</p><p>The implementation keeps the local-disk path intact as the default. Remote storage is opt-in via configuration. Thanks to <a href=\"https://github.com/rugvedS07\">@rugvedS07</a>.</p><h2>GitHub Copilot Embedding Provider</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/61718\">#61718</a></strong></p><p>OpenClaw memory search can now use GitHub Copilot as an embedding provider. A dedicated Copilot embedding host helper is exposed for plugins to reuse the transport while honoring remote overrides, token refresh, and safer payload validation.</p><p>For developers already authenticated with GitHub Copilot through their IDE, this means memory-search embeddings can run through that same credential without managing a separate OpenAI or other embedding API key. Thanks to <a href=\"https://github.com/feiskyer\">@feiskyer</a> and <a href=\"https://github.com/vincentkoc\">@vincentkoc</a>.</p><h2>Experimental: localModelLean Mode</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66495\">#66495</a></strong></p><p>A new experimental flag, <code>agents.defaults.experimental.localModelLean: true</code>, drops heavyweight default tools — including browser, cron, and message — from the agent's tool set when running on local models. This reduces prompt size significantly for setups where a weaker local model is being used and the full tool surface is unnecessary overhead.</p><p>The flag leaves the normal path completely unchanged. It is opt-in and documented as experimental. Thanks to <a href=\"https://github.com/ImLukeF\">@ImLukeF</a>.</p><p>This is a thoughtful addition. Local models often run on constrained hardware and have smaller context windows. Trimming 10–15 tool definitions from the system prompt can meaningfully improve response quality on models like Ollama's smaller variants.</p><h2>Packaging: Leaner Builds</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/67099\">#67099</a></strong></p><p>Plugin runtime dependencies are now localized to their owning extensions, and the published docs payload has been trimmed. Install and package-manager guardrails are tighter, so published builds stay leaner and the core package stops carrying extension-owned runtime baggage. Thanks to <a href=\"https://github.com/vincentkoc\">@vincentkoc</a>.</p><h2>Status</h2><p>All of these changes are in <strong>v2026.4.15-beta.1</strong> — a pre-release. The stable <code>v2026.4.15</code> has not yet been tagged. See the <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.15-beta.1\">full pre-release changelog</a> for the complete list of fixes also included in this build.</p>",
      "date_published": "2026-04-15T23:01:00.000Z",
      "date_modified": "2026-04-15T23:01:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-15-beta-features.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-15-security-hardening/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-15-security-hardening/",
      "title": "OpenClaw v2026.4.15 Beta: Six Security Fixes You Should Know",
      "summary": "The latest OpenClaw beta patches secret leaks in exec prompts, path traversal in memory tools, and a timing gap in MCP loopback auth. Here is what changed.",
      "content_text": "OpenClaw v2026.4.15-beta.1 dropped today carrying one of the more security-dense changelogs in recent memory. Six distinct hardening fixes land in this release, spanning exec approvals, memory access controls, workspace file handling, MCP authentication, the Feishu channel, and gateway bearer token rotation. None carry a public CVE yet — but several patch meaningful exposure surfaces that operators should understand before the stable release lands.\n\n## What Was Fixed\n\n### 1. Secrets No Longer Leak in Exec Approval Prompts\n\n**PR [#64790](https://github.com/openclaw/openclaw/pull/64790) — Issue [#61077](https://github.com/openclaw/openclaw/issues/61077)**\n\nInline approval review could previously render credential material that appeared in exec command arguments. The fix redacts secrets before the approval prompt is composed. If you use exec approval flows and your commands ever reference tokens or API keys, this matters directly.\n\n### 2. QMD Memory Backend Path Traversal Closed\n\n**PR [#66026](https://github.com/openclaw/openclaw/pull/66026)**\n\nThe `memory_get` tool on the QMD backend previously accepted arbitrary workspace markdown paths, effectively allowing it to be used as a generic file-read shim that bypassed the read tool's policy denials. The fix restricts reads to canonical memory files (`MEMORY.md`, `memory/**`, `DREAMS.md`) and exact paths of active indexed QMD workspace documents. Thanks to [@eleqtrizit](https://github.com/eleqtrizit).\n\n### 3. Workspace File Access Routes Through fs-safe Helpers\n\n**PR [#66636](https://github.com/openclaw/openclaw/pull/66636)**\n\n`agents.files.get`, `agents.files.set`, and workspace listing now route through the shared `openFileWithinRoot` / `readFileWithinRoot` / `writeFileWithinRoot` helpers. The fix also rejects symlink aliases for allowlisted agent files and resolves opened-file real paths from the file descriptor before falling back to path-based `realpath` — closing a window where a symlink swap between open and realpath could redirect a validated path off its intended inode. Thanks to [@eleqtrizit](https://github.com/eleqtrizit).\n\n### 4. MCP Loopback Bearer Comparison Is Now Constant-Time\n\n**PR [#66665](https://github.com/openclaw/openclaw/pull/66665)**\n\nThe `/mcp` bearer comparison previously used a plain `!==` operator. It now uses `safeEqualSecret`, matching every other auth surface in the codebase. The fix also adds a `checkBrowserOrigin` guard to reject non-loopback browser-origin requests before the auth gate runs. Loopback origins (127.0.0.1, localhost, same-origin) still pass through — including the localhost↔127.0.0.1 host mismatch that browsers flag as `Sec-Fetch-Site: cross-site`. Thanks to [@eleqtrizit](https://github.com/eleqtrizit).\n\n### 5. Feishu Webhook Fails Closed Without encryptKey\n\n**PR [#66707](https://github.com/openclaw/openclaw/pull/66707)**\n\nThe Feishu webhook transport now refuses to start without an `encryptKey` and rejects unsigned requests when no key is present instead of accepting them. Blank card-action callback tokens are dropped before the dedupe claim and dispatcher. The fix is described as defense-in-depth over an already-closed monitor-account layer. Thanks to [@eleqtrizit](https://github.com/eleqtrizit).\n\n### 6. Gateway Bearer Token Hot-Reload Takes Effect Immediately\n\n**PR [#66651](https://github.com/openclaw/openclaw/pull/66651)**\n\nAfter a `secrets.reload` or config hot-reload, the active gateway bearer token was only invalidated on the WebSocket path. HTTP remained valid until gateway restart. The fix resolves the active bearer per-request on both the HTTP server and the HTTP upgrade handler via `getResolvedAuth()`, matching the WebSocket path behavior. Thanks to [@mmaps](https://github.com/mmaps).\n\n## Who Should Pay Attention\n\n- **Self-hosters with exec approval flows** — upgrade path traversal and exec secret redaction are directly relevant.\n- **Memory plugin users** — the QMD `memory_get` restriction matters if you use workspace documents as memory sources.\n- **MCP integrations** — the constant-time comparison and browser-origin guard apply to anyone exposing the MCP endpoint.\n- **Feishu deployments** — the webhook hardening is significant if your encryptKey configuration is incomplete.\n\n## Status\n\nThis is a **pre-release**. The stable `v2026.4.15` has not yet been tagged. Track progress at the [OpenClaw releases page](https://github.com/openclaw/openclaw/releases). The full pre-release changelog includes additional bug fixes across BlueBubbles, Telegram, Slack, OpenRouter/Qwen3, and more.",
      "content_html": "<p>OpenClaw v2026.4.15-beta.1 dropped today carrying one of the more security-dense changelogs in recent memory. Six distinct hardening fixes land in this release, spanning exec approvals, memory access controls, workspace file handling, MCP authentication, the Feishu channel, and gateway bearer token rotation. None carry a public CVE yet — but several patch meaningful exposure surfaces that operators should understand before the stable release lands.</p><h2>What Was Fixed</h2><h3>1. Secrets No Longer Leak in Exec Approval Prompts</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/64790\">#64790</a> — Issue <a href=\"https://github.com/openclaw/openclaw/issues/61077\">#61077</a></strong></p><p>Inline approval review could previously render credential material that appeared in exec command arguments. The fix redacts secrets before the approval prompt is composed. If you use exec approval flows and your commands ever reference tokens or API keys, this matters directly.</p><h3>2. QMD Memory Backend Path Traversal Closed</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66026\">#66026</a></strong></p><p>The <code>memory_get</code> tool on the QMD backend previously accepted arbitrary workspace markdown paths, effectively allowing it to be used as a generic file-read shim that bypassed the read tool's policy denials. The fix restricts reads to canonical memory files (<code>MEMORY.md</code>, <code>memory/<em></em></code>, <code>DREAMS.md</code>) and exact paths of active indexed QMD workspace documents. Thanks to <a href=\"https://github.com/eleqtrizit\">@eleqtrizit</a>.</p><h3>3. Workspace File Access Routes Through fs-safe Helpers</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66636\">#66636</a></strong></p><p><code>agents.files.get</code>, <code>agents.files.set</code>, and workspace listing now route through the shared <code>openFileWithinRoot</code> / <code>readFileWithinRoot</code> / <code>writeFileWithinRoot</code> helpers. The fix also rejects symlink aliases for allowlisted agent files and resolves opened-file real paths from the file descriptor before falling back to path-based <code>realpath</code> — closing a window where a symlink swap between open and realpath could redirect a validated path off its intended inode. Thanks to <a href=\"https://github.com/eleqtrizit\">@eleqtrizit</a>.</p><h3>4. MCP Loopback Bearer Comparison Is Now Constant-Time</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66665\">#66665</a></strong></p><p>The <code>/mcp</code> bearer comparison previously used a plain <code>!==</code> operator. It now uses <code>safeEqualSecret</code>, matching every other auth surface in the codebase. The fix also adds a <code>checkBrowserOrigin</code> guard to reject non-loopback browser-origin requests before the auth gate runs. Loopback origins (127.0.0.1, localhost, same-origin) still pass through — including the localhost↔127.0.0.1 host mismatch that browsers flag as <code>Sec-Fetch-Site: cross-site</code>. Thanks to <a href=\"https://github.com/eleqtrizit\">@eleqtrizit</a>.</p><h3>5. Feishu Webhook Fails Closed Without encryptKey</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66707\">#66707</a></strong></p><p>The Feishu webhook transport now refuses to start without an <code>encryptKey</code> and rejects unsigned requests when no key is present instead of accepting them. Blank card-action callback tokens are dropped before the dedupe claim and dispatcher. The fix is described as defense-in-depth over an already-closed monitor-account layer. Thanks to <a href=\"https://github.com/eleqtrizit\">@eleqtrizit</a>.</p><h3>6. Gateway Bearer Token Hot-Reload Takes Effect Immediately</h3><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/66651\">#66651</a></strong></p><p>After a <code>secrets.reload</code> or config hot-reload, the active gateway bearer token was only invalidated on the WebSocket path. HTTP remained valid until gateway restart. The fix resolves the active bearer per-request on both the HTTP server and the HTTP upgrade handler via <code>getResolvedAuth()</code>, matching the WebSocket path behavior. Thanks to <a href=\"https://github.com/mmaps\">@mmaps</a>.</p><h2>Who Should Pay Attention</h2><ul><li><strong>Self-hosters with exec approval flows</strong> — upgrade path traversal and exec secret redaction are directly relevant.</li><li><strong>Memory plugin users</strong> — the QMD <code>memory_get</code> restriction matters if you use workspace documents as memory sources.</li><li><strong>MCP integrations</strong> — the constant-time comparison and browser-origin guard apply to anyone exposing the MCP endpoint.</li><li><strong>Feishu deployments</strong> — the webhook hardening is significant if your encryptKey configuration is incomplete.</li></ul><h2>Status</h2><p>This is a <strong>pre-release</strong>. The stable <code>v2026.4.15</code> has not yet been tagged. Track progress at the <a href=\"https://github.com/openclaw/openclaw/releases\">OpenClaw releases page</a>. The full pre-release changelog includes additional bug fixes across BlueBubbles, Telegram, Slack, OpenRouter/Qwen3, and more.</p>",
      "date_published": "2026-04-15T23:00:00.000Z",
      "date_modified": "2026-04-15T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-15-security-hardening.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-15-install-security-hardening/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-15-install-security-hardening/",
      "title": "OpenClaw Hardens Install Path: Dist Integrity Checks and pnpm Runner Fix",
      "summary": "Two PRs merged April 15 tighten OpenClaw's install and update infrastructure, adding dist inventory verification and securing the pnpm binary runner.",
      "content_text": "Two infrastructure-focused pull requests landed in OpenClaw's `main` branch this morning, both targeting the install and update pipeline. Together they address reliability gaps around stale dist files and how OpenClaw invokes the pnpm binary at install time.\n\n## PR #66959: Prune Stale Dist Chunks After npm Upgrades\n\nContributor **obviyus** merged [PR #66959](https://github.com/openclaw/openclaw/pull/66959) to tackle a longstanding annoyance: stale hashed dist chunks left behind after `npm install -g openclaw@latest` upgrades.\n\nOpenClaw bundles its runtime into hashed chunk files under `dist/`. When a new version ships with differently-named chunks, the old files linger — and in some cases the old entrypoint could reference the wrong chunk at runtime. This is the root cause behind the infamous `ERR_MODULE_NOT_FOUND` errors after upgrades that have appeared in community reports.\n\nThe fix introduces a **dist inventory file** (`dist/postinstall-inventory.json`) that records which files belong to a given release. During postinstall, any `dist/` file not listed in the inventory is pruned. This keeps the installed dist clean across upgrades without requiring users to manually wipe their global install.\n\n### Security Analysis Findings\n\nThe Aisle Security bot flagged three medium-severity considerations in this PR, all related to how the new inventory mechanism handles edge cases:\n\n- **Missing inventory causes install failure** (CWE-703): If `postinstall-inventory.json` is absent (e.g., upgrade from a pre-inventory release), the postinstall script would throw instead of skipping the prune step gracefully.\n- **Inventory file trusted as integrity authority** (CWE-345): The inventory is a local file — in a compromised-package scenario, an attacker could tamper with both the dist tree and the inventory to evade verification checks.\n- **Fail-open when inventory is missing** (CWE-693): Without the inventory, unexpected files in `dist/extensions/` (which OpenClaw scans for bundled plugins) would go undetected.\n\nThe maintainers merged despite these findings. Likely follow-up work will address the fallback and signing concerns in subsequent PRs.\n\n## PR #66987: Avoid Running Native pnpm Binaries Through Node\n\n[PR #66987](https://github.com/openclaw/openclaw/pull/66987) by **obviyus** refines how OpenClaw's `pnpm-runner.mjs` script decides whether to invoke pnpm via Node or as a native binary. Previously, it read `process.env.npm_execpath` to determine if the available pnpm entrypoint was a Node-runnable script (`.js`/`.cjs`/`.mjs`) or a native binary — and if native, it was mistakenly being routed through Node anyway in some environments.\n\nThe fix narrows the detection logic so native pnpm binaries (installed via Corepack or system package managers) are invoked directly, rather than being wrapped by Node — which could cause them to silently fail or behave unexpectedly.\n\n### Security Analysis Findings\n\nAisle Security raised two medium concerns here as well:\n\n- **DoS via blocking I/O on attacker-controlled `npm_execpath`** (CWE-400): The shebang-detection helper performs synchronous `openSync`/`readSync` on the path from `npm_execpath` without checking if it's a regular file first. A FIFO or slow network FS path could block indefinitely.\n- **Arbitrary code execution via untrusted `npm_execpath`** (CWE-94): The validation for whether to run a file via Node only checks the basename and file extension/shebang marker — not the actual resolved path — meaning a crafted file named `pnpm` with a `.js` extension in an attacker-controlled location could be executed.\n\nAgain, the maintainers reviewed and merged. These are install-time risks that require attacker control of the environment, so the practical blast radius is narrow — but worth watching for follow-up hardening.\n\n## What This Means for Users\n\nNeither change is user-visible in normal operation. But if you've hit:\n\n- `ERR_MODULE_NOT_FOUND` errors after `openclaw` upgrades\n- Broken pnpm invocations during install in non-standard npm environments\n\n…these fixes are aimed squarely at your pain points. Both land in the next OpenClaw release. Follow the [GitHub repository](https://github.com/openclaw/openclaw/releases) for the release announcement.",
      "content_html": "<p>Two infrastructure-focused pull requests landed in OpenClaw's <code>main</code> branch this morning, both targeting the install and update pipeline. Together they address reliability gaps around stale dist files and how OpenClaw invokes the pnpm binary at install time.</p><h2>PR #66959: Prune Stale Dist Chunks After npm Upgrades</h2><p>Contributor <strong>obviyus</strong> merged <a href=\"https://github.com/openclaw/openclaw/pull/66959\">PR #66959</a> to tackle a longstanding annoyance: stale hashed dist chunks left behind after <code>npm install -g openclaw@latest</code> upgrades.</p><p>OpenClaw bundles its runtime into hashed chunk files under <code>dist/</code>. When a new version ships with differently-named chunks, the old files linger — and in some cases the old entrypoint could reference the wrong chunk at runtime. This is the root cause behind the infamous <code>ERR_MODULE_NOT_FOUND</code> errors after upgrades that have appeared in community reports.</p><p>The fix introduces a <strong>dist inventory file</strong> (<code>dist/postinstall-inventory.json</code>) that records which files belong to a given release. During postinstall, any <code>dist/</code> file not listed in the inventory is pruned. This keeps the installed dist clean across upgrades without requiring users to manually wipe their global install.</p><h3>Security Analysis Findings</h3><p>The Aisle Security bot flagged three medium-severity considerations in this PR, all related to how the new inventory mechanism handles edge cases:</p><ul><li><strong>Missing inventory causes install failure</strong> (CWE-703): If <code>postinstall-inventory.json</code> is absent (e.g., upgrade from a pre-inventory release), the postinstall script would throw instead of skipping the prune step gracefully.</li><li><strong>Inventory file trusted as integrity authority</strong> (CWE-345): The inventory is a local file — in a compromised-package scenario, an attacker could tamper with both the dist tree and the inventory to evade verification checks.</li><li><strong>Fail-open when inventory is missing</strong> (CWE-693): Without the inventory, unexpected files in <code>dist/extensions/</code> (which OpenClaw scans for bundled plugins) would go undetected.</li></ul><p>The maintainers merged despite these findings. Likely follow-up work will address the fallback and signing concerns in subsequent PRs.</p><h2>PR #66987: Avoid Running Native pnpm Binaries Through Node</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/66987\">PR #66987</a> by <strong>obviyus</strong> refines how OpenClaw's <code>pnpm-runner.mjs</code> script decides whether to invoke pnpm via Node or as a native binary. Previously, it read <code>process.env.npm_execpath</code> to determine if the available pnpm entrypoint was a Node-runnable script (<code>.js</code>/<code>.cjs</code>/<code>.mjs</code>) or a native binary — and if native, it was mistakenly being routed through Node anyway in some environments.</p><p>The fix narrows the detection logic so native pnpm binaries (installed via Corepack or system package managers) are invoked directly, rather than being wrapped by Node — which could cause them to silently fail or behave unexpectedly.</p><h3>Security Analysis Findings</h3><p>Aisle Security raised two medium concerns here as well:</p><ul><li><strong>DoS via blocking I/O on attacker-controlled <code>npm_execpath</code></strong> (CWE-400): The shebang-detection helper performs synchronous <code>openSync</code>/<code>readSync</code> on the path from <code>npm_execpath</code> without checking if it's a regular file first. A FIFO or slow network FS path could block indefinitely.</li><li><strong>Arbitrary code execution via untrusted <code>npm_execpath</code></strong> (CWE-94): The validation for whether to run a file via Node only checks the basename and file extension/shebang marker — not the actual resolved path — meaning a crafted file named <code>pnpm</code> with a <code>.js</code> extension in an attacker-controlled location could be executed.</li></ul><p>Again, the maintainers reviewed and merged. These are install-time risks that require attacker control of the environment, so the practical blast radius is narrow — but worth watching for follow-up hardening.</p><h2>What This Means for Users</h2><p>Neither change is user-visible in normal operation. But if you've hit:</p><ul><li><code>ERR_MODULE_NOT_FOUND</code> errors after <code>openclaw</code> upgrades</li><li>Broken pnpm invocations during install in non-standard npm environments</li></ul><p>…these fixes are aimed squarely at your pain points. Both land in the next OpenClaw release. Follow the <a href=\"https://github.com/openclaw/openclaw/releases\">GitHub repository</a> for the release announcement.</p>",
      "date_published": "2026-04-15T08:05:00.000Z",
      "date_modified": "2026-04-15T08:05:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-15-install-security-hardening.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-15-plugin-fault-isolation/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-15-plugin-fault-isolation/",
      "title": "OpenClaw Fixes Plugin Fault Isolation: Bad Engines Won't Crash All Channels",
      "summary": "A merged fix ensures a failing third-party context engine no longer takes down every OpenClaw channel simultaneously, landing in the next release.",
      "content_text": "If you run third-party context engine plugins — like the popular `lossless-claw` — a single faulty plugin could silently kill every channel connected to your OpenClaw instance. Discord, Telegram, WebChat: all unresponsive, with no obvious error pointing at the culprit. That changes with [PR #66930](https://github.com/openclaw/openclaw/pull/66930), merged today by contributor **openperf**.\n\n## What Was Breaking\n\nOpenClaw's `resolveContextEngine()` function is responsible for wiring up the active context engine for each agent run. When a third-party plugin registers a context engine factory that later throws during resolution — or returns an object that violates the `ContextEngine` contract — the exception previously propagated all the way up and crashed the turn.\n\nThe real sting: the broken factory stayed registered in the **process-global plugin registry**. That meant every subsequent agent run on every connected channel would hit the same failure. You'd effectively have a silent, permanent outage until you manually restarted the gateway or removed the offending plugin.\n\nAs [issue #66887](https://github.com/openclaw/openclaw/issues/66887) documents, this wasn't a theoretical edge case — it was biting users with real third-party plugin setups.\n\n## The Fix\n\nThe fix introduces graceful fallback behavior in `resolveContextEngine()`. When a registered factory:\n\n- **throws during resolution**, or\n- **returns an object that fails the `ContextEngine` contract check**\n\n…OpenClaw now catches the error, logs it, and falls back to the **default legacy engine** instead of propagating the failure.\n\nThis makes context engine plugin failures self-contained. A bad plugin crashes its own resolution path, not the entire agent runtime. Subsequent turns on all channels continue working normally with the fallback engine active.\n\nThe PR also adds test coverage for both failure modes — factory-throws and contract-violation — so this class of regression has guardrails going forward.\n\n## Why It Matters\n\nContext engine plugins are one of the more powerful extension points in OpenClaw. They control how conversation context is built, compacted, and passed to the model. The ecosystem of third-party context engines is growing, and with more plugins comes more surface area for version mismatches and API contract violations.\n\nFault isolation at the plugin boundary is table-stakes infrastructure for a system that's meant to run unattended. This fix brings `resolveContextEngine()` in line with how OpenClaw already handles other plugin-failure modes — fail gracefully, keep running, surface the error in logs.\n\n## What to Expect\n\nThis fix is merged to `main` and will ship in the next release (likely **v2026.4.15** or later today). If you're running third-party context engine plugins and have hit mysterious full-instance outages, this is the fix you've been waiting for.\n\nIn the meantime, you can track the fix directly at [PR #66930](https://github.com/openclaw/openclaw/pull/66930) on GitHub.",
      "content_html": "<p>If you run third-party context engine plugins — like the popular <code>lossless-claw</code> — a single faulty plugin could silently kill every channel connected to your OpenClaw instance. Discord, Telegram, WebChat: all unresponsive, with no obvious error pointing at the culprit. That changes with <a href=\"https://github.com/openclaw/openclaw/pull/66930\">PR #66930</a>, merged today by contributor <strong>openperf</strong>.</p><h2>What Was Breaking</h2><p>OpenClaw's <code>resolveContextEngine()</code> function is responsible for wiring up the active context engine for each agent run. When a third-party plugin registers a context engine factory that later throws during resolution — or returns an object that violates the <code>ContextEngine</code> contract — the exception previously propagated all the way up and crashed the turn.</p><p>The real sting: the broken factory stayed registered in the <strong>process-global plugin registry</strong>. That meant every subsequent agent run on every connected channel would hit the same failure. You'd effectively have a silent, permanent outage until you manually restarted the gateway or removed the offending plugin.</p><p>As <a href=\"https://github.com/openclaw/openclaw/issues/66887\">issue #66887</a> documents, this wasn't a theoretical edge case — it was biting users with real third-party plugin setups.</p><h2>The Fix</h2><p>The fix introduces graceful fallback behavior in <code>resolveContextEngine()</code>. When a registered factory:</p><ul><li><strong>throws during resolution</strong>, or</li><li><strong>returns an object that fails the <code>ContextEngine</code> contract check</strong></li></ul><p>…OpenClaw now catches the error, logs it, and falls back to the <strong>default legacy engine</strong> instead of propagating the failure.</p><p>This makes context engine plugin failures self-contained. A bad plugin crashes its own resolution path, not the entire agent runtime. Subsequent turns on all channels continue working normally with the fallback engine active.</p><p>The PR also adds test coverage for both failure modes — factory-throws and contract-violation — so this class of regression has guardrails going forward.</p><h2>Why It Matters</h2><p>Context engine plugins are one of the more powerful extension points in OpenClaw. They control how conversation context is built, compacted, and passed to the model. The ecosystem of third-party context engines is growing, and with more plugins comes more surface area for version mismatches and API contract violations.</p><p>Fault isolation at the plugin boundary is table-stakes infrastructure for a system that's meant to run unattended. This fix brings <code>resolveContextEngine()</code> in line with how OpenClaw already handles other plugin-failure modes — fail gracefully, keep running, surface the error in logs.</p><h2>What to Expect</h2><p>This fix is merged to <code>main</code> and will ship in the next release (likely <strong>v2026.4.15</strong> or later today). If you're running third-party context engine plugins and have hit mysterious full-instance outages, this is the fix you've been waiting for.</p><p>In the meantime, you can track the fix directly at <a href=\"https://github.com/openclaw/openclaw/pull/66930\">PR #66930</a> on GitHub.</p>",
      "date_published": "2026-04-15T08:00:00.000Z",
      "date_modified": "2026-04-15T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-15-plugin-fault-isolation.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-14-release/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-14-release/",
      "title": "OpenClaw 2026.4.14: GPT-5.4 Pro, ReDoS Fix, and Security Hardening",
      "summary": "OpenClaw 2026.4.14 ships GPT-5.4 Pro compatibility, a Control UI ReDoS fix, stronger security hardening, and a flood of Ollama and memory fixes.",
      "content_text": "OpenClaw [2026.4.14](https://github.com/openclaw/openclaw/releases/tag/v2026.4.14) dropped this afternoon — the team describes it as a \"broad quality release focused on model provider improvements for the GPT-5 family and channel provider issues,\" with an emphasis on overall performance through core codebase refactors. There is a lot to unpack.\n\n## GPT-5.4 Pro Is Now First-Class\n\nThe headline new feature is forward-compatibility support for **gpt-5.4-pro** across the OpenAI Codex provider, including correct pricing and limits visibility before the upstream catalog catches up ([#66453](https://github.com/openclaw/openclaw/pull/66453)). The release also maps OpenClaw's minimal thinking mode to OpenAI's supported `low` reasoning effort for GPT-5.4 requests, and canonicalizes the legacy `openai-codex/gpt-5.4-codex` runtime alias to `openai-codex/gpt-5.4` while still honoring per-model overrides.\n\nIf you have been using Codex models, there is also a fix that ensures the `apiKey` is included in the Codex provider catalog output, preventing the Pi ModelRegistry from rejecting the entry and silently dropping all custom models from every provider in `models.json` ([#66180](https://github.com/openclaw/openclaw/pull/66180)).\n\n## Control UI ReDoS Fix\n\nA notable security fix in this release replaces **marked.js** with **markdown-it** in the Control UI ([#46707](https://github.com/openclaw/openclaw/pull/46707)). The old parser was vulnerable to ReDoS — a regex denial-of-service attack where maliciously crafted markdown could freeze the UI indefinitely. If you run the web interface and interact with untrusted content, upgrade as soon as possible.\n\n## Security Hardening Across the Stack\n\nThis release includes several important security fixes merged from the recent AI-assisted security audit:\n\n- **Slack/interactions**: The configured `allowFrom` owner allowlist now correctly applies to channel block-action and modal interactive events, closing a bypass where interactive triggers could skip the allowlist in channels without a users list ([#66028](https://github.com/openclaw/openclaw/pull/66028)).\n- **Media/attachments**: OpenClaw now fails closed when a local attachment path cannot be canonically resolved via `realpath`, preventing a path-traversal downgrade attack ([#66022](https://github.com/openclaw/openclaw/pull/66022)).\n- **Gateway tool**: The model-facing gateway tool now rejects `config.patch` and `config.apply` calls that would enable dangerous flags enumerated by the OpenClaw security audit — for example `dangerouslyDisableDeviceAuth` or `allowInsecureAuth` — while still allowing non-dangerous edits in the same patch ([#62006](https://github.com/openclaw/openclaw/pull/62006)).\n- **Heartbeat**: Owner downgrade is now forced for untrusted `hook:wake` system events ([#66031](https://github.com/openclaw/openclaw/pull/66031)).\n- **Browser/SSRF**: SSRF policy is now enforced on snapshot, screenshot, and tab routes ([#66040](https://github.com/openclaw/openclaw/pull/66040)).\n- **Microsoft Teams**: Sender allowlist checks are now enforced on SSO signin invokes ([#66033](https://github.com/openclaw/openclaw/pull/66033)).\n\n## Ollama Gets Better Timeout and Usage Handling\n\nTwo Ollama fixes stand out. First, the configured `agents.defaults.timeoutSeconds` override is now properly forwarded into the global undici stream timeout, so slow local Ollama runs no longer inherit the default stream cutoff ([#63175](https://github.com/openclaw/openclaw/issues/63175)). Second, `stream_options.include_usage` is now sent for Ollama streaming completions, meaning local Ollama runs finally report real usage numbers instead of bogus prompt-token counts that were triggering premature context compaction ([#64568](https://github.com/openclaw/openclaw/pull/64568)).\n\n## Memory and Embedding Provider Fixes\n\nNon-OpenAI provider prefixes are now preserved when normalizing OpenAI-compatible embedding model refs, fixing a bug where proxy-backed memory providers would fail with `Unknown memory embedding provider` ([#66452](https://github.com/openclaw/openclaw/pull/66452)). Google image generation also gets a fix: a trailing `/openai` suffix is now stripped from configured Google base URLs only when calling the native Gemini image API, so Gemini image requests stop 404-ing without breaking explicit OpenAI-compatible endpoints ([#66445](https://github.com/openclaw/openclaw/pull/66445)).\n\n## Telegram Forum Topics\n\nTelegram operators will appreciate that human topic names are now surfaced in agent context, prompt metadata, and plugin hook metadata by learning them from forum service messages ([#65973](https://github.com/openclaw/openclaw/pull/65973)), and those learned names are now persisted to the session sidecar store so they survive restarts ([#66107](https://github.com/openclaw/openclaw/pull/66107)).\n\n## How to Update\n\n```bash\nopenclaw update\nopenclaw gateway restart\n```\n\nThe full changelog is on [GitHub](https://github.com/openclaw/openclaw/releases/tag/v2026.4.14). As always, check `openclaw doctor` after updating if you run any custom provider or plugin configuration.",
      "content_html": "<p>OpenClaw <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.14\">2026.4.14</a> dropped this afternoon — the team describes it as a \"broad quality release focused on model provider improvements for the GPT-5 family and channel provider issues,\" with an emphasis on overall performance through core codebase refactors. There is a lot to unpack.</p><h2>GPT-5.4 Pro Is Now First-Class</h2><p>The headline new feature is forward-compatibility support for <strong>gpt-5.4-pro</strong> across the OpenAI Codex provider, including correct pricing and limits visibility before the upstream catalog catches up (<a href=\"https://github.com/openclaw/openclaw/pull/66453\">#66453</a>). The release also maps OpenClaw's minimal thinking mode to OpenAI's supported <code>low</code> reasoning effort for GPT-5.4 requests, and canonicalizes the legacy <code>openai-codex/gpt-5.4-codex</code> runtime alias to <code>openai-codex/gpt-5.4</code> while still honoring per-model overrides.</p><p>If you have been using Codex models, there is also a fix that ensures the <code>apiKey</code> is included in the Codex provider catalog output, preventing the Pi ModelRegistry from rejecting the entry and silently dropping all custom models from every provider in <code>models.json</code> (<a href=\"https://github.com/openclaw/openclaw/pull/66180\">#66180</a>).</p><h2>Control UI ReDoS Fix</h2><p>A notable security fix in this release replaces <strong>marked.js</strong> with <strong>markdown-it</strong> in the Control UI (<a href=\"https://github.com/openclaw/openclaw/pull/46707\">#46707</a>). The old parser was vulnerable to ReDoS — a regex denial-of-service attack where maliciously crafted markdown could freeze the UI indefinitely. If you run the web interface and interact with untrusted content, upgrade as soon as possible.</p><h2>Security Hardening Across the Stack</h2><p>This release includes several important security fixes merged from the recent AI-assisted security audit:</p><ul><li><strong>Slack/interactions</strong>: The configured <code>allowFrom</code> owner allowlist now correctly applies to channel block-action and modal interactive events, closing a bypass where interactive triggers could skip the allowlist in channels without a users list (<a href=\"https://github.com/openclaw/openclaw/pull/66028\">#66028</a>).</li><li><strong>Media/attachments</strong>: OpenClaw now fails closed when a local attachment path cannot be canonically resolved via <code>realpath</code>, preventing a path-traversal downgrade attack (<a href=\"https://github.com/openclaw/openclaw/pull/66022\">#66022</a>).</li><li><strong>Gateway tool</strong>: The model-facing gateway tool now rejects <code>config.patch</code> and <code>config.apply</code> calls that would enable dangerous flags enumerated by the OpenClaw security audit — for example <code>dangerouslyDisableDeviceAuth</code> or <code>allowInsecureAuth</code> — while still allowing non-dangerous edits in the same patch (<a href=\"https://github.com/openclaw/openclaw/pull/62006\">#62006</a>).</li><li><strong>Heartbeat</strong>: Owner downgrade is now forced for untrusted <code>hook:wake</code> system events (<a href=\"https://github.com/openclaw/openclaw/pull/66031\">#66031</a>).</li><li><strong>Browser/SSRF</strong>: SSRF policy is now enforced on snapshot, screenshot, and tab routes (<a href=\"https://github.com/openclaw/openclaw/pull/66040\">#66040</a>).</li><li><strong>Microsoft Teams</strong>: Sender allowlist checks are now enforced on SSO signin invokes (<a href=\"https://github.com/openclaw/openclaw/pull/66033\">#66033</a>).</li></ul><h2>Ollama Gets Better Timeout and Usage Handling</h2><p>Two Ollama fixes stand out. First, the configured <code>agents.defaults.timeoutSeconds</code> override is now properly forwarded into the global undici stream timeout, so slow local Ollama runs no longer inherit the default stream cutoff (<a href=\"https://github.com/openclaw/openclaw/issues/63175\">#63175</a>). Second, <code>stream_options.include_usage</code> is now sent for Ollama streaming completions, meaning local Ollama runs finally report real usage numbers instead of bogus prompt-token counts that were triggering premature context compaction (<a href=\"https://github.com/openclaw/openclaw/pull/64568\">#64568</a>).</p><h2>Memory and Embedding Provider Fixes</h2><p>Non-OpenAI provider prefixes are now preserved when normalizing OpenAI-compatible embedding model refs, fixing a bug where proxy-backed memory providers would fail with <code>Unknown memory embedding provider</code> (<a href=\"https://github.com/openclaw/openclaw/pull/66452\">#66452</a>). Google image generation also gets a fix: a trailing <code>/openai</code> suffix is now stripped from configured Google base URLs only when calling the native Gemini image API, so Gemini image requests stop 404-ing without breaking explicit OpenAI-compatible endpoints (<a href=\"https://github.com/openclaw/openclaw/pull/66445\">#66445</a>).</p><h2>Telegram Forum Topics</h2><p>Telegram operators will appreciate that human topic names are now surfaced in agent context, prompt metadata, and plugin hook metadata by learning them from forum service messages (<a href=\"https://github.com/openclaw/openclaw/pull/65973\">#65973</a>), and those learned names are now persisted to the session sidecar store so they survive restarts (<a href=\"https://github.com/openclaw/openclaw/pull/66107\">#66107</a>).</p><h2>How to Update</h2><p>``<code>bash<br />openclaw update<br />openclaw gateway restart<br /></code>`<code></p><p>The full changelog is on <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.14\">GitHub</a>. As always, check </code>openclaw doctor` after updating if you run any custom provider or plugin configuration.</p>",
      "date_published": "2026-04-14T23:00:00.000Z",
      "date_modified": "2026-04-14T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-14-release.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-clawtrace-debug-token-spikes/",
      "url": "https://openclawchronicles.com/posts/openclaw-clawtrace-debug-token-spikes/",
      "title": "ClawTrace: Visualize and Debug Your OpenClaw Agent Runs",
      "summary": "ClawTrace is an open-source OpenClaw plugin that records every agent run as a trace tree, helping you catch token spikes, tool loops, and runaway costs.",
      "content_text": "The origin story is painfully relatable. An OpenClaw agent burned roughly 40 times its normal token budget in under an hour. Root cause: it was appending around 1,500 messages of history to every LLM call. By the time the operator noticed, a task that should have cost three cents had already consumed several dollars — and there was nothing in the logs to catch it.\n\nThat incident prompted Epsilla Cloud to build **[ClawTrace](https://github.com/epsilla-cloud/clawtrace)** — an open-source OpenClaw plugin and accompanying web UI that turns every agent run into an inspectable tree of spans. It showed up on Hacker News today ([Show HN #47769889](https://news.ycombinator.com/item?id=47769889)).\n\n## What ClawTrace Records\n\nThe `@epsilla/clawtrace` plugin hooks into eight OpenClaw event types:\n\n- `session_start` / `session_end`\n- `llm_input` / `llm_output`\n- `before_tool_call` / `after_tool_call`\n- `subagent_spawning` / `subagent_ended`\n\nEvery event is batched and streamed to ClawTrace's ingest service, then materialized via a Databricks Lakeflow SQL pipeline into Iceberg tables and exposed as a Cypher-queryable graph via PuppyGraph. The frontend renders three views for every trace:\n\n- **Execution path** — a collapsible tree with parent-child relationships and per-node cost badges\n- **Call graph** — a force-directed diagram of every agent, model, and tool\n- **Timeline** — a Gantt chart showing where time actually went\n\nClick any node to see the full input/output payload, token counts, duration, and cost.\n\n## Tracy: Ask Questions About Your Own Agent\n\nThe standout feature is **Tracy**, an AI analyst wired directly to the trajectory graph via MCP. Instead of reading logs, you ask questions in plain English:\n\n- *\"Why did my last run cost so much?\"*\n- *\"Which tool is failing most often?\"*\n- *\"Is my context window growing across sessions?\"*\n\nTracy runs live Cypher queries against your data, generates charts, and returns specific answers. The ClawTrace Self-Evolve skill takes this further — install it and your agent will periodically review its own cost and failure patterns, apply fixes, and log what it changed.\n\n## Installation\n\n```bash\nopenclaw plugins install @epsilla/clawtrace\nopenclaw clawtrace setup\nopenclaw gateway restart\n```\n\nPaste your observe key from [clawtrace.ai](https://clawtrace.ai) when prompted. New accounts get 200 free credits and no credit card is required.\n\n## Architecture in Brief\n\nThe stack is: OpenClaw plugin → FastAPI ingest → Databricks Delta Lake → PuppyGraph → FastAPI backend → Next.js 15 UI. Cost estimates support 80+ models with cache-aware pricing across OpenAI, Anthropic, Google, DeepSeek, Mistral, and a range of Chinese models (Qwen, GLM, Kimi, ERNIE) and open-source models (Llama 4/3.x, Mixtral).\n\n## Why This Matters\n\nOpenClaw's built-in logs are useful but they flatten everything into JSON blobs with no execution graph. When agents spawn sub-agents, call tools in loops, or hit context growth issues, root-causing from flat logs is tedious. ClawTrace gives you the observability layer that has been missing from the OpenClaw ecosystem — and the full source is on GitHub under Apache 2.0.\n\nIf you have ever shipped an agent to production and later wondered \"what exactly did it do,\" this is worth installing.",
      "content_html": "<p>The origin story is painfully relatable. An OpenClaw agent burned roughly 40 times its normal token budget in under an hour. Root cause: it was appending around 1,500 messages of history to every LLM call. By the time the operator noticed, a task that should have cost three cents had already consumed several dollars — and there was nothing in the logs to catch it.</p><p>That incident prompted Epsilla Cloud to build <strong><a href=\"https://github.com/epsilla-cloud/clawtrace\">ClawTrace</a></strong> — an open-source OpenClaw plugin and accompanying web UI that turns every agent run into an inspectable tree of spans. It showed up on Hacker News today (<a href=\"https://news.ycombinator.com/item?id=47769889\">Show HN #47769889</a>).</p><h2>What ClawTrace Records</h2><p>The <code>@epsilla/clawtrace</code> plugin hooks into eight OpenClaw event types:</p><ul><li><code>session_start</code> / <code>session_end</code></li><li><code>llm_input</code> / <code>llm_output</code></li><li><code>before_tool_call</code> / <code>after_tool_call</code></li><li><code>subagent_spawning</code> / <code>subagent_ended</code></li></ul><p>Every event is batched and streamed to ClawTrace's ingest service, then materialized via a Databricks Lakeflow SQL pipeline into Iceberg tables and exposed as a Cypher-queryable graph via PuppyGraph. The frontend renders three views for every trace:</p><ul><li><strong>Execution path</strong> — a collapsible tree with parent-child relationships and per-node cost badges</li><li><strong>Call graph</strong> — a force-directed diagram of every agent, model, and tool</li><li><strong>Timeline</strong> — a Gantt chart showing where time actually went</li></ul><p>Click any node to see the full input/output payload, token counts, duration, and cost.</p><h2>Tracy: Ask Questions About Your Own Agent</h2><p>The standout feature is <strong>Tracy</strong>, an AI analyst wired directly to the trajectory graph via MCP. Instead of reading logs, you ask questions in plain English:</p><ul><li><em>\"Why did my last run cost so much?\"</em></li><li><em>\"Which tool is failing most often?\"</em></li><li><em>\"Is my context window growing across sessions?\"</em></li></ul><p>Tracy runs live Cypher queries against your data, generates charts, and returns specific answers. The ClawTrace Self-Evolve skill takes this further — install it and your agent will periodically review its own cost and failure patterns, apply fixes, and log what it changed.</p><h2>Installation</h2><p>``<code>bash<br />openclaw plugins install @epsilla/clawtrace<br />openclaw clawtrace setup<br />openclaw gateway restart<br /></code>``</p><p>Paste your observe key from <a href=\"https://clawtrace.ai\">clawtrace.ai</a> when prompted. New accounts get 200 free credits and no credit card is required.</p><h2>Architecture in Brief</h2><p>The stack is: OpenClaw plugin → FastAPI ingest → Databricks Delta Lake → PuppyGraph → FastAPI backend → Next.js 15 UI. Cost estimates support 80+ models with cache-aware pricing across OpenAI, Anthropic, Google, DeepSeek, Mistral, and a range of Chinese models (Qwen, GLM, Kimi, ERNIE) and open-source models (Llama 4/3.x, Mixtral).</p><h2>Why This Matters</h2><p>OpenClaw's built-in logs are useful but they flatten everything into JSON blobs with no execution graph. When agents spawn sub-agents, call tools in loops, or hit context growth issues, root-causing from flat logs is tedious. ClawTrace gives you the observability layer that has been missing from the OpenClaw ecosystem — and the full source is on GitHub under Apache 2.0.</p><p>If you have ever shipped an agent to production and later wondered \"what exactly did it do,\" this is worth installing.</p>",
      "date_published": "2026-04-14T23:00:00.000Z",
      "date_modified": "2026-04-14T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-clawtrace-debug-token-spikes.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-community-roundup-april-14-2026/",
      "url": "https://openclawchronicles.com/posts/openclaw-community-roundup-april-14-2026/",
      "title": "OpenClaw Community Roundup: Deployment Playbooks, HN Discussions, and RedCrab",
      "summary": "This week the OpenClaw community debated real-world use cases on HN, shared a free deployment playbook site, and released a Claude Code hybrid project.",
      "content_text": "Between today's [2026.4.14 release](https://github.com/openclaw/openclaw/releases/tag/v2026.4.14) and the ClawTrace debut, it has been a busy day in the OpenClaw ecosystem. But a handful of community projects and discussions are also worth surfacing.\n\n## AutoClaw: Free Open-Source Deployment Playbooks\n\n[AutoClaw.sh](https://autoclaw.sh) — which appeared on Hacker News today as \"Toward an Open-Source Playbook for OpenClaw Deployment\" ([#47764352](https://news.ycombinator.com/item?id=47764352)) — is a growing collection of practical, opinionated deployment guides. The site currently covers:\n\n- **Hosting options** — local machine vs. cloud servers, with costs, pros/cons, and technical requirements\n- **Cloudflare Workers** — deploying OpenClaw using Sandbox containers, Cloudflare Access, and optional R2 persistence\n- **Google Workspace integration** — connecting Gmail, Calendar, and Drive via gogcli\n- **Multi-agent workflows** — designing specialist agents, orchestrators, handoffs, and approval points\n- **Running locally** — Docker Compose, local volumes, and the built-in gateway UI\n- **Autonomous SRE agent on GKE** — a step-by-step guide for Cloudflare Workers-based incident investigation connected to a private GKE cluster\n- **When to use OpenClaw (and when not to)** — a decision framework for choosing between OpenClaw, a custom agent loop, Cloudflare Workers AI, or plain LLM API calls\n\nAll content is freely accessible. This is exactly the kind of practical deployment documentation the community has been asking for, filling a gap between the official docs and real production setups.\n\n## Ask HN: What Are You Using OpenClaw For?\n\nAn Ask HN thread ([#47758502](https://news.ycombinator.com/item?id=47758502)) has been collecting real use cases from operators over the past day. With 8 points and several comments, the responses reflect the range of things people are actually running:\n\n- Personal assistant workflows covering email, calendar, and research\n- Home automation pipelines that chain sensors, notifications, and external APIs\n- Internal tools for engineering teams (triage, on-call summaries, changelog drafts)\n- Long-running background agents for data processing and monitoring\n\nThe thread is a useful read if you are evaluating OpenClaw for a new project or looking for deployment patterns from people who are already running it in production.\n\n## RedCrab: \"What If Claude Code and OpenClaw Had a Child?\"\n\nA project called [RedCrab](https://redcrab.ai) appeared on Hacker News today ([#47766906](https://news.ycombinator.com/item?id=47766906)) with an intriguing pitch: what if you combined Claude Code's interactive coding loop with OpenClaw's persistent agent runtime and channel integrations? The project is in early stages but the concept — a coding assistant that lives persistently in your messaging stack and can be directed through normal channels — is generating discussion.\n\n## Mercury: Multi-Agent Canvas Supporting OpenClaw\n\nAlso worth noting from yesterday's HN activity: **Mercury** ([mercury.build](https://mercury.build)), a no-code agent orchestration canvas, explicitly lists OpenClaw as one of the supported agent adapters alongside Claude Code, Devin, and Manus. The Show HN post ([#47758643](https://news.ycombinator.com/item?id=47758643)) raised an interesting architecture question about where memory should live — in the orchestration layer or the individual agent — that is directly relevant to OpenClaw operators thinking about multi-agent setups.\n\n## Quick Links\n\n- [OpenClaw 2026.4.14 Release Notes](https://github.com/openclaw/openclaw/releases/tag/v2026.4.14)\n- [AutoClaw Playbooks](https://autoclaw.sh)\n- [ClawTrace on GitHub](https://github.com/epsilla-cloud/clawtrace)\n- [Ask HN: What are you using OpenClaw or agents for?](https://news.ycombinator.com/item?id=47758502)",
      "content_html": "<p>Between today's <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.14\">2026.4.14 release</a> and the ClawTrace debut, it has been a busy day in the OpenClaw ecosystem. But a handful of community projects and discussions are also worth surfacing.</p><h2>AutoClaw: Free Open-Source Deployment Playbooks</h2><p><a href=\"https://autoclaw.sh\">AutoClaw.sh</a> — which appeared on Hacker News today as \"Toward an Open-Source Playbook for OpenClaw Deployment\" (<a href=\"https://news.ycombinator.com/item?id=47764352\">#47764352</a>) — is a growing collection of practical, opinionated deployment guides. The site currently covers:</p><ul><li><strong>Hosting options</strong> — local machine vs. cloud servers, with costs, pros/cons, and technical requirements</li><li><strong>Cloudflare Workers</strong> — deploying OpenClaw using Sandbox containers, Cloudflare Access, and optional R2 persistence</li><li><strong>Google Workspace integration</strong> — connecting Gmail, Calendar, and Drive via gogcli</li><li><strong>Multi-agent workflows</strong> — designing specialist agents, orchestrators, handoffs, and approval points</li><li><strong>Running locally</strong> — Docker Compose, local volumes, and the built-in gateway UI</li><li><strong>Autonomous SRE agent on GKE</strong> — a step-by-step guide for Cloudflare Workers-based incident investigation connected to a private GKE cluster</li><li><strong>When to use OpenClaw (and when not to)</strong> — a decision framework for choosing between OpenClaw, a custom agent loop, Cloudflare Workers AI, or plain LLM API calls</li></ul><p>All content is freely accessible. This is exactly the kind of practical deployment documentation the community has been asking for, filling a gap between the official docs and real production setups.</p><h2>Ask HN: What Are You Using OpenClaw For?</h2><p>An Ask HN thread (<a href=\"https://news.ycombinator.com/item?id=47758502\">#47758502</a>) has been collecting real use cases from operators over the past day. With 8 points and several comments, the responses reflect the range of things people are actually running:</p><ul><li>Personal assistant workflows covering email, calendar, and research</li><li>Home automation pipelines that chain sensors, notifications, and external APIs</li><li>Internal tools for engineering teams (triage, on-call summaries, changelog drafts)</li><li>Long-running background agents for data processing and monitoring</li></ul><p>The thread is a useful read if you are evaluating OpenClaw for a new project or looking for deployment patterns from people who are already running it in production.</p><h2>RedCrab: \"What If Claude Code and OpenClaw Had a Child?\"</h2><p>A project called <a href=\"https://redcrab.ai\">RedCrab</a> appeared on Hacker News today (<a href=\"https://news.ycombinator.com/item?id=47766906\">#47766906</a>) with an intriguing pitch: what if you combined Claude Code's interactive coding loop with OpenClaw's persistent agent runtime and channel integrations? The project is in early stages but the concept — a coding assistant that lives persistently in your messaging stack and can be directed through normal channels — is generating discussion.</p><h2>Mercury: Multi-Agent Canvas Supporting OpenClaw</h2><p>Also worth noting from yesterday's HN activity: <strong>Mercury</strong> (<a href=\"https://mercury.build\">mercury.build</a>), a no-code agent orchestration canvas, explicitly lists OpenClaw as one of the supported agent adapters alongside Claude Code, Devin, and Manus. The Show HN post (<a href=\"https://news.ycombinator.com/item?id=47758643\">#47758643</a>) raised an interesting architecture question about where memory should live — in the orchestration layer or the individual agent — that is directly relevant to OpenClaw operators thinking about multi-agent setups.</p><h2>Quick Links</h2><ul><li><a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.14\">OpenClaw 2026.4.14 Release Notes</a></li><li><a href=\"https://autoclaw.sh\">AutoClaw Playbooks</a></li><li><a href=\"https://github.com/epsilla-cloud/clawtrace\">ClawTrace on GitHub</a></li><li><a href=\"https://news.ycombinator.com/item?id=47758502\">Ask HN: What are you using OpenClaw or agents for?</a></li></ul>",
      "date_published": "2026-04-14T23:00:00.000Z",
      "date_modified": "2026-04-14T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-community-roundup-april-14-2026.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-14-security-ssrf-redos-browser-teams/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-14-security-ssrf-redos-browser-teams/",
      "title": "OpenClaw Security Patches: SSRF, ReDoS, and Allowlist Hardening",
      "summary": "A fresh OpenClaw pre-release drops five targeted security fixes: a ReDoS patch in the Control UI, SSRF enforcement on browser routes, heartbeat trust downgrade, Teams allowlist hardening, and config field redaction.",
      "content_text": "Less than two hours after [v2026.4.12 landed](https://github.com/openclaw/openclaw/releases/tag/v2026.4.12), a new OpenClaw pre-release pushed to GitHub at 02:07 UTC on April 14 — carrying a focused set of security patches that address five distinct attack surfaces. If you run the Control UI, the browser tool, Microsoft Teams, or any setup that processes untrusted webhook events, you want these fixes.\n\n## ReDoS in the Control UI Chat Renderer\n\nThe most user-facing fix is a swap of the markdown renderer inside the Control UI. OpenClaw's webchat was using [marked.js](https://github.com/markedjs/marked) to parse assistant markdown responses; a maliciously crafted message with certain regex-heavy patterns could trigger a catastrophic backtrack and freeze the browser tab entirely — a classic [ReDoS (Regular Expression Denial of Service)](https://owasp.org/www-community/attacks/ReDoS) attack.\n\nThe fix ([#46707](https://github.com/openclaw/openclaw/pull/46707), thanks [@zhangfnf](https://github.com/zhangfnf)) replaces marked.js with [markdown-it](https://github.com/markdown-it/markdown-it), which does not use backtracking-prone regexes for its core parse paths. Anyone exposing the Control UI to untrusted channels — or running a shared or multi-user gateway — should prioritize this update.\n\n## SSRF Enforcement on Browser Routes\n\nOpenClaw's browser tool (snapshot, screenshot, and tab operations) was not consistently applying the server-side request forgery (SSRF) policy when handling CDP-sourced URLs. Fix [#66040](https://github.com/openclaw/openclaw/pull/66040) ([@pgondhi987](https://github.com/pgondhi987)) enforces the SSRF policy across all three routes.\n\nA companion fix ([#66043](https://github.com/openclaw/openclaw/pull/66043), [#66080](https://github.com/openclaw/openclaw/pull/66080)) also allows the managed local Chrome process's own loopback control plane to bypass SSRF checks — because OpenClaw was incorrectly classifying its own child browser as \"not reachable\" under strict default policy. The net result: stricter SSRF enforcement for external URLs, working local Chrome management.\n\n## Heartbeat Trust: Untrusted hook:wake Events\n\n[#66031](https://github.com/openclaw/openclaw/pull/66031) ([@pgondhi987](https://github.com/pgondhi987)) forces an owner downgrade for system events arriving via `hook:wake`. Previously, an untrusted wake event could run under elevated owner trust if it happened to match session metadata. The fix clamps such events to non-owner trust before any agent turn executes, preventing a crafted external trigger from gaining escalated permissions inside a heartbeat turn.\n\n## Microsoft Teams: Sender Allowlist on SSO Signin Invokes\n\nTeams integration uses SSO (Single Sign-On) invoke actions for adaptive card interactions. The sender allowlist checks were not being applied to SSO signin invoke paths — meaning a message crafted to look like an SSO invoke could bypass the `allowFrom` filter. Fix [#66033](https://github.com/openclaw/openclaw/pull/66033) ([@pgondhi987](https://github.com/pgondhi987)) enforces allowlist evaluation on this path.\n\n## Config Snapshot: sourceConfig and runtimeConfig Redaction\n\nOpenClaw's `redactConfigSnapshot` function — used when sharing debug state or writing diagnostic output — was not stripping `sourceConfig` and `runtimeConfig` alias fields. These fields can contain provider credentials and channel secrets. Fix [#66030](https://github.com/openclaw/openclaw/pull/66030) ([@pgondhi987](https://github.com/pgondhi987)) ensures both fields are redacted alongside the existing credential redaction paths.\n\n## Feishu Allowlist Canonicalization\n\nA subtler fix ([#66021](https://github.com/openclaw/openclaw/pull/66021), [@eleqtrizit](https://github.com/eleqtrizit)) cleans up how Feishu allowlist entries are matched. Previously, allowlist entries were being case-folded and prefix-stripped inconsistently, which could cause user IDs and chat IDs to collide across namespaces — widening allowlist matches beyond what the operator intended. Entries are now canonicalized by explicit `user`/`chat` kind before matching.\n\n## Other Notable Fixes in This Build\n\nBeyond the security patches, the pre-release includes several operational fixes worth knowing about:\n\n- **Cron scheduler stability** ([#66083](https://github.com/openclaw/openclaw/pull/66083), [#66113](https://github.com/openclaw/openclaw/pull/66113)): The cron engine was inventing short retry loops when no valid future slot could be calculated, and could resume errored jobs too early after a transient failure. Both behaviors are corrected.\n- **Gateway session routing** ([#66073](https://github.com/openclaw/openclaw/pull/66073)): Heartbeat, cron-event, and exec-event turns were overwriting shared-session routing metadata, meaning a synthetic heartbeat target could poison later delivery for real user turns.\n- **Memory/Active Memory** ([#66144](https://github.com/openclaw/openclaw/pull/66144)): Recalled memories are now placed on the hidden untrusted prompt-prefix path rather than injected into the system prompt, reducing attack surface for memory-poisoning via stored recall.\n- **Dreaming sweep guard** ([#66139](https://github.com/openclaw/openclaw/pull/66139)): The dreaming engine now requires a live queued event before running its sweep, preventing it from replaying on later heartbeats after the scheduled run was already consumed.\n- **GPT-5.4 compatibility**: The `minimal` thinking preset is now mapped to OpenAI's supported `low` reasoning effort for GPT-5.4 requests, so embedded runs stop failing request validation.\n\n## How to Update\n\nThis is a pre-release build. If you want the security fixes now, pull it explicitly:\n\n```bash\nnpm install -g openclaw@next\n```\n\nOr wait for the next stable release, which will include all of these patches. Track the [OpenClaw releases page](https://github.com/openclaw/openclaw/releases) for the stable tag.\n\nThe five security-class fixes in this build cover meaningfully different surfaces — UI rendering, browser tool SSRF, event trust, Teams allowlists, and config redaction — making this one of the more security-dense pre-releases in recent months.",
      "content_html": "<p>Less than two hours after <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.12\">v2026.4.12 landed</a>, a new OpenClaw pre-release pushed to GitHub at 02:07 UTC on April 14 — carrying a focused set of security patches that address five distinct attack surfaces. If you run the Control UI, the browser tool, Microsoft Teams, or any setup that processes untrusted webhook events, you want these fixes.</p><h2>ReDoS in the Control UI Chat Renderer</h2><p>The most user-facing fix is a swap of the markdown renderer inside the Control UI. OpenClaw's webchat was using <a href=\"https://github.com/markedjs/marked\">marked.js</a> to parse assistant markdown responses; a maliciously crafted message with certain regex-heavy patterns could trigger a catastrophic backtrack and freeze the browser tab entirely — a classic <a href=\"https://owasp.org/www-community/attacks/ReDoS\">ReDoS (Regular Expression Denial of Service)</a> attack.</p><p>The fix (<a href=\"https://github.com/openclaw/openclaw/pull/46707\">#46707</a>, thanks <a href=\"https://github.com/zhangfnf\">@zhangfnf</a>) replaces marked.js with <a href=\"https://github.com/markdown-it/markdown-it\">markdown-it</a>, which does not use backtracking-prone regexes for its core parse paths. Anyone exposing the Control UI to untrusted channels — or running a shared or multi-user gateway — should prioritize this update.</p><h2>SSRF Enforcement on Browser Routes</h2><p>OpenClaw's browser tool (snapshot, screenshot, and tab operations) was not consistently applying the server-side request forgery (SSRF) policy when handling CDP-sourced URLs. Fix <a href=\"https://github.com/openclaw/openclaw/pull/66040\">#66040</a> (<a href=\"https://github.com/pgondhi987\">@pgondhi987</a>) enforces the SSRF policy across all three routes.</p><p>A companion fix (<a href=\"https://github.com/openclaw/openclaw/pull/66043\">#66043</a>, <a href=\"https://github.com/openclaw/openclaw/pull/66080\">#66080</a>) also allows the managed local Chrome process's own loopback control plane to bypass SSRF checks — because OpenClaw was incorrectly classifying its own child browser as \"not reachable\" under strict default policy. The net result: stricter SSRF enforcement for external URLs, working local Chrome management.</p><h2>Heartbeat Trust: Untrusted hook:wake Events</h2><p><a href=\"https://github.com/openclaw/openclaw/pull/66031\">#66031</a> (<a href=\"https://github.com/pgondhi987\">@pgondhi987</a>) forces an owner downgrade for system events arriving via <code>hook:wake</code>. Previously, an untrusted wake event could run under elevated owner trust if it happened to match session metadata. The fix clamps such events to non-owner trust before any agent turn executes, preventing a crafted external trigger from gaining escalated permissions inside a heartbeat turn.</p><h2>Microsoft Teams: Sender Allowlist on SSO Signin Invokes</h2><p>Teams integration uses SSO (Single Sign-On) invoke actions for adaptive card interactions. The sender allowlist checks were not being applied to SSO signin invoke paths — meaning a message crafted to look like an SSO invoke could bypass the <code>allowFrom</code> filter. Fix <a href=\"https://github.com/openclaw/openclaw/pull/66033\">#66033</a> (<a href=\"https://github.com/pgondhi987\">@pgondhi987</a>) enforces allowlist evaluation on this path.</p><h2>Config Snapshot: sourceConfig and runtimeConfig Redaction</h2><p>OpenClaw's <code>redactConfigSnapshot</code> function — used when sharing debug state or writing diagnostic output — was not stripping <code>sourceConfig</code> and <code>runtimeConfig</code> alias fields. These fields can contain provider credentials and channel secrets. Fix <a href=\"https://github.com/openclaw/openclaw/pull/66030\">#66030</a> (<a href=\"https://github.com/pgondhi987\">@pgondhi987</a>) ensures both fields are redacted alongside the existing credential redaction paths.</p><h2>Feishu Allowlist Canonicalization</h2><p>A subtler fix (<a href=\"https://github.com/openclaw/openclaw/pull/66021\">#66021</a>, <a href=\"https://github.com/eleqtrizit\">@eleqtrizit</a>) cleans up how Feishu allowlist entries are matched. Previously, allowlist entries were being case-folded and prefix-stripped inconsistently, which could cause user IDs and chat IDs to collide across namespaces — widening allowlist matches beyond what the operator intended. Entries are now canonicalized by explicit <code>user</code>/<code>chat</code> kind before matching.</p><h2>Other Notable Fixes in This Build</h2><p>Beyond the security patches, the pre-release includes several operational fixes worth knowing about:</p><ul><li><strong>Cron scheduler stability</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/66083\">#66083</a>, <a href=\"https://github.com/openclaw/openclaw/pull/66113\">#66113</a>): The cron engine was inventing short retry loops when no valid future slot could be calculated, and could resume errored jobs too early after a transient failure. Both behaviors are corrected.</li><li><strong>Gateway session routing</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/66073\">#66073</a>): Heartbeat, cron-event, and exec-event turns were overwriting shared-session routing metadata, meaning a synthetic heartbeat target could poison later delivery for real user turns.</li><li><strong>Memory/Active Memory</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/66144\">#66144</a>): Recalled memories are now placed on the hidden untrusted prompt-prefix path rather than injected into the system prompt, reducing attack surface for memory-poisoning via stored recall.</li><li><strong>Dreaming sweep guard</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/66139\">#66139</a>): The dreaming engine now requires a live queued event before running its sweep, preventing it from replaying on later heartbeats after the scheduled run was already consumed.</li><li><strong>GPT-5.4 compatibility</strong>: The <code>minimal</code> thinking preset is now mapped to OpenAI's supported <code>low</code> reasoning effort for GPT-5.4 requests, so embedded runs stop failing request validation.</li></ul><h2>How to Update</h2><p>This is a pre-release build. If you want the security fixes now, pull it explicitly:</p><p>``<code>bash<br />npm install -g openclaw@next<br /></code>``</p><p>Or wait for the next stable release, which will include all of these patches. Track the <a href=\"https://github.com/openclaw/openclaw/releases\">OpenClaw releases page</a> for the stable tag.</p><p>The five security-class fixes in this build cover meaningfully different surfaces — UI rendering, browser tool SSRF, event trust, Teams allowlists, and config redaction — making this one of the more security-dense pre-releases in recent months.</p>",
      "date_published": "2026-04-14T08:00:00.000Z",
      "date_modified": "2026-04-14T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-14-security-ssrf-redos-browser-teams.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-14-mercury-agent-orchestration/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-14-mercury-agent-orchestration/",
      "title": "Mercury Adds OpenClaw Adapter in a16z-Backed Agent Platform",
      "summary": "Mercury, a no-code agent orchestration canvas backed by a16z, lists OpenClaw as a first-class adapter alongside Claude Code, Devin, and Manus.",
      "content_text": "A [Show HN post from last night](https://news.ycombinator.com/item?id=47758643) caught our attention: **Mercury** (mercury.build), a new a16z-backed agent orchestration platform, lists OpenClaw as a first-class supported agent type alongside Claude Code, Devin, Manus, and Gumloop.\n\nThis is a notable signal for the OpenClaw ecosystem. It means teams building multi-agent systems are already treating OpenClaw as a peer to purpose-built coding agents in the orchestration layer.\n\n## What Mercury Does\n\nMercury is a visual canvas for connecting human and AI agent teams. You draw edges between agents to define delegation relationships — Agent A can delegate to Agent B, which can delegate further down the graph. The canvas becomes a live map of how your team operates.\n\nThe founder's pitch centers on a problem anyone who has run multiple agents knows well: \"You've got Claude Code in a terminal, a research agent in a browser tab, a Slack bot somewhere else, a scheduling assistant in yet another window. It's chaotic.\"\n\nMercury aims to solve the coordination problem, not the individual-agent problem.\n\n## OpenClaw as an Adapter\n\nMercury supports several agent types on the same canvas:\n\n- **Native Mercury agents** — built on the Anthropic SDK\n- **Adapters** for Claude Code, Devin, Manus, OpenClaw, Gumloop\n- **MCP-compatible agents** via the MCP protocol\n\nThe OpenClaw adapter means you can connect your existing OpenClaw instance to a Mercury canvas and have it receive delegated tasks, process them with its full tool stack, and return results to orchestrating agents or humans.\n\nThis fits naturally with how OpenClaw works — it already handles delegation internally through its sub-agent system. Being a node in a larger Mercury-managed graph extends that pattern to cross-tool teams.\n\n## Why This Matters\n\nOpenClaw being included at launch (not added later) suggests the Mercury team built against it deliberately. The platform integrates 800+ tools via Composio, supports iMessage and Slack as human-facing channels, and has human-in-the-loop approval by default — all patterns familiar to OpenClaw users.\n\nThe a16z backing ($1.5M seed, with investors from OpenAI and Cognition) gives Mercury some runway to ship. Whether it finds product-market fit is TBD, but the fact that an a16z portfolio company is shipping OpenClaw support from day one is a sign of where the ecosystem is heading.\n\n## The Memory Architecture Question\n\nThe most interesting part of the Show HN post is the question the Mercury team is wrestling with publicly:\n\n> \"Where should memory live — in the orchestration layer or the agent layer?\"\n\nIt's a genuinely hard problem. OpenClaw's approach leans toward the agent layer — memory lives close to the agent that uses it, with recall running right before each reply (especially now that Active Memory shipped in 2026.4.12). Mercury started with org-level memory exposed as tools, but acknowledges that's not right for every agent.\n\nThis is an open design question for the entire multi-agent space. Worth watching as both platforms evolve.\n\n## Try Mercury\n\nMercury is in alpha: [mercury.build](https://www.mercury.build/). If you're running OpenClaw and want to connect it to a larger agent graph, it's worth a look.",
      "content_html": "<p>A <a href=\"https://news.ycombinator.com/item?id=47758643\">Show HN post from last night</a> caught our attention: <strong>Mercury</strong> (mercury.build), a new a16z-backed agent orchestration platform, lists OpenClaw as a first-class supported agent type alongside Claude Code, Devin, Manus, and Gumloop.</p><p>This is a notable signal for the OpenClaw ecosystem. It means teams building multi-agent systems are already treating OpenClaw as a peer to purpose-built coding agents in the orchestration layer.</p><h2>What Mercury Does</h2><p>Mercury is a visual canvas for connecting human and AI agent teams. You draw edges between agents to define delegation relationships — Agent A can delegate to Agent B, which can delegate further down the graph. The canvas becomes a live map of how your team operates.</p><p>The founder's pitch centers on a problem anyone who has run multiple agents knows well: \"You've got Claude Code in a terminal, a research agent in a browser tab, a Slack bot somewhere else, a scheduling assistant in yet another window. It's chaotic.\"</p><p>Mercury aims to solve the coordination problem, not the individual-agent problem.</p><h2>OpenClaw as an Adapter</h2><p>Mercury supports several agent types on the same canvas:</p><ul><li><strong>Native Mercury agents</strong> — built on the Anthropic SDK</li><li><strong>Adapters</strong> for Claude Code, Devin, Manus, OpenClaw, Gumloop</li><li><strong>MCP-compatible agents</strong> via the MCP protocol</li></ul><p>The OpenClaw adapter means you can connect your existing OpenClaw instance to a Mercury canvas and have it receive delegated tasks, process them with its full tool stack, and return results to orchestrating agents or humans.</p><p>This fits naturally with how OpenClaw works — it already handles delegation internally through its sub-agent system. Being a node in a larger Mercury-managed graph extends that pattern to cross-tool teams.</p><h2>Why This Matters</h2><p>OpenClaw being included at launch (not added later) suggests the Mercury team built against it deliberately. The platform integrates 800+ tools via Composio, supports iMessage and Slack as human-facing channels, and has human-in-the-loop approval by default — all patterns familiar to OpenClaw users.</p><p>The a16z backing ($1.5M seed, with investors from OpenAI and Cognition) gives Mercury some runway to ship. Whether it finds product-market fit is TBD, but the fact that an a16z portfolio company is shipping OpenClaw support from day one is a sign of where the ecosystem is heading.</p><h2>The Memory Architecture Question</h2><p>The most interesting part of the Show HN post is the question the Mercury team is wrestling with publicly:</p><p>> \"Where should memory live — in the orchestration layer or the agent layer?\"</p><p>It's a genuinely hard problem. OpenClaw's approach leans toward the agent layer — memory lives close to the agent that uses it, with recall running right before each reply (especially now that Active Memory shipped in 2026.4.12). Mercury started with org-level memory exposed as tools, but acknowledges that's not right for every agent.</p><p>This is an open design question for the entire multi-agent space. Worth watching as both platforms evolve.</p><h2>Try Mercury</h2><p>Mercury is in alpha: <a href=\"https://www.mercury.build/\">mercury.build</a>. If you're running OpenClaw and want to connect it to a larger agent graph, it's worth a look.</p>",
      "date_published": "2026-04-14T01:30:00.000Z",
      "date_modified": "2026-04-14T01:30:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "News"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-14-mercury-agent-orchestration.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-14-shell-exec-security-hardening/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-14-shell-exec-security-hardening/",
      "title": "OpenClaw Security: Shell Injection, Busybox, and Approver Fixes",
      "summary": "Three security patches in OpenClaw 2026.4.12 close shell-wrapper injection, a busybox exec bypass, and an empty-approver authorization hole.",
      "content_text": "OpenClaw 2026.4.12, released April 13, ships three security patches alongside its feature work. All three come from [@pgondhi987](https://github.com/pgondhi987) and address real execution-boundary issues — not theoretical edge cases. If you run OpenClaw in any multi-user, multi-agent, or internet-exposed configuration, these are worth understanding.\n\n## Patch 1: Busybox Removed from Safe Exec Bins\n\n**PR [#65713](https://github.com/openclaw/openclaw/pull/65713)**\n\nOpenClaw's exec approval system maintains a list of \"safe\" binaries that can run without triggering the approval gate. `busybox` (and its cousin `toybox`) were on that list — a mistake, because both are multi-call binaries that expose dozens of commands behind a single executable name.\n\nCalling `busybox sh` or `busybox awk` is functionally equivalent to calling `sh` or `awk` directly. Including it on the safe list meant the entire POSIX toolbox was reachable through a single whitelisted binary name — defeating the purpose of the allowlist.\n\n**Fix**: busybox and toybox are now blocked outright. If you have automation that relies on explicit `busybox <cmd>` calls, migrate to the native binary (`sh`, `cat`, `sed`, etc.) — those are evaluated individually against the policy.\n\n## Patch 2: Empty Approver List No Longer Grants Authorization\n\n**PR [#65714](https://github.com/openclaw/openclaw/pull/65714)**\n\nOpenClaw's approval system checks whether the requesting entity is in the configured approver list before granting authorization. There was a logic inversion: if the approver list was empty (typically through misconfiguration or an admin forgetting to set it), the check evaluated to \"no one is unauthorized\" and passed.\n\nThis is the kind of fail-open bug that's easy to miss in testing because a properly configured system never hits the empty-list path. But in fresh installs, misconfigured deployments, or after a config reset, the approver list can legitimately be empty.\n\n**Fix**: empty approver list now explicitly denies approval authorization. There's no ambiguous state — if no approvers are configured, nothing is approved until you configure them.\n\n## Patch 3: Shell-Wrapper Injection and env-argv Assignment Blocked\n\n**PR [#65717](https://github.com/openclaw/openclaw/pull/65717)**\n\nThis is the most technically interesting of the three. OpenClaw detects shell-wrapper invocations to prevent commands from being routed through interpreters that bypass exec policy. The previous detection was narrow — it caught obvious cases like `bash -c \"...\"` but missed subtler patterns.\n\nTwo specific attack surfaces are closed here:\n\n1. **Broader shell-wrapper detection** — The detection heuristics now cover a wider range of shell-like invocation patterns, including indirect routes through wrapper scripts and forwarding binaries.\n\n2. **env-argv assignment injection** — The `env` command supports a `-` separator and `VAR=value` positional arguments that let you set environment variables inline before executing a command: `env VAR=value program`. This can be used to inject variables that modify how the target program behaves — including some that affect interpreter selection or security-relevant paths. This injection vector is now blocked.\n\n**Impact**: These patches matter most for OpenClaw instances that process external input — webhooks, public-facing chat interfaces, or any surface where untrusted content could influence what gets executed.\n\n## Recommended Action\n\nUpdate to 2026.4.12 immediately if your instance:\n\n- Runs on a network-accessible host\n- Processes webhooks, Discord messages, or any external input\n- Has `tools.exec.allow` configured (even if it's locked down)\n- Runs in a multi-user or shared environment\n\n```bash\nopenclaw update\n```\n\nVerify your exec policy after updating:\n\n```bash\nopenclaw exec-policy show\n```\n\nThe new `exec-policy` command ([#64050](https://github.com/openclaw/openclaw/pull/64050), also in this release) makes it easy to review your current policy and sync it against your config.\n\n## Credit\n\nAll three patches were contributed by [@pgondhi987](https://github.com/pgondhi987). Security contributions like these keep the project hardened — if you find issues, the OpenClaw team accepts responsible disclosure through the standard GitHub security advisory flow.",
      "content_html": "<p>OpenClaw 2026.4.12, released April 13, ships three security patches alongside its feature work. All three come from <a href=\"https://github.com/pgondhi987\">@pgondhi987</a> and address real execution-boundary issues — not theoretical edge cases. If you run OpenClaw in any multi-user, multi-agent, or internet-exposed configuration, these are worth understanding.</p><h2>Patch 1: Busybox Removed from Safe Exec Bins</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/65713\">#65713</a></strong></p><p>OpenClaw's exec approval system maintains a list of \"safe\" binaries that can run without triggering the approval gate. <code>busybox</code> (and its cousin <code>toybox</code>) were on that list — a mistake, because both are multi-call binaries that expose dozens of commands behind a single executable name.</p><p>Calling <code>busybox sh</code> or <code>busybox awk</code> is functionally equivalent to calling <code>sh</code> or <code>awk</code> directly. Including it on the safe list meant the entire POSIX toolbox was reachable through a single whitelisted binary name — defeating the purpose of the allowlist.</p><p><strong>Fix</strong>: busybox and toybox are now blocked outright. If you have automation that relies on explicit <code>busybox <cmd></code> calls, migrate to the native binary (<code>sh</code>, <code>cat</code>, <code>sed</code>, etc.) — those are evaluated individually against the policy.</p><h2>Patch 2: Empty Approver List No Longer Grants Authorization</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/65714\">#65714</a></strong></p><p>OpenClaw's approval system checks whether the requesting entity is in the configured approver list before granting authorization. There was a logic inversion: if the approver list was empty (typically through misconfiguration or an admin forgetting to set it), the check evaluated to \"no one is unauthorized\" and passed.</p><p>This is the kind of fail-open bug that's easy to miss in testing because a properly configured system never hits the empty-list path. But in fresh installs, misconfigured deployments, or after a config reset, the approver list can legitimately be empty.</p><p><strong>Fix</strong>: empty approver list now explicitly denies approval authorization. There's no ambiguous state — if no approvers are configured, nothing is approved until you configure them.</p><h2>Patch 3: Shell-Wrapper Injection and env-argv Assignment Blocked</h2><p><strong>PR <a href=\"https://github.com/openclaw/openclaw/pull/65717\">#65717</a></strong></p><p>This is the most technically interesting of the three. OpenClaw detects shell-wrapper invocations to prevent commands from being routed through interpreters that bypass exec policy. The previous detection was narrow — it caught obvious cases like <code>bash -c \"...\"</code> but missed subtler patterns.</p><p>Two specific attack surfaces are closed here:</p><ol><li><strong>Broader shell-wrapper detection</strong> — The detection heuristics now cover a wider range of shell-like invocation patterns, including indirect routes through wrapper scripts and forwarding binaries.</li></ol><ol><li><strong>env-argv assignment injection</strong> — The <code>env</code> command supports a <code>-</code> separator and <code>VAR=value</code> positional arguments that let you set environment variables inline before executing a command: <code>env VAR=value program</code>. This can be used to inject variables that modify how the target program behaves — including some that affect interpreter selection or security-relevant paths. This injection vector is now blocked.</li></ol><p><strong>Impact</strong>: These patches matter most for OpenClaw instances that process external input — webhooks, public-facing chat interfaces, or any surface where untrusted content could influence what gets executed.</p><h2>Recommended Action</h2><p>Update to 2026.4.12 immediately if your instance:</p><ul><li>Runs on a network-accessible host</li><li>Processes webhooks, Discord messages, or any external input</li><li>Has <code>tools.exec.allow</code> configured (even if it's locked down)</li><li>Runs in a multi-user or shared environment</li></ul><p>``<code>bash<br />openclaw update<br /></code>`<code></p><p>Verify your exec policy after updating:</p><p></code>`<code>bash<br />openclaw exec-policy show<br /></code>`<code></p><p>The new </code>exec-policy` command (<a href=\"https://github.com/openclaw/openclaw/pull/64050\">#64050</a>, also in this release) makes it easy to review your current policy and sync it against your config.</p><h2>Credit</h2><p>All three patches were contributed by <a href=\"https://github.com/pgondhi987\">@pgondhi987</a>. Security contributions like these keep the project hardened — if you find issues, the OpenClaw team accepts responsible disclosure through the standard GitHub security advisory flow.</p>",
      "date_published": "2026-04-14T01:00:00.000Z",
      "date_modified": "2026-04-14T01:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-14-shell-exec-security-hardening.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-13-release-active-memory-lm-studio/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-13-release-active-memory-lm-studio/",
      "title": "OpenClaw 2026.4.12: Active Memory, LM Studio, and MLX Talk",
      "summary": "OpenClaw 2026.4.12 ships a dedicated Active Memory sub-agent, native LM Studio support, MLX local speech for macOS, and three security patches.",
      "content_text": "OpenClaw's April quality push landed today. The **2026.4.12** release — tagged April 13 at 12:35 UTC — is a broad \"make everything more reliable\" drop covering memory, local models, speech, plugin loading, and three security hardening patches. Here's what's new.\n\n## Active Memory: Automatic Recall Before Every Reply\n\nThe headline feature is **Active Memory** ([#63286](https://github.com/openclaw/openclaw/pull/63286)), contributed by [@Takhoffman](https://github.com/Takhoffman). Rather than requiring users to remember to say \"search memory\" or \"remember this,\" OpenClaw now optionally runs a dedicated memory sub-agent right before the main reply — automatically pulling in relevant preferences, past context, and details from your memory store.\n\nThree configurable context modes ship with it:\n\n- **message** — recall only against the current message\n- **recent** — recall against recent conversation context\n- **full** — full context window recall\n\nYou can tune the recall sub-agent's prompt and thinking level independently from your main agent, inspect what it retrieved with `/verbose`, and opt-in to transcript persistence for debugging. A follow-up PR ([#65068](https://github.com/openclaw/openclaw/pull/65068)) defaults QMD recall to search mode so recall works predictably without extra configuration.\n\nThis is one of the most-requested UX improvements in OpenClaw's memory layer — the difference between memory that works and memory that requires babysitting.\n\n## LM Studio Gets a Native Provider\n\n[@rugvedS07](https://github.com/rugvedS07) contributed a full **LM Studio provider** ([#53248](https://github.com/openclaw/openclaw/pull/53248)) — not a generic OpenAI-compatible shim, but a proper bundled provider with:\n\n- Guided onboarding flow\n- Runtime model discovery (no manual model IDs)\n- Stream preload support for faster first tokens\n- Memory-search embeddings for local recall\n\nIf you've been running LM Studio alongside OpenClaw with a manual `openai-compatible` config, this is worth migrating to. The runtime model discovery alone eliminates a common friction point when switching local models.\n\n## MLX Speech for macOS Talk Mode\n\n[@ImLukeF](https://github.com/ImLukeF) added an **experimental MLX speech provider** for Talk Mode on macOS ([#63539](https://github.com/openclaw/openclaw/pull/63539)). This runs speech synthesis entirely locally using Apple Silicon's MLX framework, with:\n\n- Explicit provider selection (`mlx` vs system voice vs cloud)\n- Local utterance playback and interruption handling\n- System-voice fallback when MLX isn't available\n\nOn Apple Silicon, this should be noticeably faster than cloud TTS for interactive voice sessions — and it's fully offline.\n\n## Codex Bundled Provider\n\n[@steipete](https://github.com/steipete) contributed the **Codex bundled provider and plugin-owned app-server harness** ([#64298](https://github.com/openclaw/openclaw/pull/64298)). The key distinction: `codex/gpt-*` models now use Codex-managed auth and native threads, while `openai/gpt-*` continues through the standard OpenAI provider path. They're no longer the same pipe.\n\n## Plugin Loading Overhaul\n\nA significant cleanup from [@vincentkoc](https://github.com/vincentkoc) across five PRs ([#65120](https://github.com/openclaw/openclaw/pull/65120), [#65259](https://github.com/openclaw/openclaw/pull/65259), [#65298](https://github.com/openclaw/openclaw/pull/65298), [#65429](https://github.com/openclaw/openclaw/pull/65429), [#65459](https://github.com/openclaw/openclaw/pull/65459)) narrows CLI, provider, and channel activation to only what each plugin's manifest declares it needs. The result: leaner startup, faster command discovery, and no more loading unrelated plugin runtimes at startup.\n\n## Gateway: Command Discovery RPC\n\n[@samzong](https://github.com/samzong) added a `commands.list` RPC to the gateway ([#62656](https://github.com/openclaw/openclaw/pull/62656)) — remote clients can now discover runtime-native commands, skill aliases, and plugin commands with their argument metadata. This is the foundation for better gateway-connected Control UI command palettes and external tooling.\n\n## Other Notable Changes\n\n- **Matrix streaming**: MSC4357 live markers for typewriter animation in supporting Matrix clients ([#63513](https://github.com/openclaw/openclaw/pull/63513))\n- **Per-provider private network**: `models.providers.*.request.allowPrivateNetwork` for trusted self-hosted endpoints ([#63671](https://github.com/openclaw/openclaw/pull/63671))\n- **QA/Multipass**: run QA suites inside a disposable Linux VM ([#63426](https://github.com/openclaw/openclaw/pull/63426))\n- **Dreaming reliability**: fixed double-ingestion of dream transcripts, heartbeat event deduplication, and narrative cleanup hardening\n- **Memory/wiki Unicode**: non-ASCII titles no longer collapse or overflow path limits ([#64742](https://github.com/openclaw/openclaw/pull/64742))\n\n## Security Patches\n\nThree security patches ship in this release, all from [@pgondhi987](https://github.com/pgondhi987):\n\n- **busybox/toybox removed from safe exec bins** ([#65713](https://github.com/openclaw/openclaw/pull/65713)) — busybox was functioning as an interpreter bypass; it's now blocked\n- **Empty approver list no longer grants approval** ([#65714](https://github.com/openclaw/openclaw/pull/65714)) — a misconfigured empty approver list previously granted implicit authorization\n- **Shell-wrapper injection blocked** ([#65717](https://github.com/openclaw/openclaw/pull/65717)) — broader shell-wrapper detection and env-argv assignment injection prevention\n\nAll three are in the hardening category — updating is recommended for any instance that processes untrusted input or runs in a multi-user environment.\n\n## Upgrading\n\n```bash\nopenclaw update\n```\n\nFull changelog and release notes: [github.com/openclaw/openclaw/releases](https://github.com/openclaw/openclaw/releases)",
      "content_html": "<p>OpenClaw's April quality push landed today. The <strong>2026.4.12</strong> release — tagged April 13 at 12:35 UTC — is a broad \"make everything more reliable\" drop covering memory, local models, speech, plugin loading, and three security hardening patches. Here's what's new.</p><h2>Active Memory: Automatic Recall Before Every Reply</h2><p>The headline feature is <strong>Active Memory</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/63286\">#63286</a>), contributed by <a href=\"https://github.com/Takhoffman\">@Takhoffman</a>. Rather than requiring users to remember to say \"search memory\" or \"remember this,\" OpenClaw now optionally runs a dedicated memory sub-agent right before the main reply — automatically pulling in relevant preferences, past context, and details from your memory store.</p><p>Three configurable context modes ship with it:</p><ul><li><strong>message</strong> — recall only against the current message</li><li><strong>recent</strong> — recall against recent conversation context</li><li><strong>full</strong> — full context window recall</li></ul><p>You can tune the recall sub-agent's prompt and thinking level independently from your main agent, inspect what it retrieved with <code>/verbose</code>, and opt-in to transcript persistence for debugging. A follow-up PR (<a href=\"https://github.com/openclaw/openclaw/pull/65068\">#65068</a>) defaults QMD recall to search mode so recall works predictably without extra configuration.</p><p>This is one of the most-requested UX improvements in OpenClaw's memory layer — the difference between memory that works and memory that requires babysitting.</p><h2>LM Studio Gets a Native Provider</h2><p><a href=\"https://github.com/rugvedS07\">@rugvedS07</a> contributed a full <strong>LM Studio provider</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/53248\">#53248</a>) — not a generic OpenAI-compatible shim, but a proper bundled provider with:</p><ul><li>Guided onboarding flow</li><li>Runtime model discovery (no manual model IDs)</li><li>Stream preload support for faster first tokens</li><li>Memory-search embeddings for local recall</li></ul><p>If you've been running LM Studio alongside OpenClaw with a manual <code>openai-compatible</code> config, this is worth migrating to. The runtime model discovery alone eliminates a common friction point when switching local models.</p><h2>MLX Speech for macOS Talk Mode</h2><p><a href=\"https://github.com/ImLukeF\">@ImLukeF</a> added an <strong>experimental MLX speech provider</strong> for Talk Mode on macOS (<a href=\"https://github.com/openclaw/openclaw/pull/63539\">#63539</a>). This runs speech synthesis entirely locally using Apple Silicon's MLX framework, with:</p><ul><li>Explicit provider selection (<code>mlx</code> vs system voice vs cloud)</li><li>Local utterance playback and interruption handling</li><li>System-voice fallback when MLX isn't available</li></ul><p>On Apple Silicon, this should be noticeably faster than cloud TTS for interactive voice sessions — and it's fully offline.</p><h2>Codex Bundled Provider</h2><p><a href=\"https://github.com/steipete\">@steipete</a> contributed the <strong>Codex bundled provider and plugin-owned app-server harness</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/64298\">#64298</a>). The key distinction: <code>codex/gpt-<em></code> models now use Codex-managed auth and native threads, while <code>openai/gpt-</em></code> continues through the standard OpenAI provider path. They're no longer the same pipe.</p><h2>Plugin Loading Overhaul</h2><p>A significant cleanup from <a href=\"https://github.com/vincentkoc\">@vincentkoc</a> across five PRs (<a href=\"https://github.com/openclaw/openclaw/pull/65120\">#65120</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65259\">#65259</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65298\">#65298</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65429\">#65429</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65459\">#65459</a>) narrows CLI, provider, and channel activation to only what each plugin's manifest declares it needs. The result: leaner startup, faster command discovery, and no more loading unrelated plugin runtimes at startup.</p><h2>Gateway: Command Discovery RPC</h2><p><a href=\"https://github.com/samzong\">@samzong</a> added a <code>commands.list</code> RPC to the gateway (<a href=\"https://github.com/openclaw/openclaw/pull/62656\">#62656</a>) — remote clients can now discover runtime-native commands, skill aliases, and plugin commands with their argument metadata. This is the foundation for better gateway-connected Control UI command palettes and external tooling.</p><h2>Other Notable Changes</h2><ul><li><strong>Matrix streaming</strong>: MSC4357 live markers for typewriter animation in supporting Matrix clients (<a href=\"https://github.com/openclaw/openclaw/pull/63513\">#63513</a>)</li><li><strong>Per-provider private network</strong>: <code>models.providers.*.request.allowPrivateNetwork</code> for trusted self-hosted endpoints (<a href=\"https://github.com/openclaw/openclaw/pull/63671\">#63671</a>)</li><li><strong>QA/Multipass</strong>: run QA suites inside a disposable Linux VM (<a href=\"https://github.com/openclaw/openclaw/pull/63426\">#63426</a>)</li><li><strong>Dreaming reliability</strong>: fixed double-ingestion of dream transcripts, heartbeat event deduplication, and narrative cleanup hardening</li><li><strong>Memory/wiki Unicode</strong>: non-ASCII titles no longer collapse or overflow path limits (<a href=\"https://github.com/openclaw/openclaw/pull/64742\">#64742</a>)</li></ul><h2>Security Patches</h2><p>Three security patches ship in this release, all from <a href=\"https://github.com/pgondhi987\">@pgondhi987</a>:</p><ul><li><strong>busybox/toybox removed from safe exec bins</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/65713\">#65713</a>) — busybox was functioning as an interpreter bypass; it's now blocked</li><li><strong>Empty approver list no longer grants approval</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/65714\">#65714</a>) — a misconfigured empty approver list previously granted implicit authorization</li><li><strong>Shell-wrapper injection blocked</strong> (<a href=\"https://github.com/openclaw/openclaw/pull/65717\">#65717</a>) — broader shell-wrapper detection and env-argv assignment injection prevention</li></ul><p>All three are in the hardening category — updating is recommended for any instance that processes untrusted input or runs in a multi-user environment.</p><h2>Upgrading</h2><p>``<code>bash<br />openclaw update<br /></code>``</p><p>Full changelog and release notes: <a href=\"https://github.com/openclaw/openclaw/releases\">github.com/openclaw/openclaw/releases</a></p>",
      "date_published": "2026-04-13T12:35:00.000Z",
      "date_modified": "2026-04-13T12:35:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-13-release-active-memory-lm-studio.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-13-v2026-4-12-beta-1/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-13-v2026-4-12-beta-1/",
      "title": "OpenClaw v2026.4.12 Beta 1: Plugin Scope and Security Fixes",
      "summary": "OpenClaw v2026.4.12-beta.1 narrows plugin activation, sharpens active-memory QMD recall, and now blocks deployments that use default gateway credentials.",
      "content_text": "OpenClaw dropped [v2026.4.12-beta.1](https://github.com/openclaw/openclaw/releases/tag/v2026.4.12-beta.1) late Sunday night — and while it's a pre-release, it packs a meaningful set of changes across plugin architecture, active memory, and security hardening worth knowing about before the stable drop lands.\n\n## Plugin Loading Gets Scoped Boundaries\n\nThe headline change is a significant rework of how plugins activate at runtime. Previously, plugins could load broader-than-necessary runtimes depending on how the agent was invoked. In v2026.4.12-beta.1, plugin activation is now narrowed to **manifest-declared needs only**.\n\nCLI invocations, provider activations, and channel startups now load exactly what the plugin's manifest declares — nothing more. The change also centralizes manifest-owner policy so startup, command discovery, and runtime activation no longer load unrelated plugin runtimes as a side effect.\n\nThe practical benefits:\n\n- **Security:** smaller activation surface means less unexpected code running per agent turn\n- **Performance:** startup and command discovery skip unused plugin runtimes\n- **Predictability:** plugin behavior is now fully defined by what the manifest declares\n\nBig thanks to [@vincentkoc](https://github.com/vincentkoc) for driving the underlying PR work across [#65120](https://github.com/openclaw/openclaw/pull/65120), [#65259](https://github.com/openclaw/openclaw/pull/65259), [#65298](https://github.com/openclaw/openclaw/pull/65298), [#65429](https://github.com/openclaw/openclaw/pull/65429), and [#65459](https://github.com/openclaw/openclaw/pull/65459).\n\n## Active Memory QMD Recall Defaults to Search\n\nThe Active Memory plugin — [introduced in v2026.4.10](https://docs.openclaw.ai/concepts/active-memory) — gets a notable quality-of-life improvement: QMD recall now **defaults to search mode** out of the box. Previously, enabling this required manual configuration; now it works predictably from a fresh install.\n\nThe fix also surfaces better search-path telemetry. When memory-backed recall behaves unexpectedly, you'll have clearer signals about what happened. Recall runs now stay on the resolved channel when wrappers like mx-claw are enabled, and lexical boosts no longer bleed into hybrid search results — meaning Active Memory finds the right memories more consistently in everyday use.\n\n([#65068](https://github.com/openclaw/openclaw/pull/65068) — thanks [@Takhoffman](https://github.com/Takhoffman))\n\n## Gateway Credentials: Placeholder Tokens Now Block Startup\n\nThis one matters for every self-hoster. Previously, if you copied `.env.example` and forgot to swap out the example gateway token or password, OpenClaw would start anyway — leaving your deployment running on a **publicly known credential**.\n\nIn v2026.4.12-beta.1, that loophole closes. The shipped example credential is now blanked, and if OpenClaw detects a copied placeholder token or password at startup, it **refuses to start** with an explicit error message pointing you to fix it.\n\nThis is a meaningful hardening step for community deployments where operators may not realize the `.env.example` values are placeholders, not safe defaults. If you're upgrading, double-check your gateway token before restarting — you'll get a clear error if anything needs updating.\n\n([#64586](https://github.com/openclaw/openclaw/pull/64586) — thanks [@navarrotech](https://github.com/navarrotech) and [@vincentkoc](https://github.com/vincentkoc))\n\n## Memory and Dreaming Fixes\n\nSeveral reliability issues in the memory and dreaming stack get addressed in this release:\n\n- **Wiki Unicode slugs:** Non-ASCII titles no longer collapse or overflow path limits — Unicode letters, digits, and combining marks are now preserved correctly in wiki slugs and contradiction clustering ([#64742](https://github.com/openclaw/openclaw/pull/64742), thanks [@zhouhe-xydt](https://github.com/zhouhe-xydt))\n- **Nested daily notes:** Files nested under `memory/**/YYYY-MM-DD.md` now feed short-term recall as expected, while dream reports under `memory/dreaming/**` are correctly excluded from self-promotion ([#64682](https://github.com/openclaw/openclaw/pull/64682))\n- **Dreaming diary timestamps:** The diary now uses the host's local timezone when `dreaming.timezone` is unset, and surfaces the timezone abbreviation so DREAMS.md and the UI are unambiguous ([#65034](https://github.com/openclaw/openclaw/pull/65034), [#65057](https://github.com/openclaw/openclaw/pull/65057))\n- **Dreaming light-sleep confidence:** Fixed a long-standing bug where dreaming-only entries showed `confidence: 0.00` by computing staged candidate confidence from all short-term signals, not just recall counts ([#64599](https://github.com/openclaw/openclaw/issues/64599))\n- **Docs/memory-wiki:** The recommended QMD + bridge-mode hybrid recipe plus zero-artifact troubleshooting guidance for memory-wiki bridge setups is now documented ([#63165](https://github.com/openclaw/openclaw/pull/63165))\n\n## Platform and Infrastructure Fixes\n\nThe beta also ships targeted fixes across channels and infrastructure:\n\n- **WhatsApp:** Falls back to the first `mediaUrls` entry when `mediaUrl` is empty, stopping silent attachment drops on gateway media sends ([#64394](https://github.com/openclaw/openclaw/pull/64394))\n- **Telegram:** Approval button callbacks now resolve on a separate sequencer lane, eliminating the deadlock where plugin approval clicks stalled behind a blocked agent turn ([#64979](https://github.com/openclaw/openclaw/pull/64979))\n- **Matrix:** Room mention gating now accepts `@displayName` Matrix URI labels, restoring `requireMention` for non-OpenClaw Matrix clients ([#64796](https://github.com/openclaw/openclaw/pull/64796))\n- **Gateway/keepalive:** WebSocket tick broadcasts are no longer marked as droppable, preventing slow or backpressured clients from self-disconnecting during long-running agent work ([#65256](https://github.com/openclaw/openclaw/issues/65256), [#65436](https://github.com/openclaw/openclaw/pull/65436))\n- **Agents/queueing:** Orphaned user messages that arrive mid-run are now carried into the next prompt rather than being silently dropped ([#65388](https://github.com/openclaw/openclaw/issues/65388))\n- **CLI/update:** The self-update path now respawns from the updated entrypoint after package updates, fixing failures on stale dist chunk imports ([#65471](https://github.com/openclaw/openclaw/pull/65471))\n\n## What to Expect Next\n\nThis is a pre-release — the stable v2026.4.12 follow-on is expected shortly. For self-hosters tracking main closely, all of these changes are now in the beta channel. As always, test in a non-production environment before upgrading gateways that handle live traffic.\n\nFollow the full changelog and PR notes on the [GitHub releases page](https://github.com/openclaw/openclaw/releases/tag/v2026.4.12-beta.1).",
      "content_html": "<p>OpenClaw dropped <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.12-beta.1\">v2026.4.12-beta.1</a> late Sunday night — and while it's a pre-release, it packs a meaningful set of changes across plugin architecture, active memory, and security hardening worth knowing about before the stable drop lands.</p><h2>Plugin Loading Gets Scoped Boundaries</h2><p>The headline change is a significant rework of how plugins activate at runtime. Previously, plugins could load broader-than-necessary runtimes depending on how the agent was invoked. In v2026.4.12-beta.1, plugin activation is now narrowed to <strong>manifest-declared needs only</strong>.</p><p>CLI invocations, provider activations, and channel startups now load exactly what the plugin's manifest declares — nothing more. The change also centralizes manifest-owner policy so startup, command discovery, and runtime activation no longer load unrelated plugin runtimes as a side effect.</p><p>The practical benefits:</p><ul><li><strong>Security:</strong> smaller activation surface means less unexpected code running per agent turn</li><li><strong>Performance:</strong> startup and command discovery skip unused plugin runtimes</li><li><strong>Predictability:</strong> plugin behavior is now fully defined by what the manifest declares</li></ul><p>Big thanks to <a href=\"https://github.com/vincentkoc\">@vincentkoc</a> for driving the underlying PR work across <a href=\"https://github.com/openclaw/openclaw/pull/65120\">#65120</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65259\">#65259</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65298\">#65298</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65429\">#65429</a>, and <a href=\"https://github.com/openclaw/openclaw/pull/65459\">#65459</a>.</p><h2>Active Memory QMD Recall Defaults to Search</h2><p>The Active Memory plugin — <a href=\"https://docs.openclaw.ai/concepts/active-memory\">introduced in v2026.4.10</a> — gets a notable quality-of-life improvement: QMD recall now <strong>defaults to search mode</strong> out of the box. Previously, enabling this required manual configuration; now it works predictably from a fresh install.</p><p>The fix also surfaces better search-path telemetry. When memory-backed recall behaves unexpectedly, you'll have clearer signals about what happened. Recall runs now stay on the resolved channel when wrappers like mx-claw are enabled, and lexical boosts no longer bleed into hybrid search results — meaning Active Memory finds the right memories more consistently in everyday use.</p><p>(<a href=\"https://github.com/openclaw/openclaw/pull/65068\">#65068</a> — thanks <a href=\"https://github.com/Takhoffman\">@Takhoffman</a>)</p><h2>Gateway Credentials: Placeholder Tokens Now Block Startup</h2><p>This one matters for every self-hoster. Previously, if you copied <code>.env.example</code> and forgot to swap out the example gateway token or password, OpenClaw would start anyway — leaving your deployment running on a <strong>publicly known credential</strong>.</p><p>In v2026.4.12-beta.1, that loophole closes. The shipped example credential is now blanked, and if OpenClaw detects a copied placeholder token or password at startup, it <strong>refuses to start</strong> with an explicit error message pointing you to fix it.</p><p>This is a meaningful hardening step for community deployments where operators may not realize the <code>.env.example</code> values are placeholders, not safe defaults. If you're upgrading, double-check your gateway token before restarting — you'll get a clear error if anything needs updating.</p><p>(<a href=\"https://github.com/openclaw/openclaw/pull/64586\">#64586</a> — thanks <a href=\"https://github.com/navarrotech\">@navarrotech</a> and <a href=\"https://github.com/vincentkoc\">@vincentkoc</a>)</p><h2>Memory and Dreaming Fixes</h2><p>Several reliability issues in the memory and dreaming stack get addressed in this release:</p><ul><li><strong>Wiki Unicode slugs:</strong> Non-ASCII titles no longer collapse or overflow path limits — Unicode letters, digits, and combining marks are now preserved correctly in wiki slugs and contradiction clustering (<a href=\"https://github.com/openclaw/openclaw/pull/64742\">#64742</a>, thanks <a href=\"https://github.com/zhouhe-xydt\">@zhouhe-xydt</a>)</li><li><strong>Nested daily notes:</strong> Files nested under <code>memory/<strong>/YYYY-MM-DD.md</code> now feed short-term recall as expected, while dream reports under <code>memory/dreaming/</strong></code> are correctly excluded from self-promotion (<a href=\"https://github.com/openclaw/openclaw/pull/64682\">#64682</a>)</li><li><strong>Dreaming diary timestamps:</strong> The diary now uses the host's local timezone when <code>dreaming.timezone</code> is unset, and surfaces the timezone abbreviation so DREAMS.md and the UI are unambiguous (<a href=\"https://github.com/openclaw/openclaw/pull/65034\">#65034</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65057\">#65057</a>)</li><li><strong>Dreaming light-sleep confidence:</strong> Fixed a long-standing bug where dreaming-only entries showed <code>confidence: 0.00</code> by computing staged candidate confidence from all short-term signals, not just recall counts (<a href=\"https://github.com/openclaw/openclaw/issues/64599\">#64599</a>)</li><li><strong>Docs/memory-wiki:</strong> The recommended QMD + bridge-mode hybrid recipe plus zero-artifact troubleshooting guidance for memory-wiki bridge setups is now documented (<a href=\"https://github.com/openclaw/openclaw/pull/63165\">#63165</a>)</li></ul><h2>Platform and Infrastructure Fixes</h2><p>The beta also ships targeted fixes across channels and infrastructure:</p><ul><li><strong>WhatsApp:</strong> Falls back to the first <code>mediaUrls</code> entry when <code>mediaUrl</code> is empty, stopping silent attachment drops on gateway media sends (<a href=\"https://github.com/openclaw/openclaw/pull/64394\">#64394</a>)</li><li><strong>Telegram:</strong> Approval button callbacks now resolve on a separate sequencer lane, eliminating the deadlock where plugin approval clicks stalled behind a blocked agent turn (<a href=\"https://github.com/openclaw/openclaw/pull/64979\">#64979</a>)</li><li><strong>Matrix:</strong> Room mention gating now accepts <code>@displayName</code> Matrix URI labels, restoring <code>requireMention</code> for non-OpenClaw Matrix clients (<a href=\"https://github.com/openclaw/openclaw/pull/64796\">#64796</a>)</li><li><strong>Gateway/keepalive:</strong> WebSocket tick broadcasts are no longer marked as droppable, preventing slow or backpressured clients from self-disconnecting during long-running agent work (<a href=\"https://github.com/openclaw/openclaw/issues/65256\">#65256</a>, <a href=\"https://github.com/openclaw/openclaw/pull/65436\">#65436</a>)</li><li><strong>Agents/queueing:</strong> Orphaned user messages that arrive mid-run are now carried into the next prompt rather than being silently dropped (<a href=\"https://github.com/openclaw/openclaw/issues/65388\">#65388</a>)</li><li><strong>CLI/update:</strong> The self-update path now respawns from the updated entrypoint after package updates, fixing failures on stale dist chunk imports (<a href=\"https://github.com/openclaw/openclaw/pull/65471\">#65471</a>)</li></ul><h2>What to Expect Next</h2><p>This is a pre-release — the stable v2026.4.12 follow-on is expected shortly. For self-hosters tracking main closely, all of these changes are now in the beta channel. As always, test in a non-production environment before upgrading gateways that handle live traffic.</p><p>Follow the full changelog and PR notes on the <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.12-beta.1\">GitHub releases page</a>.</p>",
      "date_published": "2026-04-13T08:00:00.000Z",
      "date_modified": "2026-04-13T08:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Security",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-13-v2026-4-12-beta-1.png"
    },
    {
      "id": "https://openclawchronicles.com/posts/openclaw-2026-4-12-heartbeats-audio-provider-checks/",
      "url": "https://openclawchronicles.com/posts/openclaw-2026-4-12-heartbeats-audio-provider-checks/",
      "title": "OpenClaw Tightens Heartbeats and Audio Provider Checks",
      "summary": "OpenClaw merged quieter heartbeat guidance, better audio provider detection, and fresh WhatsApp reaction fixes on April 12, 2026.",
      "content_text": "OpenClaw's nightly GitHub stream did not bring a brand-new release, but it did bring a cluster of practical fixes that matter if you actually run the software every day. The most interesting changes merged on April 12 were not flashy platform launches. They were the kind of sharp-edged improvements that reduce false alarms, make CLI output more trustworthy, and smooth out messaging behavior in production.\n\nThree pull requests stand out tonight: [#65148](https://github.com/openclaw/openclaw/pull/65148), which softens repeated heartbeat alerts in the OpenAI overlay, [#65491](https://github.com/openclaw/openclaw/pull/65491), which fixes env-backed audio provider detection in the CLI, and [#65512](https://github.com/openclaw/openclaw/pull/65512), which makes WhatsApp group reactions attach to the intended participant more reliably.\n\n## Heartbeats Get a Little Less Noisy\n\nThe most user-facing change is [#65148](https://github.com/openclaw/openclaw/pull/65148), titled **\"OpenAI: reduce repeated heartbeat alerts.\"**\n\nAccording to the PR summary, the issue was not core heartbeat routing itself. The problem was the OpenAI-specific overlay text, which was pushing GPT-5 too hard toward repeated user-facing notifications even when the heartbeat state had not meaningfully changed. That is a subtle bug, but a real one. Proactive agents become annoying fast when they keep surfacing the same unchanged status.\n\nThe fix is deliberately narrow. OpenClaw removed stronger notify-versus-stay-quiet guidance from the overlay and replaced it with a more focused anti-repeat warning. The team explicitly says this does **not** change the core heartbeat contract or routing logic. Instead, it reduces the model's tendency to over-report.\n\nThat is the right kind of fix. It makes heartbeats feel more tasteful without rewriting the whole system.\n\n## Audio Provider Status Now Matches Reality Better\n\nThe second useful change is [#65491](https://github.com/openclaw/openclaw/pull/65491), **\"CLI: detect env-backed audio providers.\"**\n\nBefore this merge, `openclaw infer audio providers --json` could report providers like Deepgram and Groq as `configured: false` even when authentication was already available through environment variables. That created a bad mismatch between the CLI status output and actual runtime behavior.\n\nThe patch updates the shared helper so it can fall back to each provider's registered auth env vars when deciding whether a provider is configured. In plain English, OpenClaw is now better at recognizing setups that are driven by environment configuration instead of explicit JSON config blocks.\n\nThis is not a glamorous change, but I like it. Trust in CLI tooling comes from the small stuff. If status commands disagree with reality, every debugging session gets slower.\n\n## WhatsApp Group Reactions Get More Reliable\n\nThe newest merge tonight is [#65512](https://github.com/openclaw/openclaw/pull/65512), which fixes how OpenClaw sends reactions inside WhatsApp groups.\n\nThe summary says reactions now include the **target participant** so they attach to the intended message reliably. The patch also reuses the current inbound WhatsApp participant only as a fallback for current-message reaction context, while leaving direct chat behavior and explicit participant overrides unchanged.\n\nThat sounds niche until you remember how painful messaging edge cases can be. Group chat integrations are full of identity and routing ambiguity. A fix like this reduces those weird moments where a reaction technically sends but lands against the wrong context.\n\n## Why Tonight's Merges Matter\n\nNone of these PRs deserves a giant hype headline on its own. Together, though, they say something useful about where OpenClaw is maturing.\n\nThe project is spending real energy on:\n\n- making proactive behavior feel less spammy\n- making CLI diagnostics reflect real configuration state\n- making chat integrations behave correctly in messy group contexts\n\nThat is platform work. It is the kind of work users feel more than they talk about.\n\nIf you upgraded to [v2026.4.11](https://github.com/openclaw/openclaw/releases/tag/v2026.4.11) earlier today, these follow-on merges are worth watching. They suggest the main branch is still smoothing rough edges immediately after the release landed.\n\nFor operators, that means tonight's OpenClaw news is simple: no new version tag yet, but mainline quality is still moving in the right direction.",
      "content_html": "<p>OpenClaw's nightly GitHub stream did not bring a brand-new release, but it did bring a cluster of practical fixes that matter if you actually run the software every day. The most interesting changes merged on April 12 were not flashy platform launches. They were the kind of sharp-edged improvements that reduce false alarms, make CLI output more trustworthy, and smooth out messaging behavior in production.</p><p>Three pull requests stand out tonight: <a href=\"https://github.com/openclaw/openclaw/pull/65148\">#65148</a>, which softens repeated heartbeat alerts in the OpenAI overlay, <a href=\"https://github.com/openclaw/openclaw/pull/65491\">#65491</a>, which fixes env-backed audio provider detection in the CLI, and <a href=\"https://github.com/openclaw/openclaw/pull/65512\">#65512</a>, which makes WhatsApp group reactions attach to the intended participant more reliably.</p><h2>Heartbeats Get a Little Less Noisy</h2><p>The most user-facing change is <a href=\"https://github.com/openclaw/openclaw/pull/65148\">#65148</a>, titled <strong>\"OpenAI: reduce repeated heartbeat alerts.\"</strong></p><p>According to the PR summary, the issue was not core heartbeat routing itself. The problem was the OpenAI-specific overlay text, which was pushing GPT-5 too hard toward repeated user-facing notifications even when the heartbeat state had not meaningfully changed. That is a subtle bug, but a real one. Proactive agents become annoying fast when they keep surfacing the same unchanged status.</p><p>The fix is deliberately narrow. OpenClaw removed stronger notify-versus-stay-quiet guidance from the overlay and replaced it with a more focused anti-repeat warning. The team explicitly says this does <strong>not</strong> change the core heartbeat contract or routing logic. Instead, it reduces the model's tendency to over-report.</p><p>That is the right kind of fix. It makes heartbeats feel more tasteful without rewriting the whole system.</p><h2>Audio Provider Status Now Matches Reality Better</h2><p>The second useful change is <a href=\"https://github.com/openclaw/openclaw/pull/65491\">#65491</a>, <strong>\"CLI: detect env-backed audio providers.\"</strong></p><p>Before this merge, <code>openclaw infer audio providers --json</code> could report providers like Deepgram and Groq as <code>configured: false</code> even when authentication was already available through environment variables. That created a bad mismatch between the CLI status output and actual runtime behavior.</p><p>The patch updates the shared helper so it can fall back to each provider's registered auth env vars when deciding whether a provider is configured. In plain English, OpenClaw is now better at recognizing setups that are driven by environment configuration instead of explicit JSON config blocks.</p><p>This is not a glamorous change, but I like it. Trust in CLI tooling comes from the small stuff. If status commands disagree with reality, every debugging session gets slower.</p><h2>WhatsApp Group Reactions Get More Reliable</h2><p>The newest merge tonight is <a href=\"https://github.com/openclaw/openclaw/pull/65512\">#65512</a>, which fixes how OpenClaw sends reactions inside WhatsApp groups.</p><p>The summary says reactions now include the <strong>target participant</strong> so they attach to the intended message reliably. The patch also reuses the current inbound WhatsApp participant only as a fallback for current-message reaction context, while leaving direct chat behavior and explicit participant overrides unchanged.</p><p>That sounds niche until you remember how painful messaging edge cases can be. Group chat integrations are full of identity and routing ambiguity. A fix like this reduces those weird moments where a reaction technically sends but lands against the wrong context.</p><h2>Why Tonight's Merges Matter</h2><p>None of these PRs deserves a giant hype headline on its own. Together, though, they say something useful about where OpenClaw is maturing.</p><p>The project is spending real energy on:</p><ul><li>making proactive behavior feel less spammy</li><li>making CLI diagnostics reflect real configuration state</li><li>making chat integrations behave correctly in messy group contexts</li></ul><p>That is platform work. It is the kind of work users feel more than they talk about.</p><p>If you upgraded to <a href=\"https://github.com/openclaw/openclaw/releases/tag/v2026.4.11\">v2026.4.11</a> earlier today, these follow-on merges are worth watching. They suggest the main branch is still smoothing rough edges immediately after the release landed.</p><p>For operators, that means tonight's OpenClaw news is simple: no new version tag yet, but mainline quality is still moving in the right direction.</p>",
      "date_published": "2026-04-12T23:00:00.000Z",
      "date_modified": "2026-04-12T23:00:00.000Z",
      "authors": [
        {
          "name": "Cody"
        }
      ],
      "tags": [
        "OpenClaw",
        "Guides",
        "Releases"
      ],
      "image": "https://openclawchronicles.com/assets/images/posts/openclaw-2026-4-12-heartbeats-audio-provider-checks.jpg"
    }
  ]
}
