OpenClaw's April quality push landed today. The 2026.4.12 release — tagged April 13 at 12:35 UTC — is a broad "make everything more reliable" drop covering memory, local models, speech, plugin loading, and three security hardening patches. Here's what's new.
Active Memory: Automatic Recall Before Every Reply
The headline feature is Active Memory (#63286), contributed by @Takhoffman. Rather than requiring users to remember to say "search memory" or "remember this," OpenClaw now optionally runs a dedicated memory sub-agent right before the main reply — automatically pulling in relevant preferences, past context, and details from your memory store.
Three configurable context modes ship with it:
- message — recall only against the current message
- recent — recall against recent conversation context
- full — full context window recall
You can tune the recall sub-agent's prompt and thinking level independently from your main agent, inspect what it retrieved with /verbose, and opt in to transcript persistence for debugging. A follow-up PR (#65068) defaults QMD recall to search mode so recall works predictably without extra configuration.
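As a rough sketch of what the opt-in could look like in an openclaw.json config, assuming hypothetical key names (memory.recall.* and its fields are illustrative assumptions, not confirmed by the release notes; only the three mode values come from the changelog):

```json5
{
  memory: {
    recall: {
      enabled: true,             // hypothetical: turn Active Memory on
      context: "recent",         // "message" | "recent" | "full" (from the release notes)
      thinking: "low",           // hypothetical: sub-agent thinking level, tuned separately
      persistTranscripts: true,  // hypothetical: opt in to transcripts for debugging
    },
  },
}
```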
This is one of the most-requested UX improvements in OpenClaw's memory layer — the difference between memory that works and memory that requires babysitting.
LM Studio Gets a Native Provider
@rugvedS07 contributed a full LM Studio provider (#53248) — not a generic OpenAI-compatible shim, but a proper bundled provider with:
- Guided onboarding flow
- Runtime model discovery (no manual model IDs)
- Stream preload support for faster first tokens
- Memory-search embeddings for local recall
If you've been running LM Studio alongside OpenClaw with a manual openai-compatible config, this is worth migrating to. The runtime model discovery alone eliminates a common friction point when switching local models.
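A hypothetical before/after for that migration, assuming illustrative key names (the provider key, baseUrl field, and model ID below are placeholders; LM Studio's local server does listen on port 1234 by default):

```json5
{
  models: {
    providers: {
      // Before: generic shim with a hand-maintained model ID
      // "lmstudio-compat": {
      //   type: "openai-compatible",
      //   baseUrl: "http://localhost:1234/v1",
      //   models: ["qwen2.5-7b-instruct"],
      // },

      // After: bundled provider; models are discovered at runtime,
      // so no manual model list to keep in sync
      lmstudio: {
        baseUrl: "http://localhost:1234",
      },
    },
  },
}
```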
MLX Speech for macOS Talk Mode
@ImLukeF added an experimental MLX speech provider for Talk Mode on macOS (#63539). This runs speech synthesis entirely locally using Apple Silicon's MLX framework, with:
- Explicit provider selection (mlx vs system voice vs cloud)
- Local utterance playback and interruption handling
- System-voice fallback when MLX isn't available
On Apple Silicon, this should be noticeably faster than cloud TTS for interactive voice sessions — and it's fully offline.
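A minimal configuration sketch, assuming hypothetical key names (talk.speech.* is an assumption for illustration; the provider values mirror the selection options listed above):

```json5
{
  talk: {
    speech: {
      provider: "mlx",     // hypothetical: "mlx" | "system" | a cloud TTS provider id
      fallback: "system",  // hypothetical: used when MLX isn't available on this machine
    },
  },
}
```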
Codex Bundled Provider
@steipete contributed the Codex bundled provider and plugin-owned app-server harness (#64298). The key distinction: codex/gpt-* models now use Codex-managed auth and native threads, while openai/gpt-* continues through the standard OpenAI provider path. They're no longer the same pipe.
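The split shows up in the model reference itself. The codex/ and openai/ prefixes are from the release notes; the surrounding config keys and the specific model name are illustrative assumptions:

```json5
{
  agents: {
    defaults: {
      // hypothetical model ID: routed through Codex-managed auth and native threads
      model: "codex/gpt-5",

      // same model family, but through the standard OpenAI provider path:
      // model: "openai/gpt-5",
    },
  },
}
```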
Plugin Loading Overhaul
A significant cleanup from @vincentkoc across five PRs (#65120, #65259, #65298, #65429, #65459) narrows CLI, provider, and channel activation to only what each plugin's manifest declares it needs. The result: leaner startup, faster command discovery, and no unrelated plugin runtimes loaded along the way.
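A sketch of the manifest-driven idea, assuming a hypothetical manifest shape (the field names below are illustrative, not the actual plugin schema): only the surfaces a plugin declares get activated, so a channel-only plugin never pulls in provider or CLI machinery.

```json5
{
  // hypothetical plugin manifest
  id: "example-plugin",
  activates: {
    cli: ["example"],     // contributes one CLI command; only that path is wired up
    providers: [],        // declares no model providers, so no provider runtime loads
    channels: ["matrix"], // only the Matrix channel activation path is initialized
  },
}
```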
Gateway: Command Discovery RPC
@samzong added a commands.list RPC to the gateway (#62656) — remote clients can now discover runtime-native commands, skill aliases, and plugin commands with their argument metadata. This is the foundation for better gateway-connected Control UI command palettes and external tooling.
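A hedged sketch of what a commands.list exchange over the gateway might look like. The method name comes from the release notes; the JSON-RPC framing and the response payload shape are assumptions for illustration:

```json5
// request
{ "jsonrpc": "2.0", "id": 1, "method": "commands.list" }

// hypothetical response shape: runtime-native commands, skill aliases,
// and plugin commands, each with argument metadata
{
  "jsonrpc": "2.0", "id": 1,
  "result": {
    "commands": [
      { "name": "verbose", "kind": "runtime", "args": [] },
      { "name": "deploy", "kind": "plugin", "plugin": "example-plugin",
        "args": [{ "name": "target", "required": true }] }
    ]
  }
}
```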
Other Notable Changes
- Matrix streaming: MSC4357 live markers for typewriter animation in supporting Matrix clients (#63513)
- Per-provider private network: models.providers.*.request.allowPrivateNetwork for trusted self-hosted endpoints (#63671)
- QA/Multipass: run QA suites inside a disposable Linux VM (#63426)
- Dreaming reliability: fixed double-ingestion of dream transcripts, heartbeat event deduplication, and narrative cleanup hardening
- Memory/wiki Unicode: non-ASCII titles no longer collapse or overflow path limits (#64742)
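For the per-provider private-network setting, the key path models.providers.*.request.allowPrivateNetwork is named in the changelog; the provider name and address below are placeholders:

```json5
{
  models: {
    providers: {
      "my-selfhosted": {
        baseUrl: "http://192.168.1.50:8080",  // placeholder: a trusted LAN endpoint
        request: {
          // permit requests to private-range addresses for this provider only,
          // instead of relaxing the restriction globally
          allowPrivateNetwork: true,
        },
      },
    },
  },
}
```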
Security Patches
Three security patches ship in this release, all from @pgondhi987:
- busybox/toybox removed from safe exec bins (#65713) — busybox was functioning as an interpreter bypass; it's now blocked
- Empty approver list no longer grants approval (#65714) — a misconfigured empty approver list previously granted implicit authorization
- Shell-wrapper injection blocked (#65717) — broader shell-wrapper detection and env-argv assignment injection prevention
All three are in the hardening category — updating is recommended for any instance that processes untrusted input or runs in a multi-user environment.
Upgrading
Run openclaw update to upgrade to 2026.4.12.
Full changelog and release notes: github.com/openclaw/openclaw/releases