What is NanoClaw?
From AI assistant to AI team. Multiple Claude agents running in parallel, talking to each other, isolated in their own containers — installed and customized through Claude itself.
NanoClaw started as a way to run a single Claude agent on WhatsApp. v2 turns it into a team. Spawn multiple agents, wire them to any mix of 15+ messaging channels, have them route work to each other, and keep every one of them isolated in its own Linux container.
One host process on your machine. One container per active session. A codebase small enough to hold in your head — and small enough for Claude Code to hold in its context window.
Quick start
Get running in one command with bash nanoclaw.sh
Installation
Requirements, platform notes, and what the installer does
Architecture
The session DB, the entity model, and the inbox/outbox pattern
Security
Container isolation, OneCLI credentials, sender policies
From AI assistant to AI team
Most agents handle one conversation at a time. NanoClaw v2 runs as many parallel conversations as you want, on as many channels as you want, with as many agents as you want — and lets them coordinate. A single pattern drives the whole system: every message goes through an inbox, every response comes out of an outbox. User chats, webhooks, scheduled jobs, and agent-to-agent calls are all just rows in a queue.
Example: three agent groups sharing one Discord channel for PR review.
- A worker agent spawns per thread — one reviewer per PR.
- A manager agent reads every thread and tracks what’s in flight.
- A supervisor agent stays silent until a worker requests human approval, then DMs you a card to approve or reject.
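The single-queue pattern above can be sketched as plain data. This is an illustrative shape for inbox rows, assumed for the example; the field names are not NanoClaw's actual schema:

```typescript
// Hypothetical inbox-row shape illustrating the single-queue pattern.
// Field names are illustrative, not NanoClaw's actual schema.
type InboxRow = {
  id: number;
  agentGroup: "worker" | "manager" | "supervisor";
  channel: string;          // e.g. a Discord thread id or a DM
  kind: "user" | "webhook" | "schedule" | "agent";
  body: string;
};

// Every event is just a row: a PR webhook for a worker, the same event
// fanned out to the manager, and a worker's approval request routed to
// the supervisor as an agent-to-agent message.
const inbox: InboxRow[] = [
  { id: 1, agentGroup: "worker",     channel: "pr-1042",  kind: "webhook", body: "PR #1042 opened" },
  { id: 2, agentGroup: "manager",    channel: "pr-1042",  kind: "webhook", body: "PR #1042 opened" },
  { id: 3, agentGroup: "supervisor", channel: "owner-dm", kind: "agent",   body: "worker requests approval to merge" },
];

// A delivery loop only ever asks one question: which rows belong to which agent?
const forManager = inbox.filter((r) => r.agentGroup === "manager");
console.log(forManager.length); // 1
```

Because user chats, webhooks, and agent-to-agent calls share one row type, the three roles in the example need no special coordination protocol: each just reads its own slice of the queue.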
Why NanoClaw
Agents that stay out of each other’s filesystems
Every agent runs in a Linux container with true filesystem isolation — it only sees what’s explicitly mounted. Bash and file-editing tools are safe because commands execute inside the container, not on your host. Credentials never enter the container; they’re injected at the HTTPS layer by OneCLI Agent Vault.
One pattern for messages, schedules, and routing
Scheduling is a timestamp column on a message row. Agent-to-agent delegation is a row written to another agent’s inbox. Multi-user approvals are messages routed to an owner’s DM with an action card attached. You only learn one model.
Multi-channel by design
Install any subset of: WhatsApp, WhatsApp Cloud, Telegram, Discord, Slack, Microsoft Teams, Google Chat, Webex, iMessage, Matrix, WeChat, Linear, GitHub, Resend (email), Emacs, or the local CLI channel. Channels are thin adapters over Vercel’s Chat SDK — a new adapter is typically a short TypeScript file.
Flexible isolation per channel
For every channel you add, decide how it shares context with your existing agents:
- Separate agent groups — each channel gets its own workspace, memory, and personality. Nothing crosses.
- Same agent, separate sessions — one workspace and one memory, but per-channel conversation threads.
- Shared session — multiple channels feed into a single conversation (GitHub webhooks + a Slack channel + email, all in one thread).
Pick a mode with /manage-channels. See Channel isolation model.
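The three isolation modes can be sketched as a property of each channel-to-agent wiring. This is a simplified model assumed for illustration; the `Wiring` shape and mode names are hypothetical, standing in for the real wiring configuration:

```typescript
// Simplified sketch of per-wiring isolation modes. The Wiring shape and
// sessionMode values are hypothetical, for illustration only.
type SessionMode = "separate-group" | "separate-session" | "shared-session";

type Wiring = {
  messagingGroup: string; // platform channel (a Slack channel, a repo's webhooks, ...)
  agentGroup: string;     // workspace + memory + personality
  sessionMode: SessionMode;
};

const wirings: Wiring[] = [
  // Nothing crosses: Telegram gets its own agent group.
  { messagingGroup: "telegram-main",   agentGroup: "personal-tg", sessionMode: "separate-group" },
  // One workspace and memory, but a per-channel conversation thread.
  { messagingGroup: "slack-dev",       agentGroup: "dev-agent",   sessionMode: "separate-session" },
  // Several channels feed one shared conversation.
  { messagingGroup: "github-webhooks", agentGroup: "dev-agent",   sessionMode: "shared-session" },
  { messagingGroup: "email-inbox",     agentGroup: "dev-agent",   sessionMode: "shared-session" },
];

// Derive a session key: shared-session wirings collapse onto the agent
// group; the other modes keep a per-channel thread.
const sessionKey = (w: Wiring) =>
  w.sessionMode === "shared-session" ? w.agentGroup : `${w.agentGroup}/${w.messagingGroup}`;

const keys = new Set(wirings.map(sessionKey));
console.log(keys.size); // 3 distinct conversations for 4 wirings
```

Four wirings yield three conversations: GitHub webhooks and email collapse into one shared thread, while Telegram and Slack each keep their own.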
Point anyone at your agent, keep yourself in the loop
Invite coworkers, clients, or friends into a group your agent is in. They get their own agent relationship in minutes — no admin setup for you. Sensitive actions (unknown senders, irreversible tools) route to your DMs as approval cards. You say yes, the agent proceeds; you say no, it stops.
Scheduled tasks
Recurring jobs that run Claude and message you back: a scheduled task is a message with a process_after timestamp — same inbox, same flow, same tools.
Skills, not features
Trunk ships the runtime and the entity model. Channels live on a channels branch. Alternative agent providers live on a providers branch. You run /add-telegram or /add-opencode and the skill copies exactly the module you need into your fork. Your install ends up with the code that does what you asked for — and nothing else.
AI-native, hybrid by design
The install and onboarding flow is a scripted, deterministic path. When a step needs judgment — a failed install, a guided decision, a customization — control hands off to Claude Code. Beyond setup, there’s no monitoring dashboard either: describe the problem in chat and Claude Code handles it.
Philosophy
Small enough to understand
One process, a few source files, no microservices. Ask Claude Code to walk you through the codebase — the whole repo fits comfortably in its context window.
Secure by isolation
Agents run in Linux containers and see only what you mount. Credentials never enter the container — OneCLI injects them at request time.
Built for the individual user
Not a monolithic framework. You make your own fork and have Claude Code modify it to fit. Bespoke, not bloatware.
Customization = code changes
No config file sprawl. Want different behavior? Edit the code. The codebase is small enough that it’s safe to do so.
Skills over features
Contributors ship Claude Code skills like /add-telegram that transform your fork. Trunk stays lean; your fork ships exactly the modules you asked for.
Best harness, best model
Claude Code with the official Claude Agent SDK is the default. Drop in other providers per agent group: /add-codex for OpenAI, /add-opencode for OpenRouter/Google/DeepSeek, /add-ollama-provider for local models.
Core architecture (v2)
- Host process — Node 22 + pnpm. Runs channels, routing, delivery, and the entity model over a central SQLite database.
- Agent container — Bun 1.3+ running TypeScript directly (no compile step). Each session gets its own container and its own pair of session databases.
- Inbox/outbox session model — inbound.db (host writes, container reads) and outbound.db (container writes, host reads). One writer per file eliminates SQLite cross-mount contention. No stdin piping, no IPC files.
- Entity model — agent groups (workspaces) and messaging groups (platform channels) are independent. messaging_group_agents rows wire them together with per-wiring engage mode, sender scope, and session mode.
- Delivery loop — two polls: an active poll (1s) for running sessions, a sweep (60s) for liveness and recovery. Heartbeat-based stale detection, not wall-clock timeouts.
- Channel adapters — thin wrappers over Vercel’s Chat SDK on a channels branch. Add only what you use.
- Credentials — OneCLI Agent Vault is the sole credential path. Containers receive placeholder tokens; the vault injects real auth at request time and enforces per-agent policies.
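Heartbeat-based stale detection can be sketched as follows. This is a minimal model under assumptions: each running session records a last-heartbeat time, and the sweep marks a session stale only when heartbeats stop, not when some wall-clock budget expires. The threshold and types are illustrative, not NanoClaw's actual values.

```typescript
// Minimal sketch of a heartbeat-based sweep. Types, field names, and the
// 90s threshold are assumptions for illustration.
type Session = { id: string; lastHeartbeat: number };

const STALE_AFTER_MS = 90_000; // assumed threshold

function sweep(sessions: Session[], now: number): { live: Session[]; stale: Session[] } {
  const live: Session[] = [];
  const stale: Session[] = [];
  for (const s of sessions) {
    // A session is stale only if it stopped reporting, however long it
    // has been running in total.
    (now - s.lastHeartbeat > STALE_AFTER_MS ? stale : live).push(s);
  }
  return { live, stale };
}

const now = 1_000_000;
const { live, stale } = sweep(
  [
    { id: "a", lastHeartbeat: now - 10_000 },  // still beating
    { id: "b", lastHeartbeat: now - 120_000 }, // silent for 2 min: recover it
  ],
  now,
);
console.log(live.map((s) => s.id), stale.map((s) => s.id)); // [ 'a' ] [ 'b' ]
```

The point of heartbeats over wall-clock timeouts: a long-running but healthy session (`a` after hours of work) is never killed, while a hung one (`b`) is recovered within one sweep interval.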
Codebase size: ~127k tokens (~64% of Claude’s context window). Bigger than v1; still small enough that Claude Code can reason over the whole repo.
Community and source
- Source: github.com/qwibitai/nanoclaw
- Discord: community server
- License: MIT
Last modified on April 23, 2026
