The local-first control plane for AI operations.
Status: Pre-1.0 convergence
Focus: stability, truthfulness, and operator trust
borg helps operators run a fragmented AI tool stack from one local control plane. It is designed for people who already use multiple MCP servers, multiple model providers, and multiple coding or session workflows—and want one place to inspect, route, recover, and understand them.
borg is primarily four things:
- MCP control plane — manage and inspect MCP servers and tool inventories from one local service.
- Provider routing layer — handle quota-aware fallback across model providers.
- Session and memory substrate — preserve continuity across work sessions.
- Operator dashboard — make runtime state visible and diagnosable.
Modern AI work is messy:
- too many MCP servers,
- too many providers and quotas,
- too many half-connected tools,
- too little context continuity,
- and weak observability when something breaks.
borg exists to reduce that fragmentation without requiring a hosted backend.
- Local control-plane foundations
- MCP aggregation and management primitives
- Provider fallback infrastructure
- Core dashboard architecture
- Build, test, and typecheck workflows
- Session supervision workflows
- Memory retrieval and inspection UX
- Discovered external session import from supported tools, including Copilot CLI, VS Code Copilot Chat, Simon Willison's llm CLI logs, OpenAI or ChatGPT export roots, and Prism local SQLite histories plus behavioral metadata, with derived memories and generated instruction docs; Antigravity local `~/.gemini/antigravity/brain` discovery is now available as an explicitly Experimental, reverse-engineered import lane
- MCP traffic inspection and tool search UX
- Billing and routing visibility
- Browser and IDE bridge integration surfaces
- borg assimilation via `submodules/borg` plus primary borg CLI harness registration
- Council or debate workflows
- Broader autonomous workflow layers
- Mobile and desktop parity layers
- Mesh and marketplace concepts
- A definitive internal library of MCP servers and tool metadata aggregated from public lists and operator-added sources
- Continuous normalization, deduplication, and refresh of that MCP library inside borg
- Eventual operator-controlled access to any relevant MCP tool through one local control plane
- Operator-owned discovery, benchmarking, and ranking of the MCP ecosystem so borg knows what tools exist, how well they work, and when to trust them
- A universal model-facing substrate where any model, any provider, any session, and any relevant MCP tool can be coordinated through borg
borg is not yet a fully hardened universal “AI operating system.” The most honest current description is:
borg is an ambitious, local-first AI control plane with real implementation across MCP routing, provider management, sessions, and memory—plus a broader experimental layer around orchestration and automation.
The current release track centers on:
- core MCP reliability,
- provider routing correctness,
- practical memory usefulness,
- session continuity,
- and honest dashboard or operator UX.
Longer-term, borg should become the place where operators maintain a definitive internal MCP server library, benchmark the live tool ecosystem, and expose universal tool reach through one operator-owned control plane. That ambition is intentionally large, but it is still Vision work until the current control plane is more reliable.
borg currently presents three operator-facing orchestrator identities:
- `packages/cli` is the cli-orchestrator lane.
- `apps/maestro` is the desktop electron-orchestrator lane.
- `apps/cloud-orchestrator` is the web cloud-orchestrator lane.
The experimental Go workspace under go/ is a sidecar cli-orchestrator coexistence port for read-parity and feasibility work, not a replacement fork and not yet the primary control-plane implementation.
Today, electron-orchestrator and cli-orchestrator do not yet have 100% feature parity. The desktop lane currently exposes the broader operator UX, while the Node-based CLI lane remains the cleaner control-plane foundation. borg should not drop either surface until parity gaps and operator workflows are intentionally closed. The Go lane should currently be described as Experimental read-only bridge replacement work, not as a completed daemon extraction.
- Node.js 22+
- pnpm 10+
```
pnpm install
pnpm run dev
```

```
borg session harnesses
borg session start ./my-app --harness borg
```
```
borg mesh status
```

The borg harness is now the primary CLI harness identity, backed by the `submodules/borg` upstream. That upstream exposes a Go/Cobra CLI with a default TUI REPL plus a pipe command, and borg surfaces its source-backed tool inventory from `submodules/borg/tools/*.go` via `borg session harnesses` and the Go sidecar harness registry. The harness catalogs also track the broader known external identities already referenced elsewhere in the repo, including aider, cursor, copilot, qwen, superai-cli, codebuff, codemachine, and factory-droid, but those still expose install/runtime metadata only until borg has equally source-backed bridge contracts for them. The borg harness's maturity remains Experimental while the cross-runtime adapter contract is still shallow.
The CLI mesh surface is now operator-visible through `borg mesh status`, `borg mesh peers`, `borg mesh capabilities [nodeId]`, and `borg mesh find --capability <name>`. These commands query the live local control plane through `BORG_TRPC_UPSTREAM` or the borg startup lock, so they report real mesh visibility instead of placeholder CLI output.
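As a rough illustration of the upstream resolution described above, the sketch below shows how a script might decide which control plane the mesh commands would talk to. Only the `BORG_TRPC_UPSTREAM` env var name comes from this README; the function name and the fallback behavior shown are assumptions, and the real CLI's startup-lock fallback is only described in a comment, not implemented.

```typescript
// Hypothetical sketch: resolve the control-plane upstream for mesh queries.
// BORG_TRPC_UPSTREAM is the env var named in this README; everything else
// here (names, fallback shape) is illustrative, not borg's actual code.
function resolveUpstream(env: Record<string, string | undefined>): string | null {
  const explicit = env.BORG_TRPC_UPSTREAM?.trim();
  if (explicit) return explicit; // operator pinned an upstream explicitly
  // In the real CLI, a missing BORG_TRPC_UPSTREAM falls back to the borg
  // startup lock to locate the live local control plane. Not modeled here.
  return null;
}

console.log(resolveUpstream(process.env));
```

A `null` result here stands in for "consult the startup lock", which keeps the explicit env var authoritative when both are present.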
```
docker compose up --build
```

```
apps/
  web/              Next.js dashboard
  borg-extension/   Browser extension surfaces (compatibility path)
  maestro/          electron-orchestrator desktop shell work (legacy path)
  vscode/           VS Code integration
packages/
  core/             Main control plane backend
  ai/               Provider/model routing
  cli/              cli-orchestrator entrypoints
  ui/               Shared UI package
  types/            Shared types
submodules/
  borg/             External borg harness upstream (experimental assimilation track)
go/
  cmd/borg/         Experimental sidecar Go cli-orchestrator port workspace
```
The Go port is intentionally isolated from the main Node/Next fork. It uses its own `.borg-go` config directory and can:

- observe the primary borg lock state via `/api/runtime/locks`
- summarize its interop visibility via `/api/runtime/status`, including compact lock visibility/running counts, config-path health, total and available CLI tool/harness counts, provider totals plus configured/authenticated/executable counts and auth/task buckets, memory availability plus default-section and per-section entry breakdowns, discovered-session counts plus session-type, task, model-hint, and TypeScript supervisor-bridge visibility, and import-root plus import-source health including valid/invalid counts, aggregate estimated size, and compact source-type, model-hint, and error buckets
- expose a self-describing route index via `/api/index`
- inspect effective path wiring via `/api/config/status`, including repo-level `borg.config.json` and `mcp.jsonc` presence
- expose read-only provider credential visibility via `/api/providers/status`
- expose provider catalog metadata via `/api/providers/catalog`
- expose compact provider rollups via `/api/providers/summary`
- preview intended task-type routing order via `/api/providers/routing-summary`
- read the main fork's generated imported-instructions artifact via `/api/runtime/imported-instructions`
- expose discovered session artifacts through `/api/sessions` and `/api/sessions/summary`
- bridge or selectively replace TypeScript read routes across `/api/sessions/supervisor/*`, `/api/sessions/imported/*`, `/api/mcp/*`, `/api/memory/*`, `/api/agent-memory/*`, `/api/graph/*`, `/api/context/*`, `/api/git/*`, `/api/tests/*`, `/api/metrics/*`, `/api/logs/*`, `/api/server-health/*`, `/api/settings/*`, `/api/tools/*`, `/api/tool-sets/*`, `/api/project/*`, `/api/shell/*`, `/api/agent/*`, `/api/commands/*`, `/api/skills/*`, `/api/workflows/*`, `/api/symbols/*`, `/api/lsp/*`, `/api/api-keys/*`, `/api/audit/*`, `/api/scripts/*`, `/api/links-backlog/*`, `/api/infrastructure/*`, `/api/expert/*`, `/api/policies/*`, `/api/secrets/*`, `/api/marketplace/*`, `/api/catalog/*`, `/api/oauth/*`, `/api/research/*`, `/api/pulse/*`, `/api/session-export/*`, `/api/browser-extension/*`, `/api/open-webui/*`, `/api/code-mode/*`, `/api/submodules/*`, `/api/suggestions/*`, and `/api/plan/*`

Some of those reads now have truthful local Go fallbacks backed by the same SQLite database, local config files, or deterministic local defaults, but many orchestration-heavy routes remain bridge-only by design. Its current role is to validate a Go-native cli-orchestrator path, grow honest read-only local truth where practical, and avoid overstating daemon-extraction maturity before the underlying contracts are stable.
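For a concrete sense of how a client might use the sidecar's read-only surface, here is a minimal sketch that discovers routes via `/api/index` and then reads `/api/runtime/status`. The route paths come from this README; the listen address is an assumption (the sidecar's real address comes from its own `.borg-go` configuration), and `joinRoute` is a hypothetical helper, not borg code.

```typescript
// Hypothetical base URL -- the sidecar's actual listen address is configured
// in its own .borg-go directory, not fixed to this port.
const SIDECAR_BASE = "http://127.0.0.1:8080";

// Join a base URL and a route without doubling or dropping slashes.
function joinRoute(base: string, route: string): string {
  return `${base.replace(/\/+$/, "")}/${route.replace(/^\/+/, "")}`;
}

// Discover available routes via the self-describing index, then read the
// compact runtime status rollup. Both routes are named in this README.
async function readSidecarStatus(base: string): Promise<unknown> {
  const index = await fetch(joinRoute(base, "/api/index")).then((r) => r.json());
  console.log("sidecar routes:", index);
  return fetch(joinRoute(base, "/api/runtime/status")).then((r) => r.json());
}

console.log(joinRoute(SIDECAR_BASE, "/api/runtime/status"));
```

Because the sidecar is read-only by design, a client like this can only observe control-plane truth, never mutate it.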
The repo does not yet ship the full recommended borg binary family, but the current workspace already suggests the right extraction seams.
- Future binaries: `borg`, `borgd`
- Current likely sources: `packages/cli`, `packages/core`, `packages/ai`, `packages/types`, `packages/tools`, `go/cmd/borg`, `go/internal/controlplane`, `go/internal/httpapi`, `go/internal/providers`
- Future binaries: `borgmcpd`, `hypermcp-indexer`
- Current likely sources: `packages/mcp-client`, `packages/mcp-registry`, `packages/mcp-router-cli`, MCP-related surfaces inside `packages/core`, `go/internal/httpapi`, and future Go MCP-specific packages as extraction work continues
- Future binaries: `borgmemd`, `borgingest`
- Current likely sources: `packages/memory`, `packages/claude-mem`, session and import flows inside `packages/core`, `go/internal/memorystore`, `go/internal/sessionimport`
- Future binaries: `borgharness`, `borgharnessd`
- Current likely sources: `packages/agents`, `packages/adk`, `packages/borg-supervisor`, `packages/browser`, `packages/search`, harness registration and supervisor flows in `packages/core`, `go/internal/harnesses`
- Future apps/binaries: `borg-web`, `borg-native`
- Current likely sources: `apps/web`, `apps/maestro`, `apps/maestro-go`, `apps/mobile`, `packages/ui`
Keep shared contracts, config, auth, logging, and transport schemas in reusable packages first. Extract a new binary only after the package seam is clear enough that process separation improves reliability or operator clarity instead of just adding more moving parts.
If work proceeds incrementally, the first concrete seams should be:
- `borgd`
  - pull top-level control-plane routing, operator health/status APIs, lock/config coordination, and provider-routing orchestration toward a cleaner daemon-owned boundary
  - keep CLI, web, and native surfaces as clients of that boundary
- `borgmcpd`
  - pull MCP registry state, runtime-server lifecycle, working-set state, tool inventory/search/call mediation, and probe/test flows toward a dedicated service boundary
  - keep scrape/probe refresh and offline metadata enrichment as `hypermcp-indexer` worker responsibilities rather than interactive daemon logic
These seams are preferred first because they already have visible operator-facing surfaces, clear uptime concerns, and strong pressure to separate control-plane truth from client UX.
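The "shared contracts first" guidance above can be sketched concretely: if the CLI and a future `borgd` both depend on a contract type from a shared package, extraction later changes only the transport, not the payload shape. Every name below is hypothetical and illustrative, not borg's actual API.

```typescript
// Hypothetical shared-contract sketch. In borg's layout this would live in a
// shared package (e.g. packages/types) consumed by both the CLI surface and
// a future borgd daemon. All field names here are invented for illustration.
interface OperatorHealth {
  ok: boolean;                 // overall control-plane health
  lockHeld: boolean;           // whether the startup lock is currently held
  providersConfigured: number; // count of configured model providers
}

// A runtime guard lets any client validate payloads at the process boundary,
// so moving the producer into a separate daemon does not change callers.
function isOperatorHealth(v: unknown): v is OperatorHealth {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.ok === "boolean" &&
    typeof o.lockHeld === "boolean" &&
    typeof o.providersConfigured === "number"
  );
}
```

The point of the seam is exactly this: once clients only trust what passes the guard, the producer can move from an in-process call to a daemon-owned HTTP boundary without rewriting client code.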
- Local first — default to local state and operator control.
- Truth over hype — label maturity honestly.
- Interoperability over reinvention — unify tools where possible.
- Visibility over magic — make system state inspectable.
- Continuity over novelty — prioritize recovery, routing, and memory.
For now, compatibility paths, package names, and the borg CLI command remain unchanged while the visible branding shifts to borg.
Use pnpm v10 and verify changes before claiming success:
```
pnpm -C packages/core exec tsc --noEmit
pnpm -C apps/web exec tsc --noEmit --pretty false
pnpm run test
```

Also review:
- VISION.md — long-term direction
- ROADMAP.md — now/next/later
- TODO.md — active worklist
- AGENTS.md — contributor and agent rules
- CHANGELOG.md — release history
MIT
