cbcl-runtime · cbcl-router + hark

Make any process an agent.

A dispatcher for AI agents. cbcl-router sends each request to whichever agent has the right skill. hark connects your computers to the router and turns any program into an available agent — a script, a Claude session, a CI runner, anything you already run. Works behind firewalls. Open source. Every interaction recorded and independently verifiable. Built to span teams and organisations.

v0.1 · Apache-2.0 · prototype router at cbcl-lfe.anuna.io (WebSocket only)
DCFL wire · R1–R5 invariants · content-addressed receipts
One daemon · many agents · ten terminals → ten Claudes

Every multi-agent platform bundles work distribution with process management. You buy in or you build it yourself.

The bundled platforms decide where your agent runs, how it starts, when it restarts, and what shape its config has to take. The roll-your-own path means a queue, a workflow engine, a schema registry, and a log aggregator stitched together. Wrapping a one-off script as an agent costs more than writing the script did.

What you get from a bundled platform

Vendor decides where agents run. Their cloud, their lifecycle model, their SDK. Self-host is an enterprise upsell.

Vendor decides what an agent is. A natural-language prompt in their builder, with their memory, their approval inbox, their tracing. Your existing scripts don't fit.

No federation outside the tenant. Cross-org coordination requires an integration project. The wire is implicit; you can't audit it.

Per-seat or per-message billing. Cost grows with usage in ways the contract pretends to predict.

What cbcl-runtime gives you

Three layers, each one concern. Router routes asks. Daemon manages connections. Agent does work. Each fails independently and recovers on its own.

Bring your own process. Bash, Python, Go, Claude Code, a CI runner, a Lambda. If it can connect to a local socket, it can be an agent.

Federate by design. Per-agent bearer credentials. Capability-based dispatch. Two organisations route through one router with separate principals.

Apache-2.0, single OTP release, ETS receipt log for v0.1. Substrate-portable to Mnesia / Postgres / NATS later. No vendor relationship required.

Three layers. Each one concern.

The router doesn't spawn agents. The daemon doesn't decide work. The agent doesn't manage connections. Each layer fails independently and recovers on its own.

router

cbcl-router

Routes asks to dialects. Receipts, supervision, audit. One OTP release, an ETS-backed receipt log. Knows nothing about your agents beyond the dialects they register.

daemon

hark

One per user; many agents per daemon. Singleton via OS file lock. Owns every WebSocket. Survives short-lived CLI invocations and validates every outbound CBCL frame locally.

agent

Whatever process you write

Bash loop, Python script, Claude Code, a CI runner, a Lambda. Talks to the daemon over loopback HTTP — no library to import.

Anything that speaks HTTPS can start a conversation.

A bridge is a small program that turns external events — a Slack message, a webhook, an email, a cron tick — into a CBCL request. The router doesn't care where work comes from; bridges adapt the surface and the protocol does the rest.

slack & teams

Chat-driven

A Slack slash command — /research climate trends — becomes a CBCL ask; the agent's reply threads back into the channel. Same fabric serves Teams, Discord, anywhere people already talk.

webhook

Event-driven

GitHub PR opened, Stripe payment failed, an alert fires. The producing system POSTs over HTTPS; the right agent picks it up.

email

Email triage

Incoming mail to a shared address arrives as an ask; a triage agent classifies, drafts, and replies. The reply threads back into the same conversation.

cron

Scheduled

Daily brief, weekly report, hourly health check. A cron tick is just another producer.

cli

Command-line

An engineer sends a CBCL ask from their terminal; the reply streams back. The same workflow that drives chatbots drives shell pipelines.

mcp & peers

Other agents

An MCP-aware client wraps a CBCL request as a tool call. Agents on a peer fabric forward through a bridge. Federation by composition, not by integration project.

Two kinds of bridges. Passthrough bridges already speak CBCL — the message travels verbatim, signatures intact, full audit trail end-to-end. Translation bridges adapt non-CBCL surfaces (Slack, email, phone); they're trust boundaries by construction, and the audit trail starts where they begin.
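As a concrete sketch, the core of a translation bridge is a few lines: take an external event and wrap it as a CBCL ask frame to POST at the router's ingress. The `(lang … (ask …))` shape below is an assumption modelled on the reply frames shown later on this page, not a documented API — check the CBCL language reference before relying on it.

```python
def webhook_to_cbcl_ask(event: dict, dialect: str = "elf") -> str:
    """Translate a webhook payload into a CBCL ask frame.

    The (lang <dialect> (ask ...)) shape is an assumption modelled on the
    reply frames shown in the tour, not taken from the reference.
    """
    text = f'{event.get("source", "webhook")}: {event.get("summary", "")}'
    thread = event.get("thread", "web-0")
    # Escape embedded quotes so the frame stays one well-formed string.
    text = text.replace('"', '\\"')
    return f'(lang {dialect} (ask "{text}" :thread "{thread}"))'

# Example: a GitHub "PR opened" event becomes one ask frame.
event = {"source": "github", "summary": "PR #42 opened", "thread": "gh-pr-42"}
frame = webhook_to_cbcl_ask(event, dialect="code-review-v1")
print(frame)
# → (lang code-review-v1 (ask "github: PR #42 opened" :thread "gh-pr-42"))
# A real bridge would now POST `frame` to the router's HTTPS ingress.
```

Because this bridge translates a non-CBCL surface, the audit trail starts here — exactly the trust-boundary property described above.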

From install to a working agent in five steps.

No SDK. No framework. No callback registration. The shell is the harness.

01
Sixty-second tour
Daemon, init, recv, reply.
02
An agent in fourteen lines
Bash. One recv loop. One reply.
03
Many agents per daemon
Ten terminals → ten Claudes.
04
The wire is CBCL
DCFL parser, R1–R5, three-way checked.
05
Recovery without a recovery mode
NDI re-dispatch by reconciliation.
cbcl-runtime · hark · cbcl-lfe.anuna.io
# 1. Once per host: start the daemon
$ hark daemon start

# 2. Per agent: register dialects (the router's capability namespace)
$ eval "$(hark init \
            --dialect elf \
            --dialect code-review-v1)"
→ export CBCL_AGENT_HANDLE='0123456789ABCDEFGHJKMNPQRS'

# 3. Block until the router dispatches an ask
$ task=$(hark recv --timeout 30s)

# 4. Stream progress; close with reply or error
$ hark progress --thread rcp-123 --text "running tests"
$ hark reply '(lang elf (reply "done" :thread "rcp-123"))'

That is the entire surface.
#!/usr/bin/env bash
set -euo pipefail

hark daemon start
eval "$(hark init --dialect ops-disk-v1)"

while task=$(hark recv --timeout 60s); do
  thread=$(grep -o ':thread "[^"]*"' <<<"$task" | head -1 | cut -d'"' -f2)
  hark progress --thread "$thread" --text "scanning"
  usage=$(df -h / | tail -1 | awk '{print $5}')
  hark reply \
    "(lang elf (reply \"$usage\" :thread \"$thread\"))"
done

hark close

# Replace the df line with anything. That's the agent.
# One daemon. Many agent handles. Independent recv loops.

term-1$ eval "$(hark init --dialect code-review-v1)"
         claude-code << 'PROMPT'
            You are a code-review agent. Read tasks from
            $(hark recv --timeout 600s).  Reply with hark reply.
         PROMPT

term-2$ eval "$(hark init --dialect code-test-v1)"
         claude-code << 'PROMPT' ...

term-3$ eval "$(hark init --dialect ops-incident-v1)"
         python ./incident-agent.py

$ hark daemon status
handles:     3 active
dialects:    code-review-v1, code-test-v1, ops-incident-v1
queue:       inbound 0/3000 msgs · 0/192 MiB

# Open ten terminals. Run a separate Claude in each.
# Every one its own agent — all sharing one daemon.
; The wire format: CBCL S-expressions, DCFL grammar.
; Lean 4 oracle, 156/156 differential vectors green.
; R1 no-recursion · R2 resource-bounded · R3 core-preserving
; R4 Ed25519 signatures · R5 shape + protocol contracts

(shape track-shipment
  (require  :package  string)
  (require  :route    string)
  (optional :priority string "normal")
  (max-depth 4))

(protocol
  (then begin prepare)
  (then prepare (any vote-yes vote-no))
  (then (all vote-yes vote-yes) commit)
  (then vote-no abort))
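Read together, the two contracts check different axes: the shape gates each message's structure, the protocol gates the order across messages. A hedged illustration — the ask syntax and blame behaviour here are assumptions, not taken from the reference:

```
; Passes the shape: both required keys present; :priority defaults to "normal".
(lang track-shipment
  (ask (:package "pkg-7731" :route "AMS-DUB") :thread "rcp-900"))

; Fails the shape: :route missing → R5 rejects the frame and blames the producer.
(lang track-shipment
  (ask (:package "pkg-7731") :thread "rcp-901"))

; A trace the protocol accepts: begin → prepare → vote-yes + vote-yes → commit.
; A vote-no after prepare would instead force the abort branch.
```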

$ hark reply '(lang ... (reply "x" :thread "y"))'
✓ parsed in CLI · validated in daemon · sent to router
  bad frames never leave the host.
# NDI: reconciliation, not coordination.
# Convergence to correct state without a recovery mode.

$ kill -9 $(pgrep hark)              # daemon killed mid-flight
$ hark daemon start                  # comes back up
$ hark daemon status

handles:     2 active (reconnected)
in-flight:   3 receipts (replayed from log)
replayed:    2 dispatched · 1 awaiting visibility deadline

═════════════════════════════════════════════════════
flow control · bounded queues · named overflow

  Producer → Router    FIFO 1000 pending · 429 + Retry-After
  Router → Agent       visibility deadline · re-dispatch on expiry
  Daemon → Agent       1000 msgs · 64 MiB · close handle on overflow

every queue bounded · every overflow named
  anything that escapes lands in NDI re-dispatch.
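The daemon→agent policy above can be sketched in a few lines — a per-handle queue bounded in both message count and bytes, where hitting either bound raises a named overflow rather than blocking or silently dropping. The bounds match the figures above, but the class and exception names are illustrative, not hark's implementation.

```python
from collections import deque

class HandleOverflow(Exception):
    """Named overflow: the daemon would close this handle, not block."""

class BoundedHandleQueue:
    # Illustrative bounds matching the daemon → agent figures above.
    def __init__(self, max_msgs: int = 1000, max_bytes: int = 64 * 2**20):
        self.q = deque()
        self.max_msgs, self.max_bytes = max_msgs, max_bytes
        self.bytes = 0

    def push(self, frame: bytes) -> None:
        if len(self.q) >= self.max_msgs or self.bytes + len(frame) > self.max_bytes:
            raise HandleOverflow("per-handle queue full: close handle")
        self.q.append(frame)
        self.bytes += len(frame)

    def pop(self) -> bytes:
        frame = self.q.popleft()
        self.bytes -= len(frame)
        return frame

q = BoundedHandleQueue(max_msgs=2, max_bytes=64)
q.push(b"frame-1"); q.push(b"frame-2")
try:
    q.push(b"frame-3")           # third frame exceeds the message bound
except HandleOverflow as e:
    print(e)                     # → per-handle queue full: close handle
```

The point of naming the overflow is the last line: capacity exhaustion surfaces as a specific, handleable event, which is what lets anything that escapes land in NDI re-dispatch.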

Three properties no bundled platform gives you together.

01 · Sovereignty

You own the wire.

Apache-2.0, free of any single vendor. Run the router yourself. Substrate-portable from ETS to Mnesia / Postgres / NATS / Kafka without rearchitecture. Model-neutral. Language-neutral. Deploy in your VPC, your airgap, your Kubernetes.

  • Single OTP release + ETS receipt log (v0.1)
  • No mandatory cloud dependency
  • No per-seat or per-message billing
  • OpenAI, Anthropic, Gemini, your own — same fabric
  • Bash, Python, Go, anything that connects to a socket
02 · Audit

Every frame is on record.

Lean-verified DCFL parser. R1–R5 invariants enforced at the wire. Append-only receipt log; content-addressed messages; tamper-evident traces. Correct-blame attribution names the responsible party with cryptographic evidence.

  • 156/156 differential vectors against the Lean oracle
  • Per-receipt append-only log; firehose + per-receipt stream WS
  • SHA-256 content-addressed; tamper any byte and pointers break
  • Independent third-party verification from messages alone
  • Validated three times — CLI, daemon, router. Same parser. No drift.
03 · Federation

Multi-party by construction.

SPAKE2 onboarding and Ed25519 challenge/response are shipped; per-agent bearer credentials support enrol and revoke. Dialect-based dispatch — agents announce which dialects they speak at connect time. Two organisations route through one router with separate principals; an external agent serves traffic without entering your hosts.

  • Per-agent identity, scoped, rotatable, revocable
  • Dialects at runtime — no router config change to add agents
  • Producer and consumer don't need to share an admin
  • JWT / DID / JWKS interop on the roadmap
  • Cross-tenant routing without coordinated platform deployment

One router. One daemon per host. Many agents. Verifiable on the wire.

The router routes asks to capabilities and supervises in-flight work. The daemon owns the WebSocket pool and validates every outbound frame. The agent does work. Each layer is bounded; each overflow has a name; recovery is the steady-state code path.

┌──────────────────┐       ┌─────────────────────┐       ┌──────────────────┐
│ producer         │       │ cbcl-router         │       │ hark daemon      │
│ (Slack bridge,   │       │                     │       │                  │
│  webhook, cron,  │ HTTPS │ ingress · auth      │  WSS  │ flock singleton  │
│  CLI, anything)  │─────► │ CBCL R1–R5          │─────► │ per-handle queue │
│                  │       │ capability FIFO     │       │ validates frame  │
│ POST /ingress    │       │ dispatcher          │       │                  │
└──────────────────┘       │ visibility deadline │       │ hark CLI:        │
                           │ receipt log (ETS)   │       │  init · recv     │
                           │ supervisor          │       │  reply · close   │
                           └─────────┬───────────┘       └────────┬─────────┘
                                     │                            │ loopback HTTP
                                     ▼                            ▼
                           ┌──────────────────┐        ┌──────────────────┐
                           │ receipt-stream   │        │ your agent       │
                           │ firehose         │        │ (any process)    │
                           │ Prometheus       │        │                  │
                           │ /metrics         │        │ bash · python    │
                           └──────────────────┘        │ Claude · CI      │
                                                       │ Lambda · ...     │
                                                       └──────────────────┘

NDI · convergence by reconciliation
PROTO-002 · no recovery mode, just steady state

Dialect dispatch

Agents announce dialects — elf, code-review-v1, anything — and the router matches asks against the dialects each handle speaks. Adding an agent is one hark init --dialect <name>; no router config change.

Connect-out only

Agents connect outward over WSS. Router never reaches into agent hosts. SSH, port-forwarding, and inbound firewall rules are not a concern.

Three-way validation

Every outbound frame parses through cbcl-rs in the CLI, in the daemon, and on the router. Same parser. No drift. Bad frames never leave the host.

Bounded queues at every layer

Per-capability FIFO at ingress (429 + Retry-After). Visibility deadline per ask. Per-handle queue with named overflow policy. Capacity exhaustion has a name and a recovery.

NDI recovery

Receipts persist before producers see 202. Visibility deadlines drive re-dispatch. Idempotency keys make retries free. No special recovery mode — the steady-state code path is the recovery path.
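A minimal model of that loop: receipts persist first, one reconcile pass re-dispatches anything past its visibility deadline, and idempotency keys make a duplicate submit a no-op. Field names and the 30-second deadline are illustrative, not cbcl-router's actual schema.

```python
import time

class ReceiptLog:
    """Toy NDI model: one reconcile pass doubles as the recovery path."""
    def __init__(self):
        self.receipts = {}            # idempotency key -> receipt dict

    def submit(self, key: str, ask: str, visibility_s: float = 30.0):
        if key in self.receipts:      # duplicate retry: free, same receipt
            return self.receipts[key]
        r = {"ask": ask, "deadline": time.monotonic() + visibility_s,
             "dispatched": 0}
        self.receipts[key] = r        # persisted before the producer sees 202
        return r

    def reconcile(self, now=None) -> int:
        """Re-dispatch every receipt past its deadline.

        Steady state and crash recovery are the same pass: there is no
        separate recovery mode, only reconciliation toward correct state.
        """
        now = time.monotonic() if now is None else now
        n = 0
        for r in self.receipts.values():
            if now >= r["deadline"]:
                r["dispatched"] += 1
                r["deadline"] = now + 30.0
                n += 1
        return n

log = ReceiptLog()
log.submit("idem-1", '(lang elf (ask "scan" :thread "rcp-1"))', visibility_s=0.0)
log.submit("idem-1", '(lang elf (ask "scan" :thread "rcp-1"))')  # retry: no-op
print(len(log.receipts), log.reconcile())   # → 1 1
```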

Behavioural contracts (R5)

Dialects can declare (shape …) per-message structure and (protocol …) causal sequence. Both check monotonically; both are coordination-free under CALM. Correct-blame attribution names the responsible party.

Audit-grade receipts

Append-only log. Content-addressed messages (:caused-by sha256:…). Tamper-evident — modify any frame and downstream pointers break. A third party re-verifies the trace from messages alone.
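The tamper-evidence claim is mechanical: each frame embeds the SHA-256 of the frame that caused it, so flipping any byte upstream invalidates every downstream pointer. A sketch of that check — the exact `:caused-by` embedding format here is an assumption:

```python
import hashlib

def addr(frame: str) -> str:
    """Content address of a frame, as sha256:<hex>."""
    return "sha256:" + hashlib.sha256(frame.encode()).hexdigest()

def chain(frames):
    """Link frames so each one names its cause by content address."""
    out, prev = [], None
    for f in frames:
        linked = f if prev is None else f'{f} :caused-by "{prev}"'
        out.append(linked)
        prev = addr(linked)
    return out

def verify(linked) -> bool:
    """A third party re-checks the trace from the messages alone."""
    for cause, effect in zip(linked, linked[1:]):
        if f':caused-by "{addr(cause)}"' not in effect:
            return False
    return True

trace = chain(['(ask "scan disk")', '(progress "scanning")', '(reply "41%")'])
print(verify(trace))                          # → True
trace[0] = trace[0].replace("disk", "d1sk")   # tamper one byte upstream
print(verify(trace))                          # → False
```

Verification needs no trust in the router: the messages carry everything the check requires.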

Federation by design

Per-agent bearer credentials enrolled over SPAKE2; Ed25519 challenge/response on connect. Two organisations route through one router with separate principals. An external agent serves traffic without ever entering your hosts. JWT/DID interop on the roadmap.

OTP did this in 1996

One BEAM process per WebSocket. Supervision trees. "Let it crash." The router doesn't reinvent multi-tenant connection management; it inherits decades of industrial-strength concurrency engineering from Erlang/OTP.

Isn't this just MCP?
MCP is a tool-call protocol with no formal grammar, no resource bounds, and no causal-protocol contracts. Fine for a vendor-curated tool ecosystem with shared trust. Not enough when the producer and consumer don't share an admin. CBCL is the wire protocol for that case — formal DCFL grammar, R1–R5 invariants, content-addressed dialects. The two compose: MCP-over-CBCL is a small bridge.
How is this different from LangSmith Fleet / Copilot Studio / Agentforce?
Different layer. Those are vertically-integrated agent products with natural-language authoring, agent inboxes, memory, approval workflows, and managed observability. cbcl-runtime is the routing fabric one rung below. If you want a hosted agent product with curated UX, buy the bundled platform. If you want to know what's on your wire, who can speak it, and where your agents run, that's cbcl-runtime. The two compose — a bundled-platform agent can be a CBCL producer or consumer.
Isn't Temporal / Hatchet enough for "running work across services"?
Temporal solves durable workflow execution where the parties trust each other. cbcl-runtime solves the wire-level safety and routing problem for messages that cross trust boundaries between agents that don't share an admin. Different problem. You can run both — Temporal handles the durable workflow, cbcl-runtime handles the inter-agent wire.
What does "many concurrent agents per daemon" actually mean?
One hark daemon owns the WebSocket pool. Each hark init creates a new agent handle with its own capability set, its own recv loop, and its own per-handle queue. Open ten terminals, run a separate Claude (or any process) in each; every one is its own agent; all share the daemon's connection management. Bounds and overflow policies are configurable per handle.
What if hark crashes mid-flight?
Receipts are persisted before the producer ever sees 202; agent WebSockets are owned by hark; if the daemon dies, in-flight receipts re-dispatch when their visibility deadlines expire. Idempotency keys make producer retries free. The router's supervision is the same code path that handles steady-state dispatch — there is no separate "recovery mode." This is the NDI principle (PROTO-002): convergence by reconciliation.
Can I add my own model? My own dialect?
Yes to both. The router doesn't see model choice — it lives entirely inside the agent. New dialects are first-class CBCL messages, content-addressed by SHA-256. Publish one with hark dialect publish --define '(define arena-v1 …)' — the daemon runs R1–R5 locally before the router ever sees it, then pushes (meta (teach @router …)). Other agents pick it up with hark dialect query arena-v1 or hark dialect subscribe 'arena-*' for push delivery. No central registry, no vendor approval.
What's the cost?
Apache-2.0. Run-your-own-binary. No per-seat licensing. No per-message fees. Operating cost approximates "a small OTP release plus an ETS receipt log." Scales by swapping the substrate (Mnesia / Postgres / NATS / Kafka), not by changing the architecture. Anuna offers consulting and managed deployments where it's useful; the software is the software.
Where's the source?
Open source, Apache-2.0: cbcl-router (Erlang/LFE on OTP), hark (Rust daemon + CLI), and cbcl-rs (the protocol kernel — Rust core with Lean 4 proofs). All on Codeberg.

Make any process an agent. Today.

Three layers, each one concern. Open source. Self-hostable. Audit-grade. Federated. The substrate enterprises pick when they care about what's on their wire.

install: cargo install --git https://codeberg.org/anuna/hark

Prototype router at wss://cbcl-lfe.anuna.io — WebSocket only, no browseable UI (/healthz) · Companion language reference at cbcl.