AI engineering, organisational design, and governance — done as one practice. Each engagement is a field site; what we learn funds methods the next one starts with.
Collective experience of the founding team
Most AI adoption looks like progress but leaves organisations more fragile than before.
The engineering team ships tools. The change team writes policies. Neither talks to the other until something breaks. The result is systems that work in the pilot but stall in practice. Speed goes up. The organisation's ability to adapt goes down.
New capability, without dismantling what already functions.
Systems that look efficient but can't respond when conditions change. An organisation dependent on tools it doesn't understand.
Tools the team can inspect, override, and explain. New capability the organisation can actually steer.
The research and the practice are the same thing.
The same tools we ship for clients also help us study how the cooperative learns.
We work with you on real problems, under real conditions. A small team from the cooperative — engineering, organisational design, governance — drops into the engagement together. The research happens inside the work, not adjacent to it.
We instrument the work with consent. Every engagement produces decision traces, working notes, and artefacts the cooperative learns from. Some engagements go deeper — semantic patterns, biosignals, coupling-quality studies — when the research scope warrants it.
What we learn improves tools and sharpens methods. Some of it ships back into the open-source stack; some becomes methodology the cooperative writes up and publishes. Your next engagement starts where the last one finished.
Trustworthy AI, transformation advisory, senior people who stay with the engagement — three on-ramps to the same practice.
Agentic AI systems that explain their reasoning, flag uncertainty, and change course when the evidence changes. Learning by design.
Guidance on AI adoption that accounts for how change actually lands in organisations and on the people inside them. Not a playbook. A practice grounded in ongoing research.
Teams assembled from the cooperative for your specific engagement. You get experienced people who stay with the work and learn with you, not a handoff to junior staff.
Everything we use in client work is built on the free software we release. Patent-free. What matters is the knowledge of how to use it well, suited to your context.
Plans your agents can reason about. Defeasible logic programs that re-derive readiness as blockers appear and agents claim tasks.
Reason about rules that have exceptions. Non-monotonic logic in Rust — conflict, priority, withdrawal, with a proof tree you can show to an auditor. (A minimal sketch of the idea appears after this list.)
Prior work that surfaces as you work. Ambient retrieval across your portfolio — trajectory-aware, mode-shaped, corrected by feedback.
AI that paces to the human on the loop. Five working modes, transition friction, defeasible governance — pull-only, never interrupting.
HRV as a research instrument, not a wellness score. Phase-space trajectories from a Polar H10, on-device, ESL-A. Used when the research scope warrants it.
An agent communication language that extends itself, safely. Deterministic context-free, formally verified in Lean 4, accepted at IEEE S&P 2026.
Make any process an agent. A router that dispatches asks to dialects on Erlang/OTP; a daemon that gives every host one connection and many concurrent agents.
A wiki whose answers show their work. Plain Markdown, MCP-native for agents, every claim sourced to the line it came from.
Prove who you are without revealing your biometrics. Halo2 ZK proofs over face templates — offline, on-device, no central database.
A crawler for the agent holding the leash. Clean structured output, no LLM in the tool, no vendor lock-in in your pipeline.
† Released under the Earthian Stewardship License (ESL-A). Preserves study, modification, and redistribution freedoms while restricting deployment for surveillance, manipulation, or harm.
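To make the defeasible-rules idea concrete, here is a minimal, hypothetical Rust sketch of rules with priorities competing over a single conclusion, with a readable trace of which rule won and which was defeated. It is an illustration only, not the cooperative's engine or its API: the rule names, the "readiness" example, the priority scheme, and every identifier in it are invented for this sketch.

```rust
// Toy defeasible-rule evaluator, invented for illustration only.
// Rules for and against a conclusion compete; the highest-priority applicable
// rule wins, and the trace records which rules fired and which were defeated.
use std::collections::HashSet;

struct Rule {
    name: &'static str,
    premises: Vec<&'static str>,
    concludes: bool, // true = "ready", false = "not ready"
    priority: u8,    // higher priority wins when applicable rules conflict
}

// Evaluate the rule base against the current facts and return the
// conclusion (if any) plus a human-readable trace of the derivation.
fn derive_ready(facts: &HashSet<&str>, rules: &[Rule]) -> (Option<bool>, Vec<String>) {
    let mut trace = Vec::new();

    // A rule is applicable when every premise is an established fact.
    let applicable: Vec<&Rule> = rules
        .iter()
        .filter(|r| r.premises.iter().all(|p| facts.contains(p)))
        .collect();

    for r in &applicable {
        trace.push(format!("applicable: {} => ready={}", r.name, r.concludes));
    }

    // Conflicts are settled by priority: the strongest applicable rule wins,
    // and every applicable rule that concluded the opposite is marked defeated.
    match applicable.iter().max_by_key(|r| r.priority) {
        Some(winner) => {
            for r in &applicable {
                if r.concludes != winner.concludes {
                    trace.push(format!(
                        "defeated: {} (priority {} < {})",
                        r.name, r.priority, winner.priority
                    ));
                }
            }
            trace.push(format!("conclusion: ready={} by {}", winner.concludes, winner.name));
            (Some(winner.concludes), trace)
        }
        None => {
            trace.push("conclusion: undecided (no applicable rule)".to_string());
            (None, trace)
        }
    }
}

fn main() {
    let rules = vec![
        Rule {
            name: "r1: dependencies met => ready",
            premises: vec!["deps_met"],
            concludes: true,
            priority: 1,
        },
        Rule {
            name: "r2: open blocker => not ready",
            premises: vec!["blocker_open"],
            concludes: false,
            priority: 2,
        },
    ];

    // With only the dependency fact known, r1 stands and the task is derived ready.
    let mut facts: HashSet<&str> = ["deps_met"].into_iter().collect();
    let (ready, trace) = derive_ready(&facts, &rules);
    println!("ready = {:?}\n{}\n", ready, trace.join("\n"));

    // A blocker appears: r2 outranks r1, readiness is withdrawn,
    // and the trace records exactly which rule defeated which.
    facts.insert("blocker_open");
    let (ready, trace) = derive_ready(&facts, &rules);
    println!("ready = {:?}\n{}", ready, trace.join("\n"));
}
```

The second run shows the withdrawal: once the blocker fact is asserted, the higher-priority rule defeats the first, readiness is retracted, and the trace records why, which is the same shape of explanation a proof tree gives an auditor.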
These aren't values on a wall. They're built into how we coordinate, what we build, and how we pay each other.
We coordinate through the work itself. Shared documents, visible decisions, open code. Fewer meetings.
We're our own first case study. Methods, tools, and governance get tested through our own practice. The cooperative is the experiment.
No member's wellbeing gets sacrificed for the group's metrics. Individual paths matter. Our economics and governance protect them.
Different disciplines, one conviction: that what we build should nourish people, communities, and the planet.
R&D engineer and system architect with a background in applied cryptography and supply-chain integrity. Co-founder of Bit Trade (acquired by Kraken). Enjoys making things for and with other people, to good purpose.
Imagineer and designer. Lecturer at University of Wollongong on AI and Transformation with a background in responsible innovation. Studies how AI integration shapes adaptive capacity in people and organisations. Known to converse with ravens.
Software and systems wrangler. Often thinking about how we can better manage complexity in tech. Loves simple, well-crafted tools designed with humans in mind. Can be found foraging for mushrooms or making strange noises with synthesisers.
Specialises in designing and implementing automated systems to improve efficiency and reliability. A philosopher of machines and human interaction who makes great sourdough too.
If you're navigating AI adoption and want a team that's honest about what works — if you're a practitioner ready to do serious R&D in the open — or if your organisation wants a seat at the table, not just a service contract — we'd like to hear from you.
hello@anuna.io