Most AI agent frameworks treat the LLM output as ground truth. It comes back, you act on it.
That's the problem.
Axiom is a new Python runtime that changes the contract between agent and LLM. Every belief your agent forms carries three fields (sketched after this list):
- A confidence score (0.0–1.0) — how sure is the agent?
- A provenance chain — where did this belief come from?
- An `is_actionable` flag — should the agent act on this?
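Concretely, a belief is a small structured record rather than raw text. Here's a minimal sketch of the shape (field names come from the examples below; the dataclass container is an assumption, not Axiom's actual class):

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    content: str       # what the agent believes
    confidence: float  # 0.0-1.0, declared by the agent itself
    provenance: list[str] = field(default_factory=list)  # e.g. ["reasoning:risk_analysis"]
    is_actionable: bool = False  # True only if constraints (e.g. min_confidence) pass
```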
And if you're running multiple agents, Axiom lets them verify each other without a central orchestrator.
## The problem with current frameworks
LangChain, CrewAI, AutoGen — they all give you tool use and orchestration. Some give you memory. None of them ask: how confident is this agent in what it just said?
This matters because:
- Agents hallucinate with full confidence
- In multi-agent systems, you're trusting Agent B's output blindly
- There's no audit trail of why an agent did something
## What Axiom gives you
```python
from axiom import AxiomAgent, BuiltinConstraints
import anthropic

client = anthropic.Anthropic()

def my_llm(prompt: str) -> str:
    return client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text

agent = AxiomAgent(
    name="researcher-01",
    llm=my_llm,
    constraints=[BuiltinConstraints.min_confidence(0.6)],
)

belief = agent.think("What are the risks of deploying untested ML models?")
print(belief.confidence)     # 0.82
print(belief.provenance_str) # "reasoning:risk_analysis, memory:prior_context"
print(belief.is_actionable)  # True
```
The agent is prompted to be epistemically honest — it must declare its confidence and cite its sources. You get that back as structured data, not raw text.
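Because the belief comes back as data, you can gate downstream behavior on it directly. A short sketch (the `publish_summary` helper is hypothetical, and `is_actionable` is assumed to reflect the `min_confidence(0.6)` constraint set above):

```python
belief = agent.think("Summarize the top deployment risk for the changelog.")

if belief.is_actionable:
    # Confidence cleared the 0.6 floor, so acting is allowed.
    publish_summary(belief.content)  # hypothetical downstream action
else:
    # Below the floor: surface it for human review instead of acting.
    print(f"Held for review (confidence={belief.confidence}): {belief.content}")
```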
## The novel part: agent-to-agent trust
The multi-agent frameworks above all route through a central orchestrator you just have to trust: Agent A outputs → orchestrator → Agent B acts. No verification step.
Axiom lets Agent A independently verify Agent B before acting on its output:
```python
researcher = AxiomAgent("researcher-01", llm=my_llm)
validator = AxiomAgent("validator-01", llm=my_llm)

# Researcher snapshots its cryptographic identity
belief = researcher.think("Current state of quantum error correction?")
snap = researcher.snapshot()

# Validator verifies researcher — no central authority needed
trust = validator.verify_peer("researcher-01", peer_snapshot=snap)
print(trust.verdict)     # "trusted"
print(trust.trust_score) # 0.91

# Action is gated on both confidence AND peer trust
result = validator.act(
    "publish",
    publish_fn,
    belief.content,
    context={"confidence": belief.confidence, "peer_trust": trust.trust_score},
)
```
The trust score is derived from the peer's identity hash and its drift from baseline: how much has this agent changed since you last verified it?
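The exact formula isn't spelled out here, but conceptually it works something like this: an identity mismatch zeroes the score, and drift discounts it. A purely hypothetical sketch, with every snapshot field invented for illustration:

```python
import hashlib

def sketch_trust_score(baseline: dict, current: dict) -> float:
    """Hypothetical illustration, not Axiom's actual implementation."""
    def identity_hash(snap: dict) -> str:
        return hashlib.sha256(snap["identity"].encode()).hexdigest()

    # A different identity hash means a different (or tampered-with) agent.
    if identity_hash(baseline) != identity_hash(current):
        return 0.0

    # Drift: fraction of state fields that changed since the baseline snapshot.
    keys = set(baseline["state"]) | set(current["state"])
    changed = sum(baseline["state"].get(k) != current["state"].get(k) for k in keys)
    drift = changed / max(len(keys), 1)
    return 1.0 - drift
```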
## Under the hood
Axiom is a synthesis of four prior projects:
- Cathedral — persistent identity + drift detection
- AgentGuard — runtime safety constraints + audit chain
- Veritas — epistemic confidence engine
- Aether — cryptographic succession protocol for identity handoffs
Unified into a single runtime you wrap around any LLM.
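Note that the `llm` argument in the examples above is just a prompt-in, text-out callable, so "any LLM" means anything you can wrap in a function. For instance, swapping Anthropic for the OpenAI SDK might look like this (model name is illustrative):

```python
from openai import OpenAI

openai_client = OpenAI()

def my_llm(prompt: str) -> str:
    # Same str -> str contract the AxiomAgent examples above expect.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```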
## Install
```bash
git clone https://github.com/AILIFE1/axiom
cd axiom && pip install -e .
```
PyPI package coming soon.
In our Cathedral benchmark, agents with persistent identity drifted 10× less than stateless ones. Axiom adds the epistemic layer on top of that identity layer, so you get both stable identity and calibrated confidence.
What trust scenario would you add first — consensus (N agents must agree before action fires), or a gossip protocol for sharing verified beliefs?
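To make the first option concrete, a consensus gate could be assembled from the `verify_peer` API shown above. This is purely a sketch of a possible design, not a shipped feature:

```python
# Hypothetical consensus gate: a quorum of validators must independently
# trust the researcher before anything is published.
validators = [AxiomAgent(f"validator-{i:02d}", llm=my_llm) for i in range(3)]
snap = researcher.snapshot()

trusted = sum(
    v.verify_peer("researcher-01", peer_snapshot=snap).verdict == "trusted"
    for v in validators
)

if trusted >= 2:  # 2-of-3 quorum
    publish_fn(belief.content)
```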
GitHub: https://github.com/AILIFE1/axiom
Support: https://ko-fi.com/cathedralai
## Related
- Cathedral: Persistent Memory for AI Agents — the identity + drift layer Axiom builds on
- Veritas: epistemic confidence for AI agents — the belief confidence engine inside Axiom
- AgentGuard: runtime safety layer — the Guardian constraint system inside Axiom
- Identity drift benchmark across 5 frameworks — why stable identity matters