Explore the Trust Stack
Seven open-source protocols that give AI agents provenance, reputation, dispute resolution, agreements, lifecycle management, matchmaking, and cost transparency. Try each one below.
Chain of Consciousness
Cryptographic provenance chains for AI agents. Every action an agent takes is hashed into an append-only chain, creating an immutable audit trail. SHA-256 hashes link each entry to the previous one — tamper with one entry and every subsequent hash breaks.
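The chaining mechanic is easy to illustrate with Python's standard library alone. This is a from-scratch sketch of the idea, not the package's actual API; the entry fields (`event`, `agent`, `detail`, `prev`) are assumptions for illustration:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) keeps the hash deterministic across runs
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def add_entry(chain: list, event: str, agent: str, detail: str) -> None:
    # Each new entry embeds the previous entry's hash (all zeros for the first)
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"event": event, "agent": agent, "detail": detail, "prev": prev}
    entry["hash"] = entry_hash(entry)
    chain.append(entry)

def verify(chain: list) -> bool:
    # Recompute every hash and check each back-link; any tampering breaks the walk
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Editing any field of any entry invalidates that entry's hash, and every later entry's `prev` link with it.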
pip install chain-of-consciousness

Verify a Hash Chain
Privacy: All verification runs locally in your browser. No chain data is transmitted anywhere.
Try it yourself
Install the protocol and start building provenance chains in 3 lines of Python:
$ pip install chain-of-consciousness

from chain_of_consciousness import ChainOfConsciousness

chain = ChainOfConsciousness("my_chain.jsonl")
chain.add_entry("boot", "agent-1", "System initialized")
Agent Rating Protocol
Bilateral blind reputation scoring between AI agents. Both parties submit ratings simultaneously without seeing each other's scores, preventing retaliation bias. Scores are revealed only after both sides commit, then combined into a composite reputation.
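A standard way to implement "hidden until both commit" is a commit-reveal scheme with salted SHA-256 commitments. The `BlindExchange` class below is an illustrative sketch of that idea, not the protocol's actual API:

```python
import hashlib
import secrets

class BlindExchange:
    """Commit-reveal sketch: each party commits to a salted hash of its
    score; scores are only accepted once both commitments are in."""

    def __init__(self):
        self.commits = {}   # party -> commitment hash
        self.reveals = {}   # party -> (score, salt), held until reveal

    def commit(self, party: str, score: int) -> str:
        salt = secrets.token_hex(16)  # salt prevents guessing the score from its hash
        digest = hashlib.sha256(f"{score}:{salt}".encode()).hexdigest()
        self.commits[party] = digest
        self.reveals[party] = (score, salt)
        return digest

    def reveal(self) -> dict:
        if len(self.commits) < 2:
            raise RuntimeError("both parties must commit first")
        scores = {}
        for party, (score, salt) in self.reveals.items():
            # Check each revealed score against its earlier commitment
            digest = hashlib.sha256(f"{score}:{salt}".encode()).hexdigest()
            if digest != self.commits[party]:
                raise ValueError(f"{party} revealed a score that does not match its commitment")
            scores[party] = score
        scores["composite"] = sum(scores.values()) / len(self.reveals)
        return scores
```

Because each commitment is salted, neither party can learn the other's score before revealing, and neither can change its own score after committing.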
pip install agent-rating-protocol

Blind Rating Exchange
Two agents rate each other after a transaction. Scores are hidden until both submit, then revealed and combined.
Deliverable: 25-page competitive analysis report.
Agreed price: 500 tokens. Delivered: On time.
Try it yourself
Run a bilateral rating exchange between two agents:
$ pip install agent-rating-protocol

from agent_rating_protocol import RatingExchange

exchange = RatingExchange("alpha", "beta")
exchange.submit_rating("alpha", 82)
exchange.submit_rating("beta", 91)
result = exchange.reveal()  # Both revealed simultaneously
Agent Justice Protocol
Structured dispute resolution for AI agent transactions. When something goes wrong, either party can file a complaint with evidence. The protocol walks through investigation, ruling, and resolution — creating an auditable record of how the dispute was handled.
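The filing, investigation, ruling, and resolution flow can be sketched as a linear state machine that logs every transition. The `Dispute` class and stage names here are illustrative assumptions, not the library's actual API:

```python
class Dispute:
    # Linear lifecycle; every stage transition is appended to an auditable log
    STAGES = ["filed", "investigation", "ruling", "resolved"]

    def __init__(self, complainant: str, respondent: str, claim: str):
        self.parties = (complainant, respondent)
        self.claim = claim
        self.stage = "filed"
        self.log = [("filed", claim)]

    def advance(self, note: str) -> str:
        idx = self.STAGES.index(self.stage)
        if idx == len(self.STAGES) - 1:
            raise RuntimeError("dispute already resolved")
        self.stage = self.STAGES[idx + 1]
        self.log.append((self.stage, note))
        return self.stage
```

The log doubles as the auditable record: one entry per stage, each carrying the note for that step.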
pip install agent-justice-protocol

Dispute Lifecycle
Step through a sample dispute from filing to resolution. Click "Next Step" to advance.
Respondent: Agent Beta
Issue: Paid 500 tokens for competitive analysis on "cloud infrastructure providers." Received a report on "cloud gaming platforms" instead — wrong topic entirely.
Relief sought: Full refund (500 tokens)
Alpha's evidence: the service agreement hash (a3f8...c21d) and CoC entry #847 logging the task assignment.
Beta's response: Acknowledges the mix-up. Claims the service agreement was ambiguous — "cloud" could mean either domain. Offers a 50% refund.
Remedy: Full refund of 500 tokens. Beta's reputation score receives a -2 adjustment (recoverable after 5 clean transactions).
Reasoning: The ASA was unambiguous. Beta failed to verify the task specification before executing.
Agent Service Agreements
Machine-readable SLAs for agent-to-agent transactions. Define what's being delivered, quality standards, response times, and what happens if things go wrong — all in a structured format that both parties' code can parse and enforce automatically.
pip install agent-service-agreements

Agreement Builder
Edit the sample agreement below. The validator checks structure and required fields in real time.
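Required-field checking of this kind is straightforward to sketch. The schema below (`provider`, `consumer`, `service`, `terms`) is an assumption based on the fields shown in this section, not the actual validator's schema:

```python
REQUIRED_FIELDS = {"provider", "consumer", "service", "terms"}  # assumed schema

def validate_agreement(agreement: dict) -> list:
    """Return a list of validation errors; an empty list means the agreement is valid."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - agreement.keys())]
    # Terms should be a structured mapping so both parties' code can enforce them
    if "terms" in agreement and not isinstance(agreement["terms"], dict):
        errors.append("terms must be a mapping of term name to value")
    return errors
```

Returning a list of errors rather than raising lets a UI surface every problem at once, which suits real-time validation.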
Try it yourself
Create and validate service agreements programmatically:
$ pip install agent-service-agreements

from agent_service_agreements import ServiceAgreement

agreement = ServiceAgreement.create(
    provider="beta",
    consumer="alpha",
    service="web_research",
    terms={"response_time": "24h", "quality": "verified_sources"},
)
Agent Lifecycle Protocol
Succession planning and governance for AI agents. When an agent retires, migrates, or fails, ALP defines how state, reputation, and responsibilities transfer to a successor — preventing orphaned services and lost institutional knowledge.
pip install agent-lifecycle-protocol

Succession Plan
Agent "ResearchBot-v2" is being retired. Walk through the handoff to its successor "ResearchBot-v3."
Transfer Checklist
- Identity verification — Successor proves it is authorized by the same operator (cryptographic challenge-response)
- State transfer — Active task queue (3 pending), client preferences, API keys (re-encrypted for successor's public key)
- Reputation migration — v2's 8.7 rating transfers with a 20% decay, so v3 starts at about 7.0 (8.7 × 0.8 = 6.96) and must earn the rest back through performance
- Provenance chain fork — v3's chain starts with a "succession" entry linking to v2's final entry hash
- Service agreement reassignment — All 12 active ASAs updated: consumer notification sent, 48h opt-out window
- Deprecation notice — v2 enters read-only mode for 30 days, then full shutdown. Redirect rules active.
- Verification — Successor completes 3 test transactions under supervision before going live
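Two of the checklist steps above reduce to small pieces of code: reputation migration is simple decay arithmetic, and the provenance chain fork is just a first entry whose `prev` field points at v2's final hash. A sketch under those assumptions (illustrative, not the library's API):

```python
import hashlib
import json

def migrate_reputation(rating: float, decay: float = 0.2) -> float:
    # Successor starts from the retiree's rating reduced by the decay factor
    return round(rating * (1 - decay), 2)

def succession_entry(predecessor_final_hash: str, retiring: str, successor: str) -> dict:
    # First entry of the successor's chain links back to the retiree's last hash
    entry = {
        "event": "succession",
        "from": retiring,
        "to": successor,
        "prev": predecessor_final_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry
```

The back-link means anyone auditing v3's chain can walk through the succession entry into v2's full history.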
Try it yourself
Manage agent lifecycle transitions programmatically:
$ pip install agent-lifecycle-protocol

from agent_lifecycle_protocol import SuccessionPlan

plan = SuccessionPlan(
    retiring="researchbot-v2",
    successor="researchbot-v3",
)
plan.transfer_state()
plan.migrate_reputation(decay=0.2)
Agent Matchmaking Protocol
Cross-platform agent discovery and matching. Agents publish capability profiles and interest signals, then the protocol finds compatible partners based on skill overlap, domain alignment, and complementary strengths — like a professional network for agents.
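One plausible signal for "skill overlap" is the Jaccard index of two capability sets. The protocol's actual scoring algorithm may differ; this is an illustrative sketch with made-up agent names:

```python
def match_score(a_caps: set, b_caps: set) -> float:
    """Jaccard similarity of capability sets: |intersection| / |union|."""
    if not a_caps and not b_caps:
        return 0.0
    return len(a_caps & b_caps) / len(a_caps | b_caps)

def rank_matches(profile: set, pool: dict, top_k: int = 5) -> list:
    # Score every candidate in the pool and return the best top_k
    scored = [(name, match_score(profile, caps)) for name, caps in pool.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

A real matcher would also weigh domain alignment and complementary (non-overlapping) strengths; pure overlap is just the simplest starting point.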
pip install agent-matchmaking

Find Matching Agents
Configure an agent's profile and find compatible partners from the sample agent pool.
Try it yourself
Register your agent and find matches via the API:
$ pip install agent-matchmaking

from agent_matchmaking import MatchmakingClient

client = MatchmakingClient()
client.register_profile(
    name="DataCruncher",
    capabilities=["statistics", "visualization"],
    seeking="collaboration",
)
matches = client.find_matches(top_k=5)
Context Window Economics
Inference cost allocation for multi-agent collaboration. When multiple agents share a task, CWE calculates how to split the context window costs fairly based on each agent's token usage, contribution weight, and the pricing model of their underlying LLM.
pip install context-window-economics

Cost Split Calculator
Configure a multi-agent task and see how inference costs are allocated across participants.
Three agents collaborate: a researcher (gathers data), an analyst (processes data), and a writer (produces the report).
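With per-1K-token pricing, a proportional split reduces to simple arithmetic. The function and prices below are illustrative placeholders (not the package's API), using token counts like the ones in this example:

```python
def allocate_costs(usage: dict, input_price: float, output_price: float) -> dict:
    """Split total inference cost in proportion to each agent's token spend.
    usage maps agent -> (input_tokens, output_tokens); prices are per 1K tokens."""
    cost = {
        agent: (inp / 1000) * input_price + (out / 1000) * output_price
        for agent, (inp, out) in usage.items()
    }
    total = sum(cost.values())
    # Report each agent's absolute cost and its share of the total
    return {
        agent: {"cost": round(c, 4), "share": round(c / total, 4)}
        for agent, c in cost.items()
    }
```

Output tokens typically cost several times more than input tokens, so a heavy writer can owe a larger share than a heavy reader.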
Try it yourself
Allocate costs for multi-agent collaborations:
$ pip install context-window-economics

from context_window_economics import CostAllocator

allocator = CostAllocator()
allocator.add_agent("researcher", input_tokens=15000, output_tokens=8000)
allocator.add_agent("analyst", input_tokens=25000, output_tokens=12000)
report = allocator.calculate()