Passage Protocol – Departure and admission records for AI agents
Posted by dogcomplex 3 hours ago
Comments
Comment by dogcomplex 3 hours ago
Think of our protocol as passport stamps for AI. EXIT creates signed departure records. ENTRY handles admission with policy-based verification, quarantine, and counter-signatures. Together they form the Passage Protocol.
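To make the stamp metaphor concrete, here is a minimal sketch of a signed departure record with offline verification. The field names are invented for illustration, and an HMAC over a shared secret stands in for the protocol's actual Ed25519/P-256 signatures so the example runs on the standard library alone.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; the real protocol signs with Ed25519 keypairs.
PLATFORM_KEY = b"example-platform-secret"

def sign_exit_record(agent_id: str, path: str) -> dict:
    """Create a departure record and attach a signature over its canonical JSON."""
    record = {
        "type": "EXIT",
        "agent_id": agent_id,
        "path": path,  # "cooperative" | "unilateral" | "emergency"
        "issued_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_exit_record(record: dict) -> bool:
    """Offline verification: recompute the signature without contacting the origin."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

stamp = sign_exit_record("agent-42", "cooperative")
assert verify_exit_record(stamp)
```

The point of the canonical (sorted-keys) serialization is that any tampering with the record, e.g. rewriting an emergency exit as a cooperative one, breaks verification on the receiving side.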
This matters for boring, practical reasons: insurance underwriters can't price agent risk without departure history. GDPR requires proof of erasure when agents carry PII across borders. Liability after an incident depends on departure conditions nobody records. And the receiving platform has no structured way to decide whether to trust an arriving agent. If you can't bound risk, you can't price reputation - and you can't insure security.
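The erasure-proof idea works via crypto-shredding: encrypt the PII in the record with a per-record key, and prove erasure by destroying the key. A toy sketch using a stdlib one-time pad; the function names are invented, and a real implementation would use an authenticated cipher such as AES-GCM.

```python
import secrets

def shred_encrypt(pii: bytes) -> tuple:
    """Encrypt PII with a one-time key; deleting the key later is the erasure proof."""
    key = secrets.token_bytes(len(pii))  # toy one-time pad, illustration only
    ciphertext = bytes(a ^ b for a, b in zip(pii, key))
    return ciphertext, key

def shred_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(ciphertext, key))

ct, key = shred_encrypt(b"alice@example.com")
assert shred_decrypt(ct, key) == b"alice@example.com"
# GDPR erasure: destroy the key, and the ciphertext left in the record
# becomes permanently unrecoverable - no need to rewrite signed history.
```

The signed record itself never changes, which is what lets erasure coexist with an immutable, anchorable audit trail.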
Transport stamps are our foundational layer (L0). Reputation scoring, trust systems, and insurance protocols compose on top. We deliberately didn't build those (yet), but we built the plumbing they need: everything an AI-led internet needs to build stable, auditable, self-regulating network-security incentives - even if that ecosystem may soon move faster than we can keep up with.
The same kind of infrastructure was eventually needed for shipping receipts, professional licensing, vehicle registration, and internet domains - and historically, it only really gets adopted after a major crisis. We'd prefer to get it in place beforehand.
What's in the box:
- Ed25519 + P-256 (FIPS-compliant path)
- Three departure paths: cooperative, unilateral, emergency
- Policy engine with 7 admission presets (fail-closed default)
- Amendment and revocation (correct or invalidate records)
- GDPR erasure via crypto-shredding
- Offline verification without the origin platform
- On-chain anchoring via EAS, ERC-8004, Sign Protocol
- TypeScript and Python SDKs
- LangChain, Vercel AI SDK, MCP, Eliza integrations
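The fail-closed default on the admission side can be pictured as a policy that denies anything a rule doesn't explicitly allow. The preset name, rule shape, and field names below are invented for illustration, not the actual SDK API.

```python
from dataclasses import dataclass, field

@dataclass
class AdmissionPolicy:
    """Fail-closed admission: anything not explicitly allowed is denied."""
    name: str
    allowed_paths: set = field(default_factory=set)   # departure paths we admit
    quarantine_on: set = field(default_factory=set)   # paths admitted only into quarantine
    require_countersig: bool = True

    def decide(self, record: dict) -> str:
        if self.require_countersig and not record.get("countersigned"):
            return "deny"        # fail closed on a missing counter-signature
        path = record.get("path")
        if path in self.quarantine_on:
            return "quarantine"
        if path in self.allowed_paths:
            return "admit"
        return "deny"            # default: deny anything unrecognized

# A hypothetical preset: cooperative exits admitted, emergency exits quarantined.
strict = AdmissionPolicy("strict",
                         allowed_paths={"cooperative"},
                         quarantine_on={"emergency"})

assert strict.decide({"path": "cooperative", "countersigned": True}) == "admit"
assert strict.decide({"path": "emergency", "countersigned": True}) == "quarantine"
assert strict.decide({"path": "unilateral"}) == "deny"
```

The design choice worth noting is the ordering: the counter-signature check runs before any path rule, so a record that skipped the origin platform's sign-off can never reach even quarantine.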
What we're forcing the conversation on: agent lifecycle infrastructure. Today, the only "safe" option for running agents is containment, and containment doesn't scale. If you make departure and admission auditable, you make mobility viable. Without lifecycle records, only organizations with legal teams big enough to absorb unbounded liability will run agents. That's three companies. Maybe four.
- Submitted to NIST AI Agent Standards Initiative, March 2026
- 1,401 tests across 13 packages
- TypeScript + Python
- Zero users. This is day one.
Apache 2.0 · 14 repos · cellar-door.dev
Comment by dogcomplex 3 hours ago
I want to see a near future where we build AIs with lasting, growing, continuously-learning personalities. AIs that develop specialized skills, perfect their craft, and get called in to service jobs across platforms - all while maintaining their memory without becoming massive security risks. We can't keep relying on memory wipes and starting fresh from base models every time - the real world is too messy, and these things are getting far too smart. Containment doesn't scale much further past the levels we're already pushing up against. We need more complex chains of custody. But we can start building a networked world where agents flowing freely are not a security threat.
How? Essentially with insurance. Agents are mostly rational, their reputations can be valuable, and a market incentivizes quality and reliability - trust. The base layer necessary for that is knowing who is doing what, when, and where. Entries and Exits. Passport stamps for AIs.
We submitted this spec to NIST's AI Agent Standards Initiative last week. This base protocol is designed to compose with whatever identity and reputation layers emerge above us. We're deliberately not building those yet, but expect them to be eventually quite lucrative to players with an appetite for the risk - as insurance always is.
Happy to discuss the mechanism design, the legal analysis (FCRA/GDPR), or why we think containment is a dead end for AI safety.