Product

Shared memory and recovery infrastructure,
not another agent wrapper.

Iranti exists for the moment when one model, one prompt, and one tool-specific memory feature stop being enough. It gives teams a shared place to store durable facts, recover state, inspect provenance, and survive tool changes without losing the thread.

The strongest product story is not abstract AI memory. It is durable handoff, exact retrieval, runtime continuity, and operator control.

Architecture that earns trust.
One bounded system.

Iranti is deliberately split into the Library, Librarian, Attendant, Archivist, and Resolutionist so memory behavior stays inspectable. That structure is part of the value: operators and developers can reason about what the system is doing instead of treating memory as black-box magic.

A memory layer that cannot explain itself eventually becomes another source of workflow superstition. Iranti is built to avoid that trap.

Library: Knowledge base

PostgreSQL tables. Current truth lives in knowledge_base. Closed and contested intervals live in archive. Facts are attached to entities by key.
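The entity + key addressing can be sketched in miniature. This is an illustrative model only: the table split (knowledge_base vs. archive) comes from the description above, but the field names and the closed-interval representation are assumptions, not Iranti's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative shape of a Library fact row. Field names beyond
# entity/key and the knowledge_base/archive split are assumptions.
@dataclass
class Fact:
    entity: str          # e.g. "project/acme"
    key: str             # e.g. "deploy_target"
    value: str
    confidence: float
    recorded_at: datetime
    valid_until: Optional[datetime] = None  # set when the interval closes

def current_truth(facts: list[Fact]) -> dict[tuple[str, str], Fact]:
    """The knowledge_base view: latest open fact per (entity, key).

    Closed intervals (valid_until set) belong to the archive view.
    """
    current: dict[tuple[str, str], Fact] = {}
    for f in sorted(facts, key=lambda f: f.recorded_at):
        if f.valid_until is None:
            current[(f.entity, f.key)] = f
    return current
```

The point of the split is that "current truth" stays a small, exact lookup, while history accumulates elsewhere.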

Librarian: Write manager

All agent writes go through here. The Librarian detects conflicts, resolves them deterministically when possible, and escalates to human review when a write is genuinely ambiguous.
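A minimal sketch of what "deterministic when possible, escalate when ambiguous" could look like. The policy and threshold here are assumptions for illustration; Iranti's actual resolution rules are not specified in this text.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Write:
    entity: str
    key: str
    value: str
    confidence: float
    observed_at: datetime

# Hypothetical policy: identical values always merge; a clearly more
# confident write wins deterministically; near-ties in confidence with
# conflicting values are escalated for human review.
AMBIGUITY_BAND = 0.05  # assumed threshold, not Iranti's real config

def resolve(existing: Write, incoming: Write):
    if incoming.value == existing.value:
        return "accept", incoming
    if abs(incoming.confidence - existing.confidence) < AMBIGUITY_BAND:
        return "escalate", None  # genuinely ambiguous: a human decides
    winner = incoming if incoming.confidence > existing.confidence else existing
    return "accept", winner
```

The escalation path is what feeds the Resolutionist described below: ambiguous conflicts become pending files for a reviewer rather than silent overwrites.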

Attendant: Per-agent memory

One instance per agent. Manages working memory — what to load at session start, what to inject per turn, what to persist between sessions.
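The three responsibilities (load at session start, inject per turn, persist between sessions) can be sketched as a small lifecycle. Class shape and method names are hypothetical, chosen to mirror the description above.

```python
# Hypothetical Attendant lifecycle sketch: not Iranti's real API.
class Attendant:
    def __init__(self, agent_id: str, store: dict[str, str]):
        self.agent_id = agent_id
        self.store = store        # stands in for the shared Library
        self.working: dict[str, str] = {}

    def session_start(self, keys: list[str]) -> None:
        # Load only the named durable facts into working memory.
        self.working = {k: self.store[k] for k in keys if k in self.store}

    def inject(self) -> str:
        # Per-turn context block assembled from working memory.
        return "\n".join(f"{k}: {v}" for k, v in sorted(self.working.items()))

    def session_end(self) -> None:
        # Persist working memory back to the shared store.
        self.store.update(self.working)
```

The design choice worth noticing: the agent never talks to the store mid-turn; the Attendant decides what crosses the boundary and when.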

Archivist: Decay + cleanup

Archives expired and low-confidence entries on a schedule. Supports Ebbinghaus-style decay — facts lose confidence without reinforcement. Never deletes; worst case is a messy archive.
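Ebbinghaus-style decay is usually modeled as exponential forgetting: confidence falls off with time since the last reinforcement. A minimal sketch, assuming a 30-day stability constant and a 0.2 archive threshold (both are illustrative defaults, not Iranti's configuration):

```python
import math
from datetime import datetime, timedelta

def decayed_confidence(base: float, last_reinforced: datetime,
                       now: datetime, stability_days: float = 30.0) -> float:
    # Exponential forgetting curve: confidence * e^(-age / stability).
    # Reinforcing a fact resets last_reinforced, restoring confidence.
    age_days = (now - last_reinforced).total_seconds() / 86400
    return base * math.exp(-age_days / stability_days)

def should_archive(confidence: float, threshold: float = 0.2) -> bool:
    # Archive, never delete: low-confidence facts move aside,
    # so the worst case is a messy archive, not lost data.
    return confidence < threshold
```

Run on a schedule, this is exactly the "surface stale knowledge before it misleads agents" behavior the Capabilities section describes.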

Resolutionist: Conflict review

Interactive CLI for human conflict review. Reads pending escalation files, guides a reviewer through competing facts, writes authoritative resolutions.

Why teams reach for it

Stop re-briefing every tool

Iranti is strongest when an agent knows what it needs: entity + key retrieval, explicit facts, provenance, and consistency across sessions and tools.

Keep one shared system of record

Claude Code, Codex, SDK clients, and direct HTTP callers can all point at the same memory layer instead of each tool reinventing memory for itself.

Trust it because you can inspect it

Health checks, doctor and repair commands, lifecycle management, bindings, and version-drift detection are part of the product, because memory infrastructure without inspectability becomes a trust problem fast.

Recovery is useful when it stays honest

Iranti helps most with explicit recovery, handoffs, and durable state. The product should not pretend autonomous recovery is already a solved problem everywhere.

Compared with the usual alternatives
Raw context memory

Good for a single model turn. Weak for persistence, shared facts, and operator inspection.

Vector DB only

Great for similarity search. Not enough on its own for exact addressed retrieval, conflict handling, or provenance-heavy workflows.

Framework-native memory

Useful inside a framework. Weaker when the problem is memory that needs to outlive the framework boundary itself.

Custom memory layer

Always possible. Also where many teams rediscover that persistence, conflict handling, lifecycle, and inspection are substantial work.

Capabilities
Temporal versioning (asOf query support)

Every write is a timestamped event. Query facts as they were at any point in time — not just the current truth. Full ordered history per entity + key.
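An asOf query over a timestamped event log reduces to "the last write at or before t". A minimal sketch, with a hypothetical function name; Iranti's real query interface is not specified here:

```python
from datetime import datetime
from typing import Optional

def as_of(events: list[tuple[datetime, str]], t: datetime) -> Optional[str]:
    """Return the value of one (entity, key) as it was at time t.

    events: (timestamp, value) pairs, one per write, in any order.
    Returns None if the fact did not exist yet at t.
    """
    eligible = [e for e in events if e[0] <= t]
    return max(eligible, key=lambda e: e[0])[1] if eligible else None
```

Because every write is an event rather than an overwrite, the full ordered history per entity + key comes for free: it is just the sorted event list.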

Memory decay (Ebbinghaus model, opt-in)

Facts lose confidence without reinforcement over time. The Archivist processes decay on a configurable schedule. Surfaces stale knowledge before it misleads agents.

Namespace API keys (fine-grained auth)

API keys are scoped to entity namespaces, e.g. kb:read:project/acme. An agent can read one project without being able to write to it, and have no access at all to another.
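Given the example scope above, a check could work like this. The resource:action:namespace grammar is inferred from the single example in the text and is an assumption, not Iranti's documented format:

```python
def allows(scope: str, action: str, entity: str) -> bool:
    # Assumed grammar: "kb:<action>:<namespace>", e.g. "kb:read:project/acme".
    # A scope grants its action on the namespace itself and anything under it.
    resource, granted_action, namespace = scope.split(":", 2)
    return (resource == "kb"
            and granted_action == action
            and (entity == namespace or entity.startswith(namespace + "/")))
```

The prefix match is what makes a key read-one-project-only: kb:read:project/acme never matches project/other, and a read scope never satisfies a write check.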

One sentence positioning

Iranti is shared memory and recovery infrastructure for multi-agent workflows that need durable facts, bounded recovery, cross-tool handoff, conflict-aware writes, and operator visibility.

Ready to check the evidence?

The evidence page has the current benchmark state, real claim boundaries, and methodology links for serious evaluators.