Blog

Research notes

Benchmark methodology, engineering decisions, and observations on building persistent memory infrastructure for multi-agent AI systems.

Claude Code · Codex · cross-tool

How to share context between Claude Code and Codex (without re-briefing)

Every time you switch from Claude Code to Codex, the new tool starts blank. Here's the problem, the workarounds developers currently use, and the one setup that actually solves it.

MCP · memory · infrastructure

Iranti: a persistent memory MCP server for AI agents

Iranti ships a stdio MCP server that any MCP-compatible client can connect to. Connect Claude Code, GitHub Copilot, Codex, or your own agent and get structured, persistent, cross-session memory with exact retrieval, conflict handling, and operator visibility.
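Registering a stdio MCP server follows the same pattern in any MCP-compatible client. As a minimal sketch of what that looks like in a client's MCP config file (the server name `iranti` and the command `iranti-mcp` are placeholders, not Iranti's documented invocation — check the Iranti docs for the real command):

```json
{
  "mcpServers": {
    "iranti": {
      "command": "iranti-mcp",
      "args": []
    }
  }
}
```

Because the transport is plain JSON-RPC over stdin/stdout, the same command works wherever the client lets you declare a stdio server, which is what makes the cross-tool setup possible.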

comparison · benchmarks · Mem0

Iranti vs Mem0: what the benchmarks actually show

A direct comparison across four benchmarks: recall accuracy, pool efficiency, conflict resolution, and cross-session persistence. Where each system wins and where the architectural tradeoffs land.

Claude Code · MCP · setup

How to give Claude Code persistent memory across sessions

Claude Code starts every session with no memory of previous work. Iranti adds a persistent MCP memory layer in one command. How it works, what it stores, and what changes in practice.

research · use cases · workflows

Your AI research assistant shouldn't lose its memory every session

Three research workflows where persistent agent memory removes the most frustrating part of working with AI: literature review that builds across sessions, hypothesis tracking that survives experiment cycles, and manuscript writing with real continuity.

benchmarks · token efficiency · B14

Why Iranti uses 37% fewer tokens in long coding sessions

We measured cumulative input-token usage over a 15-turn coding session with and without Iranti. By turn 15, the Iranti arm uses 37% fewer tokens. Here's exactly how we measured it, and why the gap widens over time.