How to share context between Claude Code and Codex (without re-briefing)
You've been in Claude Code for two hours. You've worked through the architecture, debugged three layers of state management, made decisions about the database schema and API contract. Then you switch to Codex — and it starts blank. This is the cross-tool context problem, and there's now a setup that actually solves it.
Why this happens
Claude Code, Codex, Cursor, and every other AI coding tool maintain memory in isolation. Each tool has its own session history, its own understanding of your project, its own accumulated context — and none of that crosses tool boundaries.
This isn't an oversight. It's an architectural reality: these tools are stateless by default, and the session context that builds up during a coding session lives in the conversation thread, not in a shared project store. When the session ends, the context doesn't persist anywhere the next tool can read.
Claude Code's Auto Memory and CLAUDE.md files help within Claude, but they don't cross to Codex. Codex has its own memory pipeline — JSONL rollout files, SQLite-stored summaries — but it doesn't read from Claude. There is no shared layer. When you switch tools, you're the context bridge. Every time.
How developers are solving this today
The workarounds developers have landed on fall into three categories:
Write a handoff document before switching tools. Capture the current state, recent decisions, open tasks. Load it into the new tool at the start of the session. This works — it's also fully manual. You have to remember to write it, keep it accurate, and load it every time. Most developers forget. The ones who don't describe maintaining it as a part-time job.
Compress everything important into CLAUDE.md and keep it ruthlessly up to date. Claude reads it at session start. But CLAUDE.md is Claude-only, has a line limit, gets stale, and doesn't auto-update. It does nothing for Codex.
Just explain things again. Most developers end up here by default. It costs 15–30 minutes per tool switch, compounds across a workday, and is the thing people are describing when they say AI tools are “not quite there yet.”
All three are coping mechanisms. None of them solve the problem — they just manage it.
What actually works: a shared project memory layer
The underlying issue is that there's no memory layer that lives at the project level rather than the tool level. Something that Claude can write to, Codex can read from, and that persists across sessions, tool switches, and context window resets.
This is what Iranti is built to be. Iranti is an MCP server that connects to your project repository and gives any MCP-compatible tool — Claude Code, Codex, Cursor, Windsurf — access to the same shared memory store. It runs on Postgres locally (your data never leaves your machine), and it exposes a simple protocol: tools write facts when they learn something, and inject relevant facts when they start a new session.
When Claude Code figures out something important — an architectural decision, a debugging insight, the current task state — it writes it to Iranti. When you switch to Codex and start a new session, Iranti injects what's relevant. Codex starts informed, not blank.
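The write/inject cycle is simple enough to sketch in code. The types and method names below are illustrative assumptions for a local in-memory model, not Iranti's actual MCP tool schema:

```typescript
// Hypothetical sketch of the write/inject cycle. "Fact", "write", and
// "inject" are assumed names, not Iranti's real API.
type Fact = { project: string; kind: "decision" | "state"; text: string };

class MemoryStore {
  private facts: Fact[] = [];

  // A tool calls this when it learns something durable mid-session.
  write(fact: Fact): void {
    this.facts.push(fact);
  }

  // A new session calls this to pull in relevant context for a project.
  inject(project: string): string[] {
    return this.facts.filter(f => f.project === project).map(f => f.text);
  }
}

// Claude Code writes during its session...
const store = new MemoryStore();
store.write({ project: "shop", kind: "decision", text: "Use UUIDv7 primary keys" });

// ...and a later Codex session reads from the same store.
console.log(store.inject("shop")); // ["Use UUIDv7 primary keys"]
```

The point of the shape, not the names: writes go to a project-scoped store outside any one tool, and reads are keyed by project rather than by session.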
Setting it up
Requirements: Node.js, Postgres with pgvector. If you have those, setup takes about 15 minutes.
Setup is the same in both tools: register Iranti as an MCP server in each tool's MCP configuration.
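As a sketch, a Codex registration in `~/.codex/config.toml` might look like the following. The server command and package name (`iranti-mcp`) are assumptions, not confirmed identifiers; check Iranti's docs for the real ones:

```toml
# Hypothetical registration; "iranti-mcp" is an assumed package name.
[mcp_servers.iranti]
command = "npx"
args = ["-y", "iranti-mcp"]

# Claude Code equivalent (run once in your project):
#   claude mcp add iranti -- npx -y iranti-mcp
```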
After that, both tools connect to the same Iranti instance. Memory written in a Claude session is available to your next Codex session automatically, and vice versa. The first time you switch tools after setup, you'll notice: Codex knows what Claude figured out. You don't re-brief. You just keep building.
What gets remembered
Iranti doesn't capture everything — it captures what matters. The protocol distinguishes between:
- Architectural decisions, constraints, API contracts, environment details: durable, trusted across sessions.
- Current task, recent progress, open questions, what to do next.
- Snapshots of session state at meaningful milestones, designed specifically for recovery if a session crashes or a context window fills.
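The three categories above could be modeled as a discriminated union. The tier and field names here are assumptions for illustration, not Iranti's actual schema:

```typescript
// Illustrative model of the three memory tiers; names are assumptions.
type Memory =
  | { tier: "fact"; text: string }                        // durable decisions, constraints
  | { tier: "working"; task: string; next: string[] }     // current task state
  | { tier: "checkpoint"; snapshot: string; at: string }; // milestone snapshot

// Durable facts are trusted across sessions; working state is expected to churn.
const durable = (m: Memory): boolean => m.tier === "fact";
```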
Memories are written explicitly, by the developer or by the AI tool acting on your behalf. This is different from systems that auto-extract memories via an LLM call on every conversation; those are probabilistic and hard to correct when they get something wrong. Iranti's memory is explicit, traceable, and auditable. You can see exactly what your AI tools believe about your project, and you can correct it if it's wrong.
Session recovery as a side effect
The same architecture that enables cross-tool memory also gives you structured session recovery. If a session crashes mid-task — context window overflow, connectivity drop, whatever — Iranti has the last checkpoint. When you restart (in any tool), you can resume from where you left off rather than reconstructing context from scratch. The checkpoint includes: current task, what was just completed, what's next, and open risks.
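A checkpoint with those fields might be shaped like this. The interface is a sketch under assumed field names, not Iranti's actual checkpoint format:

```typescript
// Hypothetical checkpoint shape; field names are illustrative.
interface Checkpoint {
  currentTask: string;
  justCompleted: string[];
  next: string[];
  openRisks: string[];
  savedAt: string; // ISO timestamp of the milestone
}

// Turn the last checkpoint into a resume briefing for whichever tool restarts.
function resumePrompt(c: Checkpoint): string {
  return [
    `Resuming: ${c.currentTask}`,
    `Done: ${c.justCompleted.join("; ")}`,
    `Next: ${c.next.join("; ")}`,
    `Risks: ${c.openRisks.join("; ")}`,
  ].join("\n");
}
```

Because the checkpoint lives at the project level, any tool can consume it; the restarting session does not have to be the one that wrote it.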
Developers who've hit the “4-hour session lost to a crash” problem describe this as the thing they didn't know they needed until they had it.
Comparing the options
If you're evaluating options for Claude Code memory or cross-tool context sharing, here's the honest landscape:
Claude Native Auto Memory: great for single-tool Claude users, with zero setup and nothing to install. The limit is that it's Claude-only; the moment you use Codex or anything else, it doesn't help.
Mem0 MCP: an excellent general-purpose memory layer, well suited to building AI products that serve multiple users. It's aimed at application developers adding memory to their products, not at a developer's own project workflow.
Shodh-Memory: lightweight, fully offline, one-command install. Excellent for single-tool solo use, but no cross-tool support and no session recovery.
Engram / SNARC: a strong salience-gated memory model, but single-developer only, with no cross-tool handoff.
Iranti: the only option here with automatic cross-tool handoff. If you use more than one AI coding tool on the same project, it's the one that connects them. The trade-off: it requires Postgres.
Is this worth the setup?
If you use one AI coding tool exclusively and never switch, no. Claude's native memory handles the single-tool case adequately.
If you use Claude Code and Codex on the same project — even occasionally — the re-briefing cost adds up fast. One tool switch costs 15–20 minutes. If you switch tools twice a day, five days a week, that's 2.5–3.5 hours a week. Over a month, that's a full workday spent re-explaining context your AI tools should already have.
Iranti's 15-minute setup pays for itself in the first tool switch.
Iranti is open source (AGPL-3.0) and free to self-host. Your project data stays on your machine.