Your LLM session died. ctx pack, ctx prime, paste — keep going in any other tool in 30 seconds.
You're deep in a coding session with an LLM. Then one of these happens — and you lose 10–30 minutes re-explaining everything from scratch.
You hit an API quota or subscription-tier cap. You switch providers immediately but lose all built-up context.
The model starts forgetting earlier decisions. You need a fresh session — but it doesn't know what happened.
The service goes down. You need a fallback fast. Re-briefing a different LLM from scratch is painful.
You want to cross-check decisions with a different model — but first you'd have to explain the entire project state.
Pack your session, prime the target, paste. The new LLM picks up exactly where the old one left off.
ctx pack
Reads your last LLM session and distills 50k tokens of transcript into a 2k-token instructional snapshot with confidence ratings.
ctx prime --target chatgpt
Wraps the snapshot in role framing, anti-repetition guards, and verification hooks tailored for the target LLM.
--copy
The briefing is on your clipboard. Paste into any LLM. It knows what to do, what's done, and what not to redo.
Reads raw session JSONL from Claude Code
LLM-powered compression with confidence
Plain markdown & JSON in .ctx/
Instructional framing & guard rails
Target-specific rendering
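The ingest step above is deliberately simple. As a minimal sketch, assuming the transcript is plain JSONL (one JSON object per line; the exact Claude Code event schema is not specified here), a reader might look like:

```python
import json
from pathlib import Path

def read_session_jsonl(path: str) -> list[dict]:
    """Parse a session transcript stored as JSONL (one JSON object
    per line), skipping blank lines. The per-event schema is
    provider-specific and not assumed here."""
    events = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line:
            events.append(json.loads(line))
    return events
```

Because the input is line-delimited, a new provider only has to produce one JSON object per event to plug into the same pipeline.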
Every feature exists to maximize task-resumption fidelity across providers.
The entire point. Move context between Claude, ChatGPT, Cursor, and any future LLM. Not locked to one ecosystem.
Every section is rated high / medium / low. Low-confidence claims are flagged with explicit "verify before relying" warnings.
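Because the ratings are plain text, flagging is trivial to automate. Assuming the `_conf: high|medium|low_` marker syntax shown in the sample snapshot (the function name here is illustrative, not ctx's API), a sketch:

```python
import re

CONF_MARKER = re.compile(r"_conf:\s*(high|medium|low)_")

def lines_to_verify(snapshot_md: str) -> list[str]:
    """Return snapshot lines marked low-confidence, so the briefing
    can attach an explicit "verify before relying" warning to each."""
    return [
        line for line in snapshot_md.splitlines()
        if (m := CONF_MARKER.search(line)) and m.group(1) == "low"
    ]
```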
No cloud. No database. No telemetry. Transcripts contain code and secrets — everything stays on your disk as plain files.
Snapshots are markdown. Users can open and fix mistakes manually. Boring, hand-editable formats are a feature, not a limitation.
The briefing explicitly tells the next LLM: don't restart, don't re-implement finished work, don't retry failed approaches.
Add new sources (ingestors) or targets (adapters) with a single file. The middle three stages never change.
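A sketch of what that plugin seam could look like; these interface and class names are hypothetical, not ctx's actual API:

```python
from typing import Protocol

class Ingestor(Protocol):
    """Source plugin: turns one provider's raw session log into neutral events."""
    def ingest(self, path: str) -> list[dict]: ...

class Adapter(Protocol):
    """Target plugin: wraps a finished snapshot for one specific LLM."""
    def render(self, snapshot: str) -> str: ...

class ChatGPTAdapter:
    """Illustrative target: role framing plus anti-repetition guards
    wrapped around the snapshot, per the prime step."""
    def render(self, snapshot: str) -> str:
        return (
            "You are resuming a task another assistant already started.\n"
            "Do not restart it, re-implement finished work, or retry failed approaches.\n\n"
            + snapshot
        )
```

A new source or target is then one new class in one new file; the distill/format/guard stages in the middle stay untouched.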
The market is full of memory systems for app builders. Nobody owns the developer mid-task handoff problem.
| Tool | Audience | Cross-Provider | Open Source | Confidence-Aware | Lightweight |
|---|---|---|---|---|---|
| mem0 | App developers | N/A (SDK) | ✓ | ✗ | Medium |
| Letta / MemGPT | App developers | N/A (SDK) | ✓ | ✗ | Heavy |
| Zep | App developers | N/A (SDK) | Partial | ✗ | Heavy |
| Pieces | End-user devs | Some | ✗ | ✗ | Heavy |
| Claude MEMORY.md | Claude only | ✗ | N/A | ✗ | ✓ |
| Cursor Rules | Cursor only | ✗ | N/A | ✗ | ✓ |
| ctx ⚡ | End-user devs | ✓ Core goal | ✓ | ✓ | ✓ |
The .ctx snapshot format is designed like .editorconfig — boring enough that other tools could eventually adopt it natively.
ctx_version and prompt_version allow the format to evolve without breaking old snapshots. An example snapshot:

```markdown
---
ctx_version: 0.1
project: multiagent
distiller_model: claude-sonnet-4-6
budget_tokens: 2000
---

# Task
Building a multi-agent orchestration layer with tool-use routing and state persistence.

# Status
Core agent loop ✓ | Tool router ✓ | State: in progress

# Decisions
- Use SQLite for state. Why: no server dependency. _conf: high_
- Event sourcing over CRUD. Why: audit trail. _conf: medium_

# Tried & failed
- Redis for state — overkill for single-user. Don't redo.

# Next step
Implement the state persistence layer in store.py

# Confidence Report
- Overall: high
- High: Task, Decisions, Next step
- Medium: Code map
- Could not determine: backward compat requirements
```
Install ctx in 30 seconds. The next time your session dies, you'll be glad you did.