Open Source • v0.1.0 • Python CLI

Git for LLM Context.

Your LLM session died. ctx pack, ctx prime, paste — keep going in any other tool in 30 seconds.

~/my-project — ctx
# Session died? Credits ran out? Need a second opinion?
$ ctx pack
✓ Ingested session c12eefd0 (58 turns, 8.5k tokens)
✓ Distilled → .ctx/current.md (2k budget, conf: high)
$ ctx prime --target chatgpt --copy
✓ Briefing copied to clipboard (1,847 tokens)
# Paste into ChatGPT → it picks up where Claude left off
$

The 10-minute tax
you pay every time

You're deep in a coding session with an LLM. Then one of these happens — and you lose 10–30 minutes re-explaining everything from scratch.

💳

Credits Exhausted

You hit an API quota or subscription-tier cap. You can switch providers immediately — but you lose all built-up context.

🧠

Context Window Full

The model starts forgetting earlier decisions. You need a fresh session — but it doesn't know what happened.

⚡

Provider Outage

The service goes down. You need a fallback fast. Re-briefing a different LLM from scratch is painful.

🔀

Second Opinion

You want to cross-check decisions with a different model — but first you'd have to explain the entire project state.

Three commands.
30 seconds.

Pack your session, prime the target, paste. The new LLM picks up exactly where the old one left off.

1

Pack

ctx pack

Reads your last LLM session, distills 50k tokens into a 2k-token instructional snapshot with confidence ratings.

2

Prime

ctx prime --target chatgpt

Wraps the snapshot in role framing, anti-repetition guards, and verification hooks tailored for the target LLM.

3

Paste

--copy

The briefing is on your clipboard. Paste into any LLM. It knows what to do, what's done, and what not to redo.

📥

Ingestor

Reads raw session JSONL from Claude Code

🧬

Distiller

LLM-powered compression with confidence

💾

Storage

Plain markdown & JSON in .ctx/

🎯

Composer

Instructional framing & guard rails

🔌

Adapter

Target-specific rendering
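The five stages above can be sketched as plain functions chained in order. This is an illustrative sketch only — the function names and data shapes are assumptions for the example, not ctx's actual internals:

```python
# Illustrative sketch of the five-stage pipeline. Names and data
# shapes are assumptions, not ctx's real implementation.
import json

def ingest(jsonl_lines):
    """Ingestor: parse raw session JSONL into a list of turn dicts."""
    return [json.loads(line) for line in jsonl_lines if line.strip()]

def distill(turns, budget_chars=2000):
    """Distiller: compress turns into a snapshot. Naive truncation
    stands in here for the LLM-powered compression step."""
    text = "\n".join(t.get("text", "") for t in turns)
    return {"body": text[:budget_chars], "confidence": "high"}

def store(snapshot):
    """Storage: render the snapshot as plain markdown with front matter."""
    return f"---\nctx_version: 0.1\n---\n{snapshot['body']}"

def compose(snapshot_md):
    """Composer: wrap the snapshot in role framing and guards."""
    guard = "You are resuming an in-progress task. Do not redo finished work.\n\n"
    return guard + snapshot_md

def adapt(briefing, target="chatgpt"):
    """Adapter: target-specific final rendering."""
    return f"# Briefing for {target}\n{briefing}"

lines = ['{"text": "decided on SQLite for state"}', '{"text": "tool router done"}']
briefing = adapt(compose(store(distill(ingest(lines)))))
```

The point of the shape: each stage only sees the previous stage's output, so sources and targets can change at the edges without touching the middle.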

Built different.
By design.

Every feature exists to maximize task-resumption fidelity across providers.

🌐

Cross-Provider Portability

The entire point. Move context between Claude, ChatGPT, Cursor, and any future LLM. Not locked to one ecosystem.

🎯

Confidence-Aware Briefings

Every section is rated high / medium / low. Low-confidence claims are flagged with explicit "verify before relying" warnings.

🔒

Local-First & Private

No cloud. No database. No telemetry. Transcripts contain code and secrets — everything stays on your disk as plain files.

📝

Hand-Editable Snapshots

Snapshots are markdown. Users can open and fix mistakes manually. Boring, hand-editable formats are a feature, not a limitation.

🛡️

Anti-Repetition Guards

The briefing explicitly tells the next LLM: don't restart, don't re-implement finished work, don't retry failed approaches.

🔌

Extensible Pipeline

Add new sources (ingestors) or targets (adapters) with a single file. The middle three stages never change.
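If the adapter layer is a small interface, a new target could plausibly be one file. A hypothetical sketch — the `Adapter` base class and `render()` hook are assumptions about what such a plugin API might look like, not ctx's real one:

```python
# Hypothetical single-file adapter. The Adapter base class and
# render() hook are assumed for illustration.

class Adapter:
    """Assumed base: one render() hook per target."""
    name = "base"

    def render(self, briefing: str) -> str:
        raise NotImplementedError

class GeminiAdapter(Adapter):
    """Example new target: adds a system-style preamble."""
    name = "gemini"

    def render(self, briefing: str) -> str:
        return "System: resume the task described below.\n\n" + briefing

# Registry maps --target names to adapter instances.
ADAPTERS = {cls.name: cls() for cls in (GeminiAdapter,)}
out = ADAPTERS["gemini"].render("# Task\nFinish store.py")
```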

Not a memory SDK.
A handoff tool.

The market is full of memory systems for app builders. Nobody owns the developer mid-task handoff problem.

| Tool | Audience | Cross-Provider | Open Source | Confidence-Aware | Lightweight |
|------|----------|----------------|-------------|------------------|-------------|
| mem0 | App developers | N/A (SDK) | | | Medium |
| Letta / MemGPT | App developers | N/A (SDK) | | | Heavy |
| Zep | App developers | N/A (SDK) | Partial | | Heavy |
| Pieces | End-user devs | Some | | | Heavy |
| Claude MEMORY.md | | Claude only | N/A | | |
| Cursor Rules | | Cursor only | N/A | | |
| ctx ⚡ | End-user devs | ✓ | | Core goal | |

A format designed to
become a standard.

Versioned. Markdown. Boring on purpose.

The .ctx snapshot format is designed like .editorconfig — boring enough that other tools could eventually adopt it natively.

  • Versioned — ctx_version and prompt_version allow the format to evolve without breaking old snapshots
  • Self-contained — everything needed to resume a task lives in one file
  • Confidence-aware — the Confidence Report section is mandatory and surfaced in every command
  • Hand-editable — it's markdown — open it, fix it, save it
  • Diff-friendly — plain text plays nicely with git
📄 .ctx/current.md
---
ctx_version: 0.1
project: multiagent
distiller_model: claude-sonnet-4-6
budget_tokens: 2000
---

# Task
Building a multi-agent orchestration layer
with tool-use routing and state persistence.

# Status
Core agent loop ✓ | Tool router ✓ | State: in progress

# Decisions
- Use SQLite for state. Why: no server dependency.
  _conf: high_
- Event sourcing over CRUD. Why: audit trail.
  _conf: medium_

# Tried & failed
- Redis for state — overkill for single-user. Don't redo.

# Next step
Implement the state persistence layer in store.py

# Confidence Report
- Overall: high
- High: Task, Decisions, Next step
- Medium: Code map
- Could not determine: backward compat requirements
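"Boring on purpose" is a testable claim: a snapshot like the one above can be read with the standard library alone, no YAML dependency. A minimal sketch (field names taken from the example; the parsing helper itself is illustrative, not part of ctx):

```python
# Minimal front-matter parser for a .ctx snapshot. Illustrative only;
# field names come from the example snapshot above.

def parse_front_matter(snapshot: str) -> dict:
    """Split the ----delimited header into key/value pairs."""
    _, header, _body = snapshot.split("---\n", 2)
    meta = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

snapshot = (
    "---\n"
    "ctx_version: 0.1\n"
    "project: multiagent\n"
    "budget_tokens: 2000\n"
    "---\n"
    "# Task\n"
    "Building a multi-agent orchestration layer.\n"
)
meta = parse_front_matter(snapshot)
```

Because the body after the header is ordinary markdown, everything downstream of this — editing, diffing, adoption by other tools — needs no special tooling either.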

Stop re-explaining.
Start continuing.

Install ctx in 30 seconds. The next time your session dies, you'll be glad you did.

$ pip install ctx-llm