When you switch AI services, you lose months of accumulated context.
Every project. Every preference. Every decision. Gone.
OwnYourContext fixes that — entirely on your machine.
Your ChatGPT export is 400–500MB of JSON — approximately 50 million tokens of conversation history. Claude's context window is 200,000 tokens. You can't upload 50 million tokens into a 200K window. The math doesn't work. So you start over. Every time.
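The arithmetic, using the figures above:

```python
export_tokens = 50_000_000   # ~50M tokens of ChatGPT history (estimate above)
context_window = 200_000     # Claude's context window

ratio = export_tokens // context_window
print(f"Need roughly {ratio}x compression to fit")  # roughly 250x
```

A 250-fold reduction is far beyond what trimming or deduplication can deliver, which is why summarization is the only viable route.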
OwnYourContext compresses your conversation history into a context window that fits. A local AI model running on your machine summarizes and categorizes every conversation. 489 conversations become 10 clean markdown files. Upload to Claude Projects. Context restored.
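The summarization step talks only to the local Ollama server. A minimal sketch of what one such call looks like, using Ollama's `/api/generate` endpoint (the prompt wording and function names here are illustrative, not the tool's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # local Ollama default port

def build_summary_request(conversation_text: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": "llama3.2",
        "prompt": "Summarize this conversation in 3 sentences:\n\n" + conversation_text,
        "stream": False,  # return one JSON object instead of a token stream
    }
    return json.dumps(payload).encode("utf-8")

def summarize(conversation_text: str) -> str:
    """POST to the local Ollama server; the request never leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_summary_request(conversation_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Run once per conversation, a few sentences of summary replace thousands of tokens of raw transcript.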
Drop your ChatGPT export zip. The full, unmodified file. 500MB is fine.
Local Ollama + Llama 3.2 reads every conversation on your machine. No cloud API calls. Nothing leaves.
Auto-classified into topic buckets. Merge, rename, or reorganize before export.
Clean markdown files, one per topic. Ready for Claude Projects or any LLM service.
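The last two steps above can be sketched in a few lines: group per-conversation summaries by topic, then emit one markdown file per bucket (the function and filename scheme are illustrative, not the tool's actual code):

```python
from collections import defaultdict
from pathlib import Path
import tempfile

def export_markdown(summaries: list[tuple[str, str, str]], out_dir: Path) -> list[Path]:
    """summaries: (topic, title, summary) triples. Writes one .md file per topic."""
    by_topic: dict[str, list[str]] = defaultdict(list)
    for topic, title, summary in summaries:
        by_topic[topic].append(f"## {title}\n\n{summary}\n")

    written = []
    for topic, sections in by_topic.items():
        # "Work & Career" -> "work-career.md"
        name = topic.lower().replace(" & ", "-").replace(" ", "-") + ".md"
        path = out_dir / name
        path.write_text(f"# {topic}\n\n" + "\n".join(sections), encoding="utf-8")
        written.append(path)
    return written

# usage
out = Path(tempfile.mkdtemp())
files = export_markdown(
    [("Work & Career", "Salary negotiation", "Discussed raise strategy."),
     ("Technical & Coding", "Python packaging", "Compared packaging tools.")],
    out,
)
print(sorted(p.name for p in files))
```

Each resulting file is plain markdown, so it drops straight into Claude Projects or any other service that accepts text uploads.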
$ python app.py
Extracting conversations...
Parsed 489 conversations.
🧹 Cleaned up temporary files.
✅ Analysis complete — 481/489 conversations summarized

Detected Topics:
  Work & Career: 169 conversations
  Technical & Coding: 138 conversations
  Research & Learning: 32 conversations
  ...

✅ Export complete — 10 files written
📁 Output: ~/Documents/LLMMigrator/output
Your conversation history contains things you wouldn't share with a third party. Health decisions. Financial discussions. Career strategy. Personal relationships. Every summarization call goes to localhost:11434. Nothing leaves your machine until you decide otherwise.
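That guarantee is simple to enforce mechanically: refuse to talk to any host that isn't local. A sketch of such a guard, assuming Ollama's default port (the constant and check are illustrative):

```python
from urllib.parse import urlparse

OLLAMA_URL = "http://localhost:11434/api/generate"  # the only endpoint contacted

host = urlparse(OLLAMA_URL).hostname
assert host in ("localhost", "127.0.0.1"), "refusing to send data off-machine"
print("all summarization traffic stays on", host)
```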
Coming next: conversation selection, richer summaries, and export improvements. Drop your email and we'll ping you once.
Power user? Star the repo on GitHub.
// no accounts. no email lists. no tracking. ever.