Long-term Shared Context Between LLMs
Dex Bridge is an innovative system that captures, stores, and enables semantic search across conversations from multiple AI assistants (ChatGPT, Claude) using a local vector database. It creates a persistent memory layer that allows different LLMs to access your conversation history through the Model Context Protocol (MCP).
Every time you start a new conversation with an AI assistant, it has zero context of your previous conversations. Dex Bridge solves this by:
- 📝 Capturing real-time conversations from ChatGPT and Claude
- 💾 Storing them in a vector database with semantic search capabilities
- 🔍 Enabling any LLM to search and access your entire conversation history
- 🔗 Bridging the context gap between different AI assistants
```
┌─────────────────────────────────────────────────────────────┐
│                    Browser / LLM Clients                    │
│              (ChatGPT, Claude, GitHub Copilot)              │
└──────────────────────┬──────────────────────────────────────┘
                       │ HTTPS Traffic
                       ↓
┌─────────────────────────────────────────────────────────────┐
│                      MITM Proxy Layer                       │
│                 (mitmproxy @ localhost:8080)                │
│                                                             │
│   ┌──────────────────┐        ┌─────────────────────┐       │
│   │  capture_req.py  │        │  capture_claude.py  │       │
│   │    (ChatGPT)     │        │     (Claude.ai)     │       │
│   └──────────────────┘        └─────────────────────┘       │
└──────────────────────┬──────────────────────────────────────┘
                       │ Intercept & Parse SSE/NDJSON
                       ↓
┌─────────────────────────────────────────────────────────────┐
│                     Processing Pipeline                     │
│                                                             │
│   parsed_matches/           merge_conversations.py          │
│   ├─ chatgpt.com/          ─────────►  merged_conversations/│
│   │  └─ conv_id__ts.json               ├─ chatgpt.com/      │
│   └─ claude.ai/                        │  └─ merged.json    │
│      └─ conv_id__ts.json               └─ claude.ai/        │
└──────────────────────┬──────────────────────────────────────┘
                       │ Merge by conversation_id
                       ↓
┌─────────────────────────────────────────────────────────────┐
│                  Vector Database (Qdrant)                   │
│                   store_chat_message.py                     │
│                                                             │
│   Collection: chat_messages                                 │
│   ├─ Embeddings (OpenAI text-embedding-3-small)             │
│   ├─ Metadata (role, timestamp, model, conversation_id)     │
│   └─ Content Hash (deduplication)                           │
└──────────────────────┬──────────────────────────────────────┘
                       │ Semantic Search
                       ↓
┌─────────────────────────────────────────────────────────────┐
│              MCP Server (access_llm_memory.py)              │
│                                                             │
│   Tool: search_memory(query, top_k)                         │
│   ├─ Generate query embedding                               │
│   ├─ Search Qdrant vector DB                                │
│   └─ Return relevant conversation snippets                  │
└──────────────────────┬──────────────────────────────────────┘
                       │ MCP Protocol
                       ↓
┌─────────────────────────────────────────────────────────────┐
│                    VS Code / AI Clients                     │
│           (GitHub Copilot, Claude Desktop, etc.)            │
│                                                             │
│      Now can access your entire conversation history!       │
└─────────────────────────────────────────────────────────────┘
```
- ✅ ChatGPT (chatgpt.com)
- ✅ Claude (claude.ai)
- 🔄 Extensible architecture for other LLMs
- Handles SSE (Server-Sent Events) streams
- Parses NDJSON (Newline Delimited JSON)
- Extracts text from complex patch events
- Preserves conversation structure and metadata
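The capture scripts' exact parsing logic isn't shown here, but the general shape of SSE and NDJSON handling can be sketched in a few lines of Python (the function names and the `delta` field are illustrative, not the project's actual schema):

```python
import json

def parse_sse_events(raw: str) -> list[dict]:
    """Parse the `data:` payloads of an SSE stream into JSON objects.

    Skips keep-alive blank lines and stops at the `[DONE]` sentinel.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        events.append(json.loads(payload))
    return events

def parse_ndjson(raw: str) -> list[dict]:
    """Parse newline-delimited JSON, ignoring blank lines."""
    return [json.loads(line) for line in raw.splitlines() if line.strip()]

# A toy SSE stream reassembled into message text:
stream = 'data: {"delta": "Hel"}\n\ndata: {"delta": "lo"}\ndata: [DONE]\n'
events = parse_sse_events(stream)
text = "".join(e["delta"] for e in events)
```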
- SHA-256 content hashing
- Prevents duplicate embeddings
- Efficient storage management
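As a rough illustration of the deduplication idea (the real script's exact hashing scheme may differ), a SHA-256 fingerprint per message turns duplicate detection into a set lookup:

```python
import hashlib

def content_hash(role: str, content: str) -> str:
    """Deterministic SHA-256 fingerprint for a message."""
    return hashlib.sha256(f"{role}:{content}".encode("utf-8")).hexdigest()

class DedupStore:
    """In-memory stand-in for the vector store's duplicate check."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def add(self, role: str, content: str) -> bool:
        """Return True if the message was new and stored, False if duplicate."""
        h = content_hash(role, content)
        if h in self._seen:
            return False
        self._seen.add(h)
        return True

store = DedupStore()
first = store.add("user", "hello")   # new message, stored
second = store.add("user", "hello") # exact duplicate, skipped
```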
- Vector embeddings via OpenAI's `text-embedding-3-small`
- Cosine similarity search in Qdrant
- Context-aware retrieval
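Qdrant performs this ranking at scale, but the cosine-similarity semantics behind the search can be sketched in pure Python (toy 2-dimensional vectors stand in for real 1536-dimensional embeddings):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, corpus, top_k=2):
    """Rank (doc_id, vector) pairs by similarity to the query vector."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in corpus]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

corpus = [
    ("async",  [1.0, 0.1]),
    ("react",  [0.0, 1.0]),
    ("python", [0.9, 0.2]),
]
results = search([1.0, 0.0], corpus, top_k=2)  # nearest two documents
```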
- Standard Model Context Protocol server
- Easy integration with any MCP-compatible client
- Accessible from VS Code, Claude Desktop, and more
- Python 3.10+
- macOS (for proxy management scripts)
- OpenAI API Key
- Qdrant (local or cloud instance)
- Clone the repository:

```bash
git clone https://github.com/Dextron04/dex_bridge.git
cd dex_bridge
```

- Create and activate a virtual environment:

```bash
python3 -m venv dexenv
source dexenv/bin/activate  # On macOS/Linux
```

- Install dependencies:

```bash
pip install -r dex_bridge/requirements.txt
```

- Configure environment variables:

```bash
cd dex_bridge
cp .env.example .env
# Edit .env and add your OPENAI_API_KEY
```

- Install and start Qdrant:

```bash
# Using Docker
docker pull qdrant/qdrant
docker run -p 6333:6333 qdrant/qdrant

# Or install locally from https://qdrant.tech/documentation/quick-start/
```

To start capturing, run the control script:

```bash
cd dex_bridge
sudo bash dex_bridge.sh
```

This interactive menu lets you:
- ✅ Enable/disable system proxy
- 📡 Start mitmproxy capture
- 🔍 View current status
- 🔐 Manage SSL certificates
Browse to ChatGPT or Claude and have conversations as usual. Dex Bridge captures everything automatically in the background.
The system automatically:
- Captures streams to `parsed_matches/{provider}/`
- Merges by conversation ID into `merged_conversations/{provider}/`
- Generates embeddings and stores them in Qdrant
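A minimal sketch of the merge step, assuming each captured message carries `conversation_id`, `timestamp`, `role`, and `content` fields (the actual schema used by `merge_conversations.py` may differ):

```python
from collections import defaultdict

def merge_by_conversation(messages: list[dict]) -> dict[str, list[dict]]:
    """Group captured messages by conversation_id, ordered by timestamp.

    In the real pipeline, `messages` would be read from the JSON files
    under parsed_matches/{provider}/.
    """
    merged: dict[str, list[dict]] = defaultdict(list)
    for msg in messages:
        merged[msg["conversation_id"]].append(msg)
    for conv in merged.values():
        conv.sort(key=lambda m: m["timestamp"])
    return dict(merged)

captured = [
    {"conversation_id": "c1", "timestamp": 2, "role": "assistant", "content": "Hi!"},
    {"conversation_id": "c1", "timestamp": 1, "role": "user", "content": "Hello"},
    {"conversation_id": "c2", "timestamp": 1, "role": "user", "content": "Other chat"},
]
merged = merge_by_conversation(captured)
```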
You can also trigger each step manually:

```bash
# Merge conversations
python merge_conversations.py

# Store in vector DB
python store_chat_message.py
```

The MCP server is configured in `.vscode/mcp.json`:
```json
{
  "servers": {
    "memory_mcp": {
      "type": "stdio",
      "command": "/path/to/dexenv/bin/python3",
      "args": ["/path/to/dex_bridge/memory_mcp/access_llm_memory.py"]
    }
  }
}
```

Now ask any MCP-compatible AI:
"Search my memory for conversations about Python async programming"
- Main control script
- Manages macOS network proxy settings
- Handles SSL certificate installation
- Interactive TUI for system control
- `capture_req.py`: ChatGPT conversation capture
- `capture_claude.py`: Claude conversation capture
- Real-time SSE/NDJSON parsing
- Automatic file organization
- Groups conversations by ID
- Extracts user/assistant exchanges
- Creates structured JSON output
- Supports multiple providers
- Generates embeddings via OpenAI API
- Stores vectors in Qdrant
- Implements content-based deduplication
- Preserves rich metadata
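One common way to make this kind of ingestion idempotent (a sketch, not necessarily how `store_chat_message.py` implements it) is to derive the vector-store point ID deterministically from the content hash, so re-ingesting the same message overwrites rather than duplicates:

```python
import hashlib
import uuid

def make_point(conversation_id: str, role: str, content: str,
               model: str, timestamp: int) -> dict:
    """Build a Qdrant-style point payload with a deterministic ID.

    Field names here are illustrative; the real collection schema
    may differ. The ID is a UUIDv5 of the content hash, so identical
    messages always map to the same point.
    """
    digest = hashlib.sha256(
        f"{conversation_id}:{role}:{content}".encode("utf-8")
    ).hexdigest()
    return {
        "id": str(uuid.uuid5(uuid.NAMESPACE_OID, digest)),
        "payload": {
            "role": role,
            "timestamp": timestamp,
            "model": model,
            "conversation_id": conversation_id,
            "content": content,
            "content_hash": digest,
        },
    }

p1 = make_point("c1", "user", "hello", "gpt-4o", 1700000000)
p2 = make_point("c1", "user", "hello", "gpt-4o", 1700000000)
# p1 and p2 share the same ID, so an upsert replaces rather than duplicates
```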
- MCP server implementation
- `search_memory(query, top_k)` tool
- Returns semantically relevant results
- Includes conversation context
```
Raw HTTPS Stream → MITM Capture → Parse Events → Extract Text
    ↓
Parsed JSON
    ↓
Merge by Conversation ID
    ↓
Structured Conversations
    ↓
Generate Embeddings
    ↓
Store in Vector Database
    ↓
MCP Semantic Search API
    ↓
Any LLM Client Access
```
- Local First: All data stored locally by default
- SSL/TLS: mitmproxy CA certificate for HTTPS inspection
- No Cloud: Conversations stay on your machine, except the message text sent to the OpenAI API to generate embeddings
- Content Hash: Prevents accidental duplicate storage
- Proxy Control: Easy on/off toggle for privacy
Once set up, you can ask your AI assistant:
"What did I discuss about system architecture last week?"
"Show me all conversations where I talked about Python optimization"
"Find the conversation where I got help with React hooks"
"What database recommendations did I receive recently?"
- Support for more LLM providers (Gemini, Perplexity, etc.)
- Web UI for conversation browsing
- Export to Markdown/PDF
- Custom embedding models
- Conversation analytics and insights
- Multi-user support
- Cloud sync options
- Advanced filtering and tagging
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is open source and available under the MIT License.
- mitmproxy - Powerful HTTP/HTTPS proxy
- Qdrant - High-performance vector database
- OpenAI - Embedding models
- Model Context Protocol - Standard for AI context sharing
Tushin Kulshreshtha - @Dextron04
Project Link: https://github.com/Dextron04/dex_bridge
Built with ❤️ to bridge the context gap between AI conversations