Adds an Ollama provider for running local LLMs (e.g.,
`ollama:llama3.2`, `ollama:qwen2.5-coder`) with full tool calling and
streaming support.
## Setup
1. Install Ollama from [ollama.com](https://ollama.com)
2. Pull a model: `ollama pull gpt-oss:20b`
3. That's it! It works out of the box with no configuration needed (a quick sanity check against the local server is sketched below).
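
If you want to confirm the local server is reachable before pointing `cmux` at it, Ollama's HTTP API exposes a `GET /api/tags` endpoint that lists pulled models. A minimal sketch (TypeScript, assuming the default endpoint and Node 18+ for the built-in `fetch`; the `listLocalModels` helper is hypothetical):

```ts
// List the models the local Ollama server has pulled. GET /api/tags is part of
// Ollama's HTTP API; this assumes the default endpoint on localhost:11434.
async function listLocalModels(baseUrl = "http://localhost:11434/api") {
  const res = await fetch(`${baseUrl}/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable: ${res.status}`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  return models.map((m) => m.name); // e.g. ["gpt-oss:20b", "llama3.2:latest"]
}
```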
Optional: Configure a custom URL in `~/.cmux/providers.jsonc`:
```jsonc
{
  "ollama": {
    "baseUrl": "http://your-server:11434/api"
  }
}
```
## Key Changes
- Model string parsing handles the Ollama format (`ollama:model-id:tag`)
- Integration via the `ollama-ai-provider-v2` community provider for the Vercel AI SDK (see the sketch after this list)
- No configuration required - defaults to `http://localhost:11434/api`
- 4 integration tests with CI support (gated by `TEST_OLLAMA=1`)
- Tokenizer support for common Ollama models
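
For reviewers, a rough sketch of how the model-string parsing and provider wiring fit together. This is not the actual cmux implementation: the `resolveOllamaModel` helper is hypothetical, and it assumes `ollama-ai-provider-v2` exposes a `createOllama({ baseURL })` factory like the original `ollama-ai-provider`:

```ts
// Hypothetical sketch of the model-string handling described above; assumes
// ollama-ai-provider-v2 exposes a createOllama({ baseURL }) factory.
import { createOllama } from "ollama-ai-provider-v2";

const DEFAULT_BASE_URL = "http://localhost:11434/api";

// "ollama:llama3.2" or "ollama:qwen2.5-coder:7b" -> AI SDK language model.
// Everything after the first ":" is passed through as the Ollama model id,
// which may itself carry a tag (e.g. "llama3.2:3b").
function resolveOllamaModel(modelString: string, baseURL = DEFAULT_BASE_URL) {
  const [provider, ...rest] = modelString.split(":");
  if (provider !== "ollama" || rest.length === 0) {
    throw new Error(`Not an Ollama model string: ${modelString}`);
  }
  const ollama = createOllama({ baseURL });
  return ollama(rest.join(":"));
}
```

The returned language model then plugs into the AI SDK's `generateText`/`streamText` calls, which is where the tool calling and streaming behavior described above comes from.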
## Tests
✅ 4 new integration tests (102s)
✅ 964 unit tests pass
✅ All CI checks pass
---
_Generated with `cmux`_
---------
Co-authored-by: Ammar <ammar+ai@ammar.io>