One UI for communicating with all your LLMs, wherever they're hosted. Our UI, your LLM.
- No more complex Docker setups just to talk to Ollama running locally or remotely.
- Bring your own API tokens and pay only for what you use.
- Completely free to use.
- Everything's stored locally (see the sketch below).
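As a rough illustration of what "stored locally" can mean in practice, a browser client can keep provider API keys entirely on-device in `localStorage`. This is a hypothetical sketch, not SlateChat's actual storage schema; the storage key name and helper functions are made up.

```ts
// Illustration only: persisting provider API keys on-device with localStorage.
// "slate-api-keys" is a hypothetical storage key, not the app's real schema.
const STORAGE_KEY = "slate-api-keys";

type Provider = "openai" | "anthropic" | "gemini" | "deepseek";

function saveApiKey(provider: Provider, key: string): void {
  const keys: Record<string, string> = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
  keys[provider] = key;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(keys));
}

function loadApiKey(provider: Provider): string | undefined {
  const keys: Record<string, string> = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "{}");
  return keys[provider];
}
```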
```bash
bun install
bun dev
```

- Chat
- Support for local models
  - With Ollama - #15
  - With LM Studio
  - Custom config
- Support for third-party models using API keys (see the sketch after this list)
  - OpenAI
  - Anthropic
  - Gemini
  - Deepseek
- Support for self-hosted models
  - With Ollama
  - With LM Studio
  - Custom config
- Documentation
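To make "bring your own API tokens" concrete, here is a minimal sketch of what a third-party chat request can look like when the key comes straight from the user. This is illustrative only, not SlateChat's actual implementation; the function and model name are placeholders, and OpenAI's chat completions endpoint is used as the example provider.

```ts
// Illustrative sketch, not SlateChat's actual code: a browser-side
// "bring your own key" request to OpenAI's chat completions endpoint.
// The model name is only an example; other providers differ in URL and headers.
async function askOpenAI(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Provider returned ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```

The other supported providers expose similar HTTP APIs, each with its own base URL and authentication header.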
This will be replaced by dedicated documentation later.
- Setup
- Install `ollama` following the documentation on https://ollama.com/download
- Choose any model or models from https://ollama.com/search that you want to run and pull them using `ollama pull <MODEL_NAME>`
- Run one of the following commands to start `ollama` locally
  - For unix systems: `OLLAMA_ORIGINS=https://slatechat.vercel.app ollama serve`
  - For powershell: `$env:OLLAMA_ORIGINS="https://slatechat.vercel.app"; ollama serve`
- Go to https://slatechat.vercel.app/chat and start a new local chat; you should see your models listed in the dropdown (a quick connectivity check is sketched below)
If you're using the Brave browser or any other ad blocker, don't forget to disable ad blocking for this site, or the UI won't be able to reach your local Ollama server.
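If the dropdown stays empty, you can check the connection yourself. The sketch below asks the local Ollama server (default address http://localhost:11434) for its pulled models via its `/api/tags` endpoint; it assumes Ollama was started with the `OLLAMA_ORIGINS` value above so the browser is allowed to call it. Stripped of the type annotations, it can be run straight from the browser's developer console.

```ts
// Sketch: ask the local Ollama server which models it has pulled.
// Assumes the default Ollama address (http://localhost:11434) and that
// OLLAMA_ORIGINS includes the origin this code runs on.
type OllamaModel = { name: string };

async function listLocalModels(): Promise<string[]> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama responded with ${res.status} - is "ollama serve" running?`);
  }
  const data: { models: OllamaModel[] } = await res.json();
  return data.models.map((m) => m.name);
}

listLocalModels()
  .then((models) => console.log("Models Ollama can serve:", models))
  .catch((err) => console.error(err));
```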
