# AI Assistant

This feature allows you to interact directly with various AI models from different providers within the editor.

## Overview

The LLM Chat UI provides a dedicated panel where you can:

* Start multiple chat conversations.
* Interact with different LLM providers and models (like OpenAI, Anthropic, Google AI, Mistral, DeepSeek, XAI, and local models via Ollama/LMStudio).
* Manage chat history (save, rename, clone, lock).
* Use AI assistance directly related to your code or general queries without leaving the editor.

## Configuration

Configuration is managed through `ecode`'s settings file (a JSON file). The relevant sections are `config`, `keybindings`, and `providers`.

```json
// Example structure in your ecode settings file
{
  "config": {
    // API Keys go here
  },
  "keybindings": {
    // Chat UI keybindings
  },
  "providers": {
    // LLM Provider definitions
  }
}
```

### API Keys (`config` section)

To use cloud-based LLM providers, you need to add your API keys. You can do this in two ways:

1. **Via the `config` object in settings:**

   Add your keys directly into the `config` section of your `ecode` settings file.

   ```json
   {
     "config": {
       "anthropic_api_key": "YOUR_ANTHROPIC_API_KEY",
       "deepseek_api_key": "YOUR_DEEPSEEK_API_KEY",
       "google_ai_api_key": "YOUR_GOOGLE_AI_API_KEY", // For Google AI / Gemini models
       "mistral_api_key": "YOUR_MISTRAL_API_KEY",
       "openai_api_key": "YOUR_OPENAI_API_KEY",
       "xai_api_key": "YOUR_XAI_API_KEY" // For xAI / Grok models
     }
   }
   ```

   Leave the string empty (`""`) for services you don't intend to use.

2. **Via Environment Variables:**

   The application can also read API keys from environment variables. This is often a more secure method, especially in shared environments or when configuration files are committed to version control. If an environment variable is set, it overrides the corresponding key in the `config` object.

   The supported environment variables are:

   * `ANTHROPIC_API_KEY`
   * `DEEPSEEK_API_KEY`
   * `GOOGLE_AI_API_KEY` (or `GEMINI_API_KEY`)
   * `MISTRAL_API_KEY`
   * `OPENAI_API_KEY`
   * `XAI_API_KEY` (or `GROK_API_KEY`)

### Keybindings (`keybindings` section)

The following default keybindings are provided for interacting with the LLM Chat UI. You can customize these in the `keybindings` section of your settings file.

*(Note: `mod` typically refers to `Cmd` on macOS and `Ctrl` on Windows/Linux)*

| Action | Default Keybinding | Description |
| :--- | :--- | :--- |
| Add New Chat | `mod+shift+return` | Adds a new, empty chat tab to the UI. |
| Show Chat History | `mod+h` | Displays a chat history panel. |
| Toggle Message Role | `mod+shift+r` | Changes the role (user/assistant) of the selected message. |
| Clone Current Chat | `mod+shift+c` | Creates a duplicate of the current chat conversation in a new tab. |
| Send Prompt / Submit Message | `mod+return` | Sends the message currently typed in the input box to the selected LLM. |
| Refresh Local Models | `mod+shift+l` | Re-fetches the list of available models from local providers like Ollama or LMStudio. |
| Rename Current Chat | `f2` | Allows you to rename the currently active chat tab. |
| Save Current Chat | `mod+s` | Saves the current chat conversation state. |
| Open AI Settings | `mod+shift+s` | Opens the settings file. |
| Show Chat Menu | `mod+m` | Displays a context menu with common chat actions: New Chat, Save Chat, Rename Chat, Clone Chat, Lock Chat Memory (prevents removal during batch clear operations). |
| Toggle Private Chat | `mod+shift+p` | Toggles whether the chat is persisted to the chat history (incognito mode). |
| Open New AI Assistant Tab | `mod+shift+m` | Opens a new LLM Chat UI tab. |
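
The defaults above can be overridden from the `keybindings` object. The following is a minimal sketch, assuming the object maps action identifiers to key chords; the identifiers shown are illustrative placeholders, so check the defaults that `ecode` writes to your settings file for the exact names:

```json
{
  "keybindings": {
    // Hypothetical action names: verify them against the defaults in your settings file.
    "add-chat": "mod+shift+return",
    "show-chat-history": "alt+h"
  }
}
```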

## LLM Providers (`providers` section)

### Overview

The `providers` object defines the different LLM services available in the chat UI. `ecode` comes pre-configured with several popular providers (Anthropic, DeepSeek, Google, Mistral, OpenAI, XAI) and local providers (Ollama, LMStudio).

You generally don't need to modify the default providers unless you want to disable one or add/adjust model parameters. However, you can add definitions for new or custom LLM providers and models.
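
For example, to hide a bundled provider from the chat UI you only need its `enabled` flag. The following is a minimal sketch, assuming that partial overrides are merged with the built-in definition and that the provider ID matches the key used in the default `providers` object (if not, copy the full provider entry from the defaults and set `enabled` to `false` there):

```json
{
  "providers": {
    "xai": {
      "enabled": false // Hides this provider from the model selection UI.
    }
  }
}
```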

### Adding Custom Providers

To add a new provider, you need to add a new key-value pair to the `providers` object in your settings file. The key should be a unique identifier for your provider (e.g., `"my_custom_ollama"`), and the value should be an object describing the provider and its models.

**Provider Object Structure:**

```json
"your_provider_id": {
  "api_url": "string", // Required: The base URL endpoint for the chat completion API.
  "display_name": "string", // Optional: User-friendly name shown in the UI. Defaults to the provider ID if missing.
  "enabled": boolean, // Optional: Set to `false` to disable this provider. Defaults to `true`.
  "version": number, // Optional: Identifier for the API version if the provider requires it (e.g., 1 for Anthropic).
  "open_api": boolean, // Optional: Set to `true` if the provider uses an OpenAI-compatible API schema (common for local models like Ollama, LMStudio).
  "fetch_models_url": "string", // Optional: URL to dynamically fetch the list of available models (e.g., from Ollama or LMStudio). If provided, the static `models` array below might be ignored or populated dynamically.
  "models": [ // Required (unless `fetch_models_url` is used and sufficient): An array of model objects available from this provider.
    // Model Object structure described below
  ]
}
```

**Model Object Structure (within the `models` array):**

```json
{
  "name": "string", // Required: The internal model identifier used in API requests (e.g., "claude-3-5-sonnet-latest").
  "display_name": "string", // Optional: User-friendly name shown in the model selection dropdown. Defaults to `name` if missing.
  "max_tokens": number, // Optional: The maximum context window size (input tokens + output tokens) supported by the model.
  "max_output_tokens": number, // Optional: The maximum number of tokens the model can generate in a single response.
  "default_temperature": number, // Optional: Default sampling temperature (controls randomness/creativity). Typically between 0.0 and 2.0. Defaults might vary per provider or model. (e.g., 1.0)
  "cheapest": boolean, // Optional: Flag indicating that this is a cheaper model; it is typically used to generate the chat summary when this provider is selected.
  "cache_configuration": { // Optional: Configuration for potential internal caching or speculative execution features. May not apply to all providers or setups.
    "max_cache_anchors": number, // Specific caching parameter.
    "min_total_token": number, // Specific caching parameter.
    "should_speculate": boolean // Specific caching parameter.
  }
  // or "cache_configuration": null if not applicable/used for this model.
}
```
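
As a concrete illustration, a fully specified model entry might look like the sketch below. The model name, token limits, and temperature are illustrative values, not recommendations; use the values published by your provider:

```json
{
  "name": "example-model-latest", // Illustrative identifier, not a real model.
  "display_name": "Example Model",
  "max_tokens": 128000, // Assumed context window for this example.
  "max_output_tokens": 8192,
  "default_temperature": 1.0,
  "cheapest": true, // Prefer this model for cheap tasks such as chat summaries.
  "cache_configuration": null // No caching hints for this model.
}
```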

**Example: Adding a hypothetical local provider**

```json
{
  "providers": {
    // ... other existing providers ...
    "my_local_llm": {
      "api_url": "http://localhost:8080/v1/chat/completions",
      "display_name": "My Local LLM",
      "open_api": true, // Assuming it uses an OpenAI-compatible API
      "models": [
        {
          "name": "local-model-v1",
          "display_name": "Local Model V1",
          "max_tokens": 4096
        },
        {
          "name": "local-model-v2-experimental",
          "display_name": "Local Model V2 (Experimental)",
          "max_tokens": 8192
        }
      ]
    }
  }
}
```
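
If your local server can list its own models, you can let `ecode` discover them through `fetch_models_url` instead of declaring them statically. The following is a minimal sketch, assuming an OpenAI-compatible server; the URLs and the empty `models` array are assumptions for illustration, so adjust them to whatever your server actually exposes:

```json
{
  "providers": {
    "my_dynamic_llm": {
      "api_url": "http://localhost:8080/v1/chat/completions",
      "display_name": "My Dynamic LLM",
      "open_api": true,
      "fetch_models_url": "http://localhost:8080/v1/models", // Assumed model-listing endpoint.
      "models": [] // Populated dynamically from fetch_models_url.
    }
  }
}
```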