
Commit e03594b

Merge branch 'develop'

2 parents: d768e3f + ddf48b4

7 files changed: +381 / -90 lines

.github/workflows/ecode-release.yml

Lines changed: 6 additions & 7 deletions
```diff
@@ -9,7 +9,7 @@ on:
     inputs:
       version:
         description: Release Version
-        default: ecode-0.6.3
+        default: ecode-0.7.1
         required: true
 
 permissions: write-all
@@ -87,9 +87,9 @@ jobs:
           sudo apt-get update
       - name: Install dependencies
         run: |
-          sudo apt-get install -y gcc-11 g++-11
-          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 10
-          sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 10
+          sudo apt-get install -y gcc-13 g++-13 libdw-dev
+          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-13 10
+          sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-13 10
           sudo update-alternatives --install /usr/bin/cc cc /usr/bin/gcc 30
           sudo update-alternatives --set cc /usr/bin/gcc
           sudo update-alternatives --install /usr/bin/c++ c++ /usr/bin/g++ 30
@@ -160,7 +160,6 @@ jobs:
       - name: Install dependencies
         run: |
           sudo apt-get install -y premake4 libfuse2 fuse gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
-          # sudo apt-get install -y libsdl2-dev:arm64 libsdl2-2.0-0:arm64
           bash projects/linux/scripts/install_sdl2.sh --aarch64
       - name: Build ecode
         run: |
@@ -357,8 +356,8 @@ jobs:
       - name: Install Dependencies
         run: |
           brew install bash create-dmg premake p7zip
-          curl -OL https://github.com/libsdl-org/SDL/releases/download/release-2.30.9/SDL2-2.30.9.dmg
-          hdiutil attach SDL2-2.30.9.dmg
+          curl -OL https://github.com/libsdl-org/SDL/releases/download/release-2.32.2/SDL2-2.32.2.dmg
+          hdiutil attach SDL2-2.32.2.dmg
           sudo cp -r /Volumes/SDL2/SDL2.framework /Library/Frameworks/
           hdiutil detach /Volumes/SDL2
       - name: Build
```

README.md

Lines changed: 28 additions & 3 deletions
Large diffs are not rendered by default.

docs/aiassistant.md

Lines changed: 161 additions & 0 deletions
# AI Assistant

This feature allows you to interact directly with various AI models from different providers within the editor.

## Overview

The LLM Chat UI provides a dedicated panel where you can:

* Start multiple chat conversations.
* Interact with different LLM providers and models (like OpenAI, Anthropic, Google AI, Mistral, DeepSeek, XAI, and local models via Ollama/LMStudio).
* Manage chat history (save, rename, clone, lock).
* Use AI assistance directly related to your code or general queries without leaving the editor.

## Configuration

Configuration is managed through `ecode`'s settings file (a JSON file). The relevant sections are `config`, `keybindings`, and `providers`.

```json
// Example structure in your ecode settings file
{
  "config": {
    // API Keys go here
  },
  "keybindings": {
    // Chat UI keybindings
  },
  "providers": {
    // LLM Provider definitions
  }
}
```

### API Keys (`config` section)

To use cloud-based LLM providers, you need to add your API keys. You can do this in two ways:

1. **Via the `config` object in settings:**

   Add your keys directly into the `config` section of your `ecode` settings file.

   ```json
   {
     "config": {
       "anthropic_api_key": "YOUR_ANTHROPIC_API_KEY",
       "deepseek_api_key": "YOUR_DEEPSEEK_API_KEY",
       "google_ai_api_key": "YOUR_GOOGLE_AI_API_KEY", // For Google AI / Gemini models
       "mistral_api_key": "YOUR_MISTRAL_API_KEY",
       "openai_api_key": "YOUR_OPENAI_API_KEY",
       "xai_api_key": "YOUR_XAI_API_KEY" // For xAI / Grok models
     }
   }
   ```

   Leave the string empty (`""`) for services you don't intend to use.
2. **Via Environment Variables:**

   The application can also read API keys from environment variables. This is often a more secure method, especially in shared environments or when committing configuration files. If an environment variable is set, it overrides the corresponding key in the `config` object.

   The supported environment variables are listed below (see the shell sketch after the list):

   * `ANTHROPIC_API_KEY`
   * `DEEPSEEK_API_KEY`
   * `GOOGLE_AI_API_KEY` (or `GEMINI_API_KEY`)
   * `MISTRAL_API_KEY`
   * `OPENAI_API_KEY`
   * `XAI_API_KEY` (or `GROK_API_KEY`)
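
   For example, on Linux or macOS you could export the keys in the shell that launches the editor. A minimal sketch with placeholder values, assuming the `ecode` binary is on your `PATH`:

   ```sh
   # Placeholder values: substitute your real keys.
   export OPENAI_API_KEY="sk-..."
   export ANTHROPIC_API_KEY="sk-ant-..."

   # Launch the editor from this shell so it inherits the variables.
   ecode
   ```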

### Keybindings (`keybindings` section)

The following default keybindings are provided for interacting with the LLM Chat UI. You can customize these in the `keybindings` section of your settings file (see the sketch after the table).

*(Note: `mod` typically refers to `Cmd` on macOS and `Ctrl` on Windows/Linux.)*

| Action | Default Keybinding | Description |
| :--- | :--- | :--- |
| Add New Chat | `mod+shift+return` | Adds a new, empty chat tab to the UI. |
| Show Chat History | `mod+h` | Displays a chat history panel. |
| Toggle Message Role | `mod+shift+r` | Changes the role (user/assistant) of a selected message. |
| Clone Current Chat | `mod+shift+c` | Creates a duplicate of the current chat conversation in a new tab. |
| Send Prompt / Submit Message | `mod+return` | Sends the message currently typed in the input box to the selected LLM. |
| Refresh Local Models | `mod+shift+l` | Re-fetches the list of available models from local providers like Ollama or LMStudio. |
| Rename Current Chat | `f2` | Allows you to rename the currently active chat tab. |
| Save Current Chat | `mod+s` | Saves the current chat conversation state. |
| Open AI Settings | `mod+shift+s` | Opens the settings file. |
| Show Chat Menu | `mod+m` | Displays a context menu with common chat actions: New Chat, Save Chat, Rename Chat, Clone Chat, Lock Chat Memory (prevents removal during batch clear operations). |
| Toggle Private Chat | `mod+shift+p` | Toggles whether the chat is persisted in the chat history (incognito mode). |
| Open New AI Assistant Tab | `mod+shift+m` | Opens a new LLM Chat UI tab. |
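
For example, to rebind an action you could add an entry like the following to the `keybindings` section. A hypothetical sketch: the action identifiers shown here are assumptions, so check the settings file `ecode` generates for the exact names:

```json
{
  "keybindings": {
    // Hypothetical action names; verify the real identifiers in your settings file.
    "add-chat": "mod+shift+return",
    "send-prompt": "mod+return"
  }
}
```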

## LLM Providers (`providers` section)

### Overview

The `providers` object defines the different LLM services available in the chat UI. `ecode` comes pre-configured with several popular providers (Anthropic, DeepSeek, Google, Mistral, OpenAI, XAI) and local providers (Ollama, LMStudio).

You generally don't need to modify the default providers unless you want to disable one or add/adjust model parameters (see the sketch below). However, you can add definitions for new or custom LLM providers and models.
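
For instance, disabling one of the bundled providers might look like this. A minimal sketch, assuming the bundled xAI provider uses the key `xai` (verify the identifier in your own settings file):

```json
{
  "providers": {
    // Assumed provider identifier; the actual key may differ.
    "xai": {
      "enabled": false
    }
  }
}
```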

### Adding Custom Providers

To add a new provider, add a new key-value pair to the `providers` object in your settings file. The key should be a unique identifier for your provider (e.g., `"my_custom_ollama"`), and the value should be an object describing the provider and its models.

**Provider Object Structure:**

```json
"your_provider_id": {
  "api_url": "string",          // Required: The base URL endpoint for the chat completion API.
  "display_name": "string",     // Optional: User-friendly name shown in the UI. Defaults to the provider ID if missing.
  "enabled": boolean,           // Optional: Set to `false` to disable this provider. Defaults to `true`.
  "version": number,            // Optional: Identifier for the API version if the provider requires it (e.g., 1 for Anthropic).
  "open_api": boolean,          // Optional: Set to `true` if the provider uses an OpenAI-compatible API schema (common for local models like Ollama or LMStudio).
  "fetch_models_url": "string", // Optional: URL to dynamically fetch the list of available models (e.g., from Ollama or LMStudio). If provided, the static `models` array below might be ignored or populated dynamically.
  "models": [                   // Required (unless `fetch_models_url` is used and sufficient): An array of model objects available from this provider.
    // Model object structure described below
  ]
}
```

**Model Object Structure (within the `models` array):**

```json
{
  "name": "string",              // Required: The internal model identifier used in API requests (e.g., "claude-3-5-sonnet-latest").
  "display_name": "string",      // Optional: User-friendly name shown in the model selection dropdown. Defaults to `name` if missing.
  "max_tokens": number,          // Optional: The maximum context window size (input tokens + output tokens) supported by the model.
  "max_output_tokens": number,   // Optional: The maximum number of tokens the model can generate in a single response.
  "default_temperature": number, // Optional: Default sampling temperature (controls randomness/creativity), typically between 0.0 and 2.0 (e.g., 1.0); defaults may vary per provider or model.
  "cheapest": boolean,           // Optional: Flags this model as a cheaper option; it is usually used to generate the chat summary when this provider is in use.
  "cache_configuration": {       // Optional: Configuration for internal caching or speculative execution features. May not apply to all providers or setups.
    "max_cache_anchors": number, // Specific caching parameter.
    "min_total_token": number,   // Specific caching parameter.
    "should_speculate": boolean  // Specific caching parameter.
  }
  // or "cache_configuration": null if not applicable/used for this model.
}
```

**Example: Adding a hypothetical local provider**

```json
{
  "providers": {
    // ... other existing providers ...
    "my_local_llm": {
      "api_url": "http://localhost:8080/v1/chat/completions",
      "display_name": "My Local LLM",
      "open_api": true, // Assuming it uses an OpenAI-compatible API
      "models": [
        {
          "name": "local-model-v1",
          "display_name": "Local Model V1",
          "max_tokens": 4096
        },
        {
          "name": "local-model-v2-experimental",
          "display_name": "Local Model V2 (Experimental)",
          "max_tokens": 8192
        }
      ]
    }
  }
}
```
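
If your local server can list its installed models, you may be able to rely on dynamic discovery instead of a static list by setting `fetch_models_url`. A hedged sketch assuming an Ollama-style server on its default port (the endpoint paths are assumptions; verify them against your server):

```json
{
  "providers": {
    "my_ollama": {
      "api_url": "http://localhost:11434/v1/chat/completions",
      "display_name": "My Ollama",
      "open_api": true,
      // Assumed model-listing endpoint (Ollama reports installed models at /api/tags).
      "fetch_models_url": "http://localhost:11434/api/tags",
      "models": [] // Expected to be populated dynamically from fetch_models_url.
    }
  }
}
```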

0 commit comments
