feat: Add model selector feature to chat interface #82
Conversation
Allow users to override the bot's default model when sending messages.

Changes:
- Added model preference management (save/restore from localStorage)
- Created ModelSelector component with model list fetching
- Integrated ModelSelector in ChatArea UI (positioned next to TeamSelector)
- Updated messageService to pass selectedModel parameter
- Extended CreateTaskRequest API to include override_model field
- Enhanced TaskCreate schema to accept override_model
- Added override_model_config column to Subtask model
- Modified task_kinds.py to store and apply override_model_config
- Updated get_task_detail to return overridden bot configs for display

Features:
- User model selection is saved to localStorage for new tasks
- Model selection can be changed when appending messages
- Frontend reads current task's model from latest ASSISTANT subtask
- Backend stores override in subtask and applies to bot config
- Fully backward compatible: no override means the bot's default config is used
Walkthrough

This PR introduces a feature enabling users to override the model/agent configuration for tasks. Changes include adding a nullable `override_model_config` column to the Subtask model.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant ChatArea
    participant ModelSelector
    participant messageService
    participant Backend
    participant DB as Database

    User->>ChatArea: Open chat, mount component
    ChatArea->>ModelSelector: Render with selectedModel state
    ModelSelector->>Backend: Fetch available models
    Backend-->>ModelSelector: Return model list
    ModelSelector->>ChatArea: Update with fetched models
    User->>ChatArea: Select model from dropdown
    ChatArea->>ChatArea: handleModelChange(selected)
    ChatArea->>localStorage: saveLastModel(selected)
    ChatArea->>ChatArea: setSelectedModel(selected)
    User->>ChatArea: Send message
    ChatArea->>messageService: sendMessage({..., selectedModel})
    messageService->>Backend: POST /task/message<br/>{..., override_model}
    rect rgb(200, 230, 255)
        note right of Backend: Task Service Processing
        Backend->>Backend: create_task_or_append()<br/>receives override_model
        Backend->>Backend: _create_subtasks(override_model)
        Backend->>Backend: Set override_model_config<br/>on each subtask
    end
    Backend->>DB: Persist subtasks with<br/>override_model_config
    DB-->>Backend: Success
    rect rgb(220, 240, 220)
        note right of Backend: Task Detail Rendering
        Backend->>Backend: get_task_detail()
        Backend->>Backend: For each subtask:<br/>Apply override_model_config<br/>to bot agent_config
    end
    Backend-->>ChatArea: Return updated task
    ChatArea-->>User: Display task with<br/>selected model applied
```
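As a rough Python sketch (hypothetical helper names; the real logic lives in `task_kinds.py`), the store-and-apply steps in the diagram amount to:

```python
def build_override_config(override_model):
    """Build the config stored on the subtask; None means 'use the bot default'."""
    if not override_model:
        return None
    return {"private_model": override_model}

def apply_to_bot(bot, override_config):
    """Return a copy of the bot dict with the override applied, if any."""
    bot = dict(bot)
    if override_config:
        bot["agent_config"] = override_config
    return bot

bot = {"id": "bot-1", "agent_config": {"private_model": "default-model"}}

# With an override: the subtask's stored config wins.
assert apply_to_bot(bot, build_override_config("fast-model"))["agent_config"] == {"private_model": "fast-model"}
# Without an override: the bot's default config is returned unchanged.
assert apply_to_bot(bot, build_override_config(None))["agent_config"] == {"private_model": "default-model"}
```

The model names here are placeholders; the backward-compatibility property is the point: a `None` override leaves the bot config untouched.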
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–25 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed
Actionable comments posted: 2
🧹 Nitpick comments (7)
backend/app/schemas/task.py (1)
**56-56**: Consider adding validation for model names.

While the optional field is correctly defined, there's no validation to ensure `override_model` contains a valid model name. Invalid model names could cause runtime errors when the bot attempts to use them. Consider adding a validator or constraint:

```python
from pydantic import field_validator

# Inside TaskCreate class:
@field_validator('override_model')
@classmethod
def validate_model_name(cls, v: Optional[str]) -> Optional[str]:
    if v is not None:
        # Add validation logic for allowed model names
        # Could check against a predefined list or pattern
        if not v.strip():
            raise ValueError("override_model cannot be empty string")
    return v
```

Alternatively, verify the model name exists when processing the task creation request in the service layer.
frontend/src/utils/userPreferences.ts (1)
**118-136**: Consider adding validation for empty model names.

Unlike `saveLastTeam`, which validates the `teamId` input (lines 48-51), `saveLastModel` doesn't check if `modelName` is empty or whitespace-only, which could store invalid preferences. Apply this diff to add validation:

```diff
 export function saveLastModel(modelName: string): void {
   try {
+    if (!modelName || !modelName.trim()) {
+      console.warn('[userPreferences] Invalid model name, not saving:', modelName);
+      return;
+    }
     localStorage.setItem(STORAGE_KEYS.LAST_MODEL, modelName);
   } catch (error) {
     console.warn('Failed to save last model to localStorage:', error);
   }
 }
```

Additionally, consider filtering out invalid values in `getLastModel`:

```diff
 export function getLastModel(): string | null {
   try {
-    return localStorage.getItem(STORAGE_KEYS.LAST_MODEL);
+    const model = localStorage.getItem(STORAGE_KEYS.LAST_MODEL);
+    if (!model || model === 'undefined' || model === 'null') {
+      return null;
+    }
+    return model;
   } catch (error) {
     console.warn('Failed to get last model from localStorage:', error);
     return null;
   }
 }
```

frontend/src/features/tasks/service/messageService.ts (1)
**55-55**: Clarify the null vs undefined conversion.

Line 55 uses `selectedModel ?? undefined` to convert `null` to `undefined`. While this works, the intent is worth documenting since:

- `JSON.stringify` treats `null` and `undefined` object properties differently (`null` is serialized, `undefined` is omitted)
- The backend accepts `Optional[str]`, which can be `None` (equivalent to both `null` and `undefined` in Python)

Consider simplifying to be explicit about the intent:

```diff
- override_model: selectedModel ?? undefined,
+ override_model: selectedModel || undefined, // Omit property if empty string
```

Or document why the nullish coalescing is preferred:

```ts
// Convert null to undefined to omit the field from the request body
override_model: selectedModel ?? undefined,
```

This ensures the property is omitted entirely when not set, rather than sent as `"override_model": null`.

frontend/src/features/tasks/components/ModelSelector.tsx (3)
**39-58**: Consider making `agent_name` configurable and adding retry logic.

The model fetching logic has two concerns:

1. **Hardcoded agent_name (line 45)**: `agent_name: 'ClaudeCode'` is hardcoded, which limits flexibility if different agents/bots need different model lists.
2. **No retry logic**: If the API call fails (network issue, temporary outage), the component permanently hides itself until page refresh.

Consider these improvements:

1. Make `agent_name` configurable:

```diff
 interface ModelSelectorProps {
   selectedModel: string | null;
   setSelectedModel: (model: string | null) => void;
   disabled?: boolean;
   isLoading?: boolean;
+  agentName?: string;
 }

 export default function ModelSelector({
   selectedModel,
   setSelectedModel,
   disabled = false,
   isLoading = false,
+  agentName = 'ClaudeCode',
 }: ModelSelectorProps) {
   // ...
   const response = await apiClient.get<{ data: ModelOption[] }>('/models/names', {
-    agent_name: 'ClaudeCode',
+    agent_name: agentName,
   });
```

2. Add retry logic for resilience:

```ts
const [retryCount, setRetryCount] = useState(0);
const MAX_RETRIES = 2;

useEffect(() => {
  const fetchModels = async () => {
    setModelsLoading(true);
    setError(null);
    try {
      const response = await apiClient.get<{ data: ModelOption[] }>('/models/names', {
        agent_name: 'ClaudeCode',
      });
      setModels(response.data || []);
      setRetryCount(0); // Reset on success
    } catch (err) {
      console.error('Failed to fetch models:', err);
      if (retryCount < MAX_RETRIES) {
        setRetryCount(prev => prev + 1);
        setTimeout(() => {
          // Trigger re-fetch by updating a dependency
        }, 1000 * (retryCount + 1)); // Exponential backoff
      } else {
        setError('Failed to load models');
        setModels([]);
      }
    } finally {
      setModelsLoading(false);
    }
  };
  fetchModels();
}, [retryCount]);
```
**61-71**: Validation clears selection without user feedback.

Lines 61-71 silently clear the `selectedModel` if it's not found in the fetched models list. This could happen if:
- The stored model from localStorage is deprecated/removed
- The API returns a different model list than expected
- There's a race condition during model list loading
Users might be confused why their selection disappeared without explanation.
Consider providing feedback when clearing the selection:
```diff
+import { App } from 'antd';
+
 export default function ModelSelector({
   selectedModel,
   setSelectedModel,
   disabled = false,
   isLoading = false,
 }: ModelSelectorProps) {
+  const { message } = App.useApp();

   // ... existing code ...

   // Validate selectedModel exists in models list
   useEffect(() => {
     if (selectedModel && models.length > 0) {
       const exists = models.some(m => m.name === selectedModel);
       if (!exists) {
         console.warn(
           `Selected model "${selectedModel}" not found in models list, clearing selection`
         );
+        message.warning(
+          `Previously selected model "${selectedModel}" is no longer available. Please select a new model.`
+        );
         setSelectedModel(null);
       }
     }
   }, [selectedModel, models, setSelectedModel]);
```
**93-95**: Reconsider hiding the selector on error.

Returning `null` when models fail to load (lines 93-95) completely hides the model selector from the UI. This creates a poor user experience because:
- Users don't know the feature exists if it fails to load on first visit
- No way to manually retry without refreshing the page
- Silent failures are hard to debug
Consider showing a disabled selector with error state instead:
```diff
 if (error) {
-  return null; // Hide selector if models failed to load
+  return (
+    <div className="flex items-baseline space-x-1 min-w-0">
+      <CpuChipIcon className="w-3 h-3 text-error flex-shrink-0" />
+      <Tooltip title="Failed to load models. Please refresh the page.">
+        <Select
+          disabled
+          placeholder="Failed to load models"
+          size="small"
+          status="error"
+          style={{ width: 'auto', maxWidth: 200 }}
+        />
+      </Tooltip>
+    </div>
+  );
 }
```

This provides better visibility and helps users understand why the feature isn't working.
backend/app/services/adapters/task_kinds.py (1)
**895-914**: USER subtasks don't need `override_model_config`.

Lines 895-914 create a USER role subtask, but `override_model_config` is never set on it (correctly, since users don't execute with models). However, later at lines 916-920, `override_model_config` is created but only used for ASSISTANT subtasks.

The current code is correct, but the comment or variable naming could be clearer. Consider adding a comment to clarify:

```diff
 user_subtask = Subtask(
     user_id=user_id,
     task_id=task.id,
     team_id=team.id,
     title=f"{task_crd.spec.title} - User",
     bot_ids=bot_ids,
     role=SubtaskRole.USER,
     executor_namespace="",
     executor_name="",
     prompt=user_prompt,
     status=SubtaskStatus.COMPLETED,
     progress=0,
     message_id=next_message_id,
     parent_id=parent_id,
     error_message="",
     completed_at=datetime.now(),
     result=None,
+    # USER subtasks don't need override_model_config as they don't execute with models
 )
 db.add(user_subtask)

-# Prepare override_model_config if override_model is provided
+# Prepare override_model_config for ASSISTANT subtasks
 override_model_config = None
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- backend/app/models/subtask.py (1 hunks)
- backend/app/schemas/task.py (1 hunks)
- backend/app/services/adapters/task_kinds.py (7 hunks)
- frontend/src/apis/tasks.ts (1 hunks)
- frontend/src/features/tasks/components/ChatArea.tsx (7 hunks)
- frontend/src/features/tasks/components/ModelSelector.tsx (1 hunks)
- frontend/src/features/tasks/service/messageService.ts (2 hunks)
- frontend/src/utils/userPreferences.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/src/features/tasks/components/ModelSelector.tsx (2)
frontend/src/features/layout/hooks/useMediaQuery.ts (1)
- useMediaQuery (14-43)

frontend/src/apis/client.ts (1)

- apiClient (105-105)
frontend/src/features/tasks/components/ChatArea.tsx (3)
frontend/src/utils/userPreferences.ts (2)
- getLastModel (129-136)
- saveLastModel (118-124)

frontend/src/features/tasks/components/TeamSelector.tsx (1)

- TeamSelector (27-198)

frontend/src/features/tasks/components/ModelSelector.tsx (1)

- ModelSelector (25-135)
🔇 Additional comments (18)
frontend/src/apis/tasks.ts (1)
**22-22**: LGTM! Type definition aligns with backend schema.

The optional `override_model` field correctly mirrors the backend's `TaskCreate.override_model: Optional[str]` field, maintaining type consistency across the stack.

frontend/src/utils/userPreferences.ts (1)

**14-14**: LGTM! Storage key follows existing naming convention.

The `LAST_MODEL` key is consistent with other storage keys in the module.

frontend/src/features/tasks/components/ChatArea.tsx (7)
12-12: LGTM! Import added correctly.
22-22: LGTM! Preference utilities imported.
**51-51**: LGTM! State initialization appropriate.

The `selectedModel` state correctly starts as `null` to represent no selection.

**179-188**: LGTM! Model change handler follows existing pattern.

The implementation mirrors `handleTeamChange` and correctly persists the selection.
207-207: LGTM! Model propagated to message service.
**407-424**: Clarify the disabled behavior for ModelSelector.

The `ModelSelector` is rendered with `disabled={false}` (line 420) while `TeamSelector` uses `disabled={hasMessages}` (line 414). This inconsistency means:
- Team selection is locked when appending messages to existing tasks
- Model selection remains enabled when appending messages
Is this intentional? The PR description states "ability to change the model when appending messages to existing tasks," suggesting this is by design. However, this could lead to confusion:
- Users might expect the model to remain fixed for a task conversation
- Changing models mid-conversation could cause inconsistent behavior if the backend doesn't properly handle model switches
Please confirm:
- Is allowing model changes on existing tasks the intended behavior?
- Does the backend correctly handle model changes between messages in the same task?
- Should there be a visual indicator that changing the model mid-task might affect response quality/consistency?
If model changes mid-task are intentional, consider adding a tooltip or help text explaining this capability to users.
**119-149**: The model restoration logic is properly handled and does not require changes.

The two concerns raised do not reflect actual issues in the code:

1. **Null safety**: The optional chaining syntax (`bot.agent_config?.private_model`) combined with the truthiness check (`if (privateModel)`) already prevents errors when `agent_config` or `private_model` is missing.
2. **Timing/override risk**: The effects target mutually exclusive conditions. The first effect runs only when `!hasMessages` (no task selected), while the second runs only when `hasMessages && selectedTaskDetail?.subtasks` (task is selected). They cannot both execute in the same render cycle: when a task is selected, `hasMessages` transitions from false to true, causing the first effect's condition to become false while the second becomes true.

frontend/src/features/tasks/service/messageService.ts (2)
20-20: LGTM! Parameter type correctly defined.
22-22: LGTM! Destructuring includes new parameter.backend/app/services/adapters/task_kinds.py (6)
233-233: LGTM! Parameter correctly passed to subtask creation.
**846-846**: LGTM! Function signature updated correctly.

The optional `override_model` parameter with `None` default maintains backward compatibility.
850-850: LGTM! Logging added for observability.
**916-920**: LGTM! Config structure correctly created.

The override config format `{"private_model": override_model}` aligns with the expected `agent_config` structure.
964-964: LGTM! Override config correctly applied to pipeline subtasks.
998-998: LGTM! Override config correctly applied to non-pipeline subtasks.backend/app/models/subtask.py (1)
**46-46**: Migration script not found for `override_model_config` column addition.

The model column has been added to `backend/app/models/subtask.py:46`, and it's actively referenced in `backend/app/services/adapters/task_kinds.py`, but no corresponding migration script was found. The project uses manual migration scripts (e.g., `backend/migrate_encrypt_tokens.py`) rather than Alembic. A migration script must be created to add this column to existing database deployments.
```python
# Apply override_model_config to bot configs if present
subtask_bots = []
for bot_id in subtask.bot_ids:
    if bot_id in bots:
        bot_dict = bots[bot_id].copy()
        # If subtask has override_model_config, use it to override agent_config
        if subtask.override_model_config:
            bot_dict["agent_config"] = subtask.override_model_config
            logger.info(f"Applied override_model_config to bot {bot_id} in subtask {subtask.id}: {subtask.override_model_config}")
        subtask_bots.append(bot_dict)
```
Override replaces entire agent_config instead of merging.
Line 446 replaces the entire `agent_config` with `override_model_config`:

```python
bot_dict["agent_config"] = subtask.override_model_config
```

This means if the original `agent_config` had other important fields (e.g., `temperature`, `max_tokens`, `top_p`), they will be lost. Only `private_model` will remain.
Consider merging the override config with the existing config:
```diff
 if bot_id in bots:
     bot_dict = bots[bot_id].copy()
     # If subtask has override_model_config, use it to override agent_config
     if subtask.override_model_config:
-        bot_dict["agent_config"] = subtask.override_model_config
+        # Merge override config with existing config to preserve other settings
+        if bot_dict.get("agent_config"):
+            bot_dict["agent_config"] = {**bot_dict["agent_config"], **subtask.override_model_config}
+        else:
+            bot_dict["agent_config"] = subtask.override_model_config
         logger.info(f"Applied override_model_config to bot {bot_id} in subtask {subtask.id}: {subtask.override_model_config}")
     subtask_bots.append(bot_dict)
```

This preserves other configuration parameters while overriding only the `private_model` field.
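A quick standalone illustration of the dict-merge semantics this suggestion relies on, with placeholder config values: keys from the right-hand dict win on conflict, while unrelated keys survive.

```python
# Existing agent_config with settings beyond the model name (illustrative values).
existing = {"private_model": "default-model", "temperature": 0.7, "max_tokens": 1024}
override = {"private_model": "fast-model"}

# Full replacement (the current behavior) drops temperature and max_tokens.
replaced = override
assert "temperature" not in replaced

# Merge (the suggested behavior) keeps them while overriding only the model.
merged = {**existing, **override}
assert merged == {"private_model": "fast-model", "temperature": 0.7, "max_tokens": 1024}
```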
🤖 Prompt for AI Agents
In backend/app/services/adapters/task_kinds.py around lines 439-448, the code
currently replaces the entire bot_dict["agent_config"] with
subtask.override_model_config which discards other fields; instead, ensure
bot_dict has an agent_config dict (create empty dict if missing), make a shallow
copy of it, update that copy with subtask.override_model_config (so override
keys replace existing ones but other keys like temperature/max_tokens/top_p are
preserved), assign the merged dict back to bot_dict["agent_config"], and keep
the info log (optionally logging both original and merged configs or just the
applied override).
```python
# Prepare override_model_config if override_model is provided
override_model_config = None
if override_model:
    override_model_config = {"private_model": override_model}
    logger.info(f"Using override_model_config: {override_model_config}")
```
🧩 Analysis chain
Add validation for override_model value.
The code creates override_model_config without validating that override_model is a valid model name. This could lead to runtime errors when the executor tries to use an invalid model.
Add validation before creating the config:
```diff
 # Prepare override_model_config if override_model is provided
 override_model_config = None
 if override_model:
+    # Validate override_model is not empty
+    if not override_model.strip():
+        raise HTTPException(
+            status_code=400,
+            detail="override_model cannot be empty"
+        )
+    # TODO: Consider validating against available models from the models API
     override_model_config = {"private_model": override_model}
     logger.info(f"Using override_model_config: {override_model_config}")
```

Additionally, verify if there's a models service that can validate model names:
🏁 Script executed:
```shell
#!/bin/bash
# Description: Check for model validation services or available models lists

# Search for models API endpoints or services
echo "=== Checking for models API or validation services ==="
rg -n "def.*models" --type=py -g "backend/**" -A3

# Search for model name validation
echo "=== Checking for model name validation ==="
rg -n "validate.*model|model.*valid" --type=py -g "backend/**" -C3

# Look for /models/names endpoint implementation
echo "=== Checking /models/names endpoint ==="
rg -n "/models/names|models/names" --type=py -g "backend/**" -C5
```

Length of output: 44837
🏁 Script executed:
```shell
#!/bin/bash
# Examine context around lines 916-920 in task_kinds.py
echo "=== Code context around lines 916-920 ==="
sed -n '900,930p' backend/app/services/adapters/task_kinds.py

echo -e "\n=== Searching for override_model usage ==="
rg -n "override_model" --type=py -C5 -g "backend/**"

echo -e "\n=== Checking function signature where override_model is used ==="
rg -n "def.*override_model" --type=py -C3 -g "backend/**"
```

Length of output: 10999
🏁 Script executed:
#!/bin/bash
# Check where override_model_config is used downstream and if validation exists
echo "=== Searching for where override_model_config is processed ==="
rg -n "override_model_config" --type=py -C3 -g "backend/**"
echo -e "\n=== Checking models endpoint for validation capabilities ==="
cat backend/app/api/endpoints/adapter/models.py | head -150
echo -e "\n=== Checking if there's a method to validate model names ==="
rg -n "def.*valid.*model|def.*check.*model" --type=py -C2 -g "backend/**"Length of output: 8660
🏁 Script executed:
```shell
#!/bin/bash
# Check list_model_names implementation and related context
echo "=== Checking list_model_names implementation ==="
rg -n "def list_model_names" --type=py -C15 -g "backend/**"

echo -e "\n=== Checking if validation is done elsewhere for similar parameters ==="
rg -n "HTTPException.*400|raise.*ValueError" --type=py -C2 -g "backend/app/services/adapters/task_kinds.py" | head -50

echo -e "\n=== Check if override_model is ever validated at API level ==="
rg -n "override_model" --type=py backend/app/api/ -C3
```

Length of output: 5483
Add validation for override_model before creating config.
The code accepts override_model without validation and directly passes it to subtasks, which then override bot configurations. This could send invalid model names to the executor.
At minimum, validate that override_model is not empty/whitespace-only before creating the config. Additionally, consider validating against available models using the list_model_names() service (which exists at /models/names), though this would require accessing the agent_name from the team's bot references.
```diff
 # Prepare override_model_config if override_model is provided
 override_model_config = None
 if override_model:
+    if not override_model.strip():
+        raise HTTPException(
+            status_code=400,
+            detail="override_model cannot be empty"
+        )
+    # TODO: Consider validating against available models for the shells used in this team
     override_model_config = {"private_model": override_model}
     logger.info(f"Using override_model_config: {override_model_config}")
```
logger.info(f"Using override_model_config: {override_model_config}")Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In backend/app/services/adapters/task_kinds.py around lines 916 to 920, the code
builds override_model_config whenever override_model is truthy but does not
validate it; change this to first trim and ensure override_model is not empty or
only whitespace before creating override_model_config, log a warning and skip
creating the config if invalid, and (if agent_name/team context is available)
call the models name service (/models/names or list_model_names()) to verify the
override_model exists before using it — if the model is not found, log an error
and do not pass the invalid override_model to subtasks. Ensure logging includes
the problematic value and that behavior is consistent (no config created) when
validation fails.
Summary
This PR adds a model selector feature to the chat interface, allowing users to dynamically override the bot's default model when sending messages.
Key Features
Implementation Details
Frontend Changes
- `userPreferences.ts`: Added `saveLastModel()` and `getLastModel()` functions
- `ModelSelector.tsx`: New component that fetches models from the `/api/models/names` endpoint
- `ChatArea.tsx`: Integrated ModelSelector, added model restoration logic from localStorage and task detail
- `messageService.ts`: Extended to pass `selectedModel` to backend
- `tasks.ts`: Added `override_model` field to `CreateTaskRequest` interface

Backend Changes
- `task.py`: Added `override_model` field to `TaskCreate` schema
- `subtask.py`: Added `override_model_config` JSON column to Subtask model
- `task_kinds.py`: Updated `create_task_or_append()` to accept and pass `override_model`, `_create_subtasks()` to store the override config in subtasks, and `get_task_detail()` to apply the override config when returning bot data

Data Flow
1. Frontend sends `override_model` to the backend
2. Backend stores `{"private_model": "model_name"}` in the subtask's `override_model_config`
3. Task detail rendering applies the stored override to the bot's `agent_config` if present

Testing Checklist