Conversation

@qdaxb (Contributor) commented Nov 18, 2025

Summary

This PR adds a model selector feature to the chat interface, allowing users to dynamically override the bot's default model when sending messages.

Key Features

  • Model Selection UI: Added a ModelSelector component next to the TeamSelector in the chat interface
  • Persistent Preferences: User's model selection is saved to localStorage and restored for new tasks
  • Dynamic Override: Users can change the model when appending messages to existing tasks
  • Backend Support: Model override is stored in subtask and applied to bot configuration at runtime
  • Backward Compatible: If no model is selected, the system uses the bot's default configuration

Implementation Details

Frontend Changes

  • userPreferences.ts: Added saveLastModel() and getLastModel() functions
  • ModelSelector.tsx: New component that fetches models from /api/models/names endpoint
  • ChatArea.tsx: Integrated ModelSelector, added model restoration logic from localStorage and task detail
  • messageService.ts: Extended to pass selectedModel to backend
  • tasks.ts: Added override_model field to CreateTaskRequest interface

Backend Changes

  • task.py: Added override_model field to TaskCreate schema
  • subtask.py: Added override_model_config JSON column to Subtask model
  • task_kinds.py:
    • Modified create_task_or_append() to accept and pass override_model
    • Updated _create_subtasks() to store override config in subtasks
    • Enhanced get_task_detail() to apply override config when returning bot data
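
The TaskCreate change above can be sketched as a minimal Pydantic model. Only override_model comes from this PR; title and prompt are placeholder fields standing in for the rest of the real schema, which isn't shown here:

```python
from typing import Optional
from pydantic import BaseModel

class TaskCreate(BaseModel):
    # Placeholder fields; the real schema carries more (team ref, etc.)
    title: str
    prompt: str
    # New in this PR: optional model override. None means the bot's
    # default configuration is used (backward compatible).
    override_model: Optional[str] = None

task = TaskCreate(title="demo", prompt="hi", override_model="example-model")
print(task.override_model)  # example-model
```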

Data Flow

  1. User selects model in UI → saved to localStorage
  2. User sends message → frontend passes override_model to backend
  3. Backend stores {"private_model": "model_name"} in subtask's override_model_config
  4. When fetching task detail, backend applies override to bot's agent_config if present
  5. Frontend reads current model from latest ASSISTANT subtask for display
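
Steps 3 and 4 of the flow above can be sketched in plain Python. The key names (override_model, private_model, agent_config) follow the PR; the merge behavior shown here, which preserves unrelated agent_config keys, is one reasonable choice for illustration rather than a confirmed detail of the implementation:

```python
def build_override_config(override_model):
    """Step 3: derive the stored config from the request field."""
    if override_model:
        return {"private_model": override_model}
    return None  # no override -> bot default config applies


def apply_override(bot_agent_config, override_model_config):
    """Step 4: merge the stored override into the bot's agent_config,
    keeping unrelated keys such as temperature intact."""
    if not override_model_config:
        return bot_agent_config
    return {**(bot_agent_config or {}), **override_model_config}


stored = build_override_config("example-model")
merged = apply_override({"private_model": "default", "temperature": 0.2}, stored)
print(merged)  # {'private_model': 'example-model', 'temperature': 0.2}
```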

Testing Checklist

  • Model selector appears in chat interface
  • Model selection is saved to localStorage
  • Model is restored when creating new task
  • Model override is applied when sending messages
  • Current task's model is displayed when appending messages
  • Backward compatibility: existing tasks without override work correctly

Summary by CodeRabbit

  • New Features
    • Added model selection dropdown in the chat interface for task creation
    • Last selected model preference is automatically saved and restored on subsequent use

Allow users to override bot's default model when sending messages.

Changes:
- Added model preference management (save/restore from localStorage)
- Created ModelSelector component with model list fetching
- Integrated ModelSelector in ChatArea UI (positioned next to TeamSelector)
- Updated messageService to pass selectedModel parameter
- Extended CreateTaskRequest API to include override_model field
- Enhanced TaskCreate schema to accept override_model
- Added override_model_config column to Subtask model
- Modified task_kinds.py to store and apply override_model_config
- Updated get_task_detail to return overridden bot configs for display

Features:
- User model selection is saved to localStorage for new tasks
- Model selection can be changed when appending messages
- Frontend reads current task's model from latest ASSISTANT subtask
- Backend stores override in subtask and applies to bot config
- Fully backward compatible: no override means use bot's default config
@coderabbitai bot commented Nov 18, 2025

Walkthrough

This PR introduces a feature enabling users to override the model/agent configuration for tasks. Changes include adding a nullable override_model_config field to the Subtask model, extending task creation schemas to accept an override_model parameter, propagating this parameter through subtask creation and detail retrieval logic, adding a UI component for model selection in the chat area, and persisting the selected model preference in localStorage.

Changes

  • Backend Data Models (backend/app/models/subtask.py, backend/app/schemas/task.py):
    Added an override_model_config JSON field to the Subtask model and an optional override_model string field to the TaskCreate schema to support model override configuration.
  • Backend Task Service Logic (backend/app/services/adapters/task_kinds.py):
    Updated _create_subtasks to accept and propagate the override_model parameter, derive override_model_config from it, assign it to created user and assistant subtasks, and apply per-subtask agent configuration overrides during task detail rendering.
  • Frontend API Interface (frontend/src/apis/tasks.ts):
    Extended the CreateTaskRequest interface with an optional override_model string field to align with the backend schema changes.
  • Frontend UI Components (frontend/src/features/tasks/components/ChatArea.tsx, frontend/src/features/tasks/components/ModelSelector.tsx):
    Integrated the new ModelSelector component into the chat input area with state management for selectedModel; ModelSelector fetches available models from the API, validates the selection, and renders an antd Select with search and loading states.
  • Frontend Services & Utilities (frontend/src/features/tasks/service/messageService.ts, frontend/src/utils/userPreferences.ts):
    Extended sendMessage to accept and thread the selectedModel parameter into the task API payload; added saveLastModel and getLastModel utilities to persist and retrieve the selected model from localStorage.

Sequence Diagram

sequenceDiagram
    actor User
    participant ChatArea
    participant ModelSelector
    participant messageService
    participant Backend
    participant DB as Database

    User->>ChatArea: Open chat, mount component
    ChatArea->>ModelSelector: Render with selectedModel state
    ModelSelector->>Backend: Fetch available models
    Backend-->>ModelSelector: Return model list
    ModelSelector->>ChatArea: Update with fetched models

    User->>ChatArea: Select model from dropdown
    ChatArea->>ChatArea: handleModelChange(selected)
    ChatArea->>localStorage: saveLastModel(selected)
    ChatArea->>ChatArea: setSelectedModel(selected)

    User->>ChatArea: Send message
    ChatArea->>messageService: sendMessage({..., selectedModel})
    messageService->>Backend: POST /task/message<br/>{..., override_model}
    
    rect rgb(200, 230, 255)
    note right of Backend: Task Service Processing
    Backend->>Backend: create_task_or_append()<br/>receives override_model
    Backend->>Backend: _create_subtasks(override_model)
    Backend->>Backend: Set override_model_config<br/>on each subtask
    end
    
    Backend->>DB: Persist subtasks with<br/>override_model_config
    DB-->>Backend: Success

    rect rgb(220, 240, 220)
    note right of Backend: Task Detail Rendering
    Backend->>Backend: get_task_detail()
    Backend->>Backend: For each subtask:<br/>Apply override_model_config<br/>to bot agent_config
    end

    Backend-->>ChatArea: Return updated task
    ChatArea-->>User: Display task with<br/>selected model applied

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–25 minutes

  • Attention areas:
    • backend/app/services/adapters/task_kinds.py: Verify that override_model_config is correctly threaded through all three subtask creation paths (user, assistant in non-pipeline, assistant in pipeline mode) and that the override logic in get_task_detail properly applies per-subtask configuration without breaking existing bot rendering.
    • frontend/src/features/tasks/components/ModelSelector.tsx: Confirm error handling for model fetch failures, proper validation logic when clearing invalid selections, and mobile responsiveness of the Select component.
    • frontend/src/features/tasks/components/ChatArea.tsx: Ensure state initialization correctly reads from localStorage and ASSISTANT subtask, and that the sendMessage payload is correctly constructed with the selectedModel value.
    • Integration flow: Trace the override_model value from UI selection through localStorage persistence, into messageService payload, and verify it reaches backend with correct key names (override_model vs. override_model_config).

Poem

🐰 A model menu hops into view,
Where tasks may choose which bot to pursue!
With LocalStorage holding each pick,
The overrides apply—click-click-click!
From selection to subtask, the flow runs true! 🎯

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: adding a model selector feature to the chat interface, which is reflected across frontend UI components (ModelSelector, ChatArea), backend schema updates, and the data flow implementation.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 80.00%, which meets the required threshold of 80.00%.

@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (7)
backend/app/schemas/task.py (1)

56-56: Consider adding validation for model names.

While the optional field is correctly defined, there's no validation to ensure override_model contains a valid model name. Invalid model names could cause runtime errors when the bot attempts to use them.

Consider adding a validator or constraint:

from typing import Optional
from pydantic import field_validator

# Inside TaskCreate class:
@field_validator('override_model')
@classmethod
def validate_model_name(cls, v: Optional[str]) -> Optional[str]:
    if v is not None:
        # Add validation logic for allowed model names
        # Could check against a predefined list or pattern
        if not v.strip():
            raise ValueError("override_model cannot be empty string")
    return v

Alternatively, verify the model name exists when processing the task creation request in the service layer.

frontend/src/utils/userPreferences.ts (1)

118-136: Consider adding validation for empty model names.

Unlike saveLastTeam which validates the teamId input (lines 48-51), saveLastModel doesn't check if modelName is empty or whitespace-only, which could store invalid preferences.

Apply this diff to add validation:

 export function saveLastModel(modelName: string): void {
   try {
+    if (!modelName || !modelName.trim()) {
+      console.warn('[userPreferences] Invalid model name, not saving:', modelName);
+      return;
+    }
     localStorage.setItem(STORAGE_KEYS.LAST_MODEL, modelName);
   } catch (error) {
     console.warn('Failed to save last model to localStorage:', error);
   }
 }

Additionally, consider filtering out invalid values in getLastModel:

 export function getLastModel(): string | null {
   try {
-    return localStorage.getItem(STORAGE_KEYS.LAST_MODEL);
+    const model = localStorage.getItem(STORAGE_KEYS.LAST_MODEL);
+    if (!model || model === 'undefined' || model === 'null') {
+      return null;
+    }
+    return model;
   } catch (error) {
     console.warn('Failed to get last model from localStorage:', error);
     return null;
   }
 }
frontend/src/features/tasks/service/messageService.ts (1)

55-55: Clarify the null vs undefined conversion.

Line 55 uses selectedModel ?? undefined to convert null to undefined. While this works, the distinction might not be necessary since:

  1. JSON.stringify treats null and undefined object properties differently (null is serialized, undefined is omitted)
  2. The backend accepts Optional[str] which can be None (equivalent to both null and undefined in Python)

Consider simplifying to be explicit about the intent:

-    override_model: selectedModel ?? undefined,
+    override_model: selectedModel || undefined,  // Omit property if empty string

Or document why the nullish coalescing is preferred:

// Convert null to undefined to omit the field from the request body
override_model: selectedModel ?? undefined,

This ensures the property is omitted entirely when not set, rather than sent as "override_model": null.

frontend/src/features/tasks/components/ModelSelector.tsx (3)

39-58: Consider making agent_name configurable and adding retry logic.

The model fetching logic has two concerns:

  1. Hardcoded agent_name (line 45): agent_name: 'ClaudeCode' is hardcoded, which limits flexibility if different agents/bots need different model lists.

  2. No retry logic: If the API call fails (network issue, temporary outage), the component permanently hides itself until page refresh.

Consider these improvements:

1. Make agent_name configurable:

 interface ModelSelectorProps {
   selectedModel: string | null;
   setSelectedModel: (model: string | null) => void;
   disabled?: boolean;
   isLoading?: boolean;
+  agentName?: string;
 }

 export default function ModelSelector({
   selectedModel,
   setSelectedModel,
   disabled = false,
   isLoading = false,
+  agentName = 'ClaudeCode',
 }: ModelSelectorProps) {
   // ...
   const response = await apiClient.get<{ data: ModelOption[] }>('/models/names', {
-    agent_name: 'ClaudeCode',
+    agent_name: agentName,
   });

2. Add retry logic for resilience:

const [retryCount, setRetryCount] = useState(0);
const MAX_RETRIES = 2;

useEffect(() => {
  const fetchModels = async () => {
    setModelsLoading(true);
    setError(null);
    try {
      const response = await apiClient.get<{ data: ModelOption[] }>('/models/names', {
        agent_name: 'ClaudeCode',
      });
      setModels(response.data || []);
      setRetryCount(0); // Reset on success
    } catch (err) {
      console.error('Failed to fetch models:', err);
      if (retryCount < MAX_RETRIES) {
        setTimeout(() => {
          setRetryCount(prev => prev + 1); // bumping retryCount re-runs this effect
        }, 1000 * (retryCount + 1)); // Linear backoff: 1s, then 2s
      } else {
        setError('Failed to load models');
        setModels([]);
      }
    } finally {
      setModelsLoading(false);
    }
  };

  fetchModels();
}, [retryCount]);

61-71: Validation clears selection without user feedback.

Lines 61-71 silently clear the selectedModel if it's not found in the fetched models list. This could happen if:

  1. The stored model from localStorage is deprecated/removed
  2. The API returns a different model list than expected
  3. There's a race condition during model list loading

Users might be confused why their selection disappeared without explanation.

Consider providing feedback when clearing the selection:

+import { App } from 'antd';
+
 export default function ModelSelector({
   selectedModel,
   setSelectedModel,
   disabled = false,
   isLoading = false,
 }: ModelSelectorProps) {
+  const { message } = App.useApp();
   // ... existing code ...
   
   // Validate selectedModel exists in models list
   useEffect(() => {
     if (selectedModel && models.length > 0) {
       const exists = models.some(m => m.name === selectedModel);
       if (!exists) {
         console.warn(
           `Selected model "${selectedModel}" not found in models list, clearing selection`
         );
+        message.warning(
+          `Previously selected model "${selectedModel}" is no longer available. Please select a new model.`
+        );
         setSelectedModel(null);
       }
     }
   }, [selectedModel, models, setSelectedModel]);

93-95: Reconsider hiding the selector on error.

Returning null when models fail to load (line 93-95) completely hides the model selector from the UI. This creates a poor user experience because:

  1. Users don't know the feature exists if it fails to load on first visit
  2. No way to manually retry without refreshing the page
  3. Silent failures are hard to debug

Consider showing a disabled selector with error state instead:

   if (error) {
-    return null; // Hide selector if models failed to load
+    return (
+      <div className="flex items-baseline space-x-1 min-w-0">
+        <CpuChipIcon className="w-3 h-3 text-error flex-shrink-0" />
+        <Tooltip title="Failed to load models. Please refresh the page.">
+          <Select
+            disabled
+            placeholder="Failed to load models"
+            size="small"
+            status="error"
+            style={{ width: 'auto', maxWidth: 200 }}
+          />
+        </Tooltip>
+      </div>
+    );
   }

This provides better visibility and helps users understand why the feature isn't working.

backend/app/services/adapters/task_kinds.py (1)

895-914: USER subtasks don't need override_model_config.

Lines 895-914 create a USER role subtask, but override_model_config is never set on it (correctly, since users don't execute with models). However, later at lines 916-920, override_model_config is created but only used for ASSISTANT subtasks.

The current code is correct, but the comment or variable naming could be clearer.

Consider adding a comment to clarify:

         user_subtask = Subtask(
             user_id=user_id,
             task_id=task.id,
             team_id=team.id,
             title=f"{task_crd.spec.title} - User",
             bot_ids=bot_ids,
             role=SubtaskRole.USER,
             executor_namespace="",
             executor_name="",
             prompt=user_prompt,
             status=SubtaskStatus.COMPLETED,
             progress=0,
             message_id=next_message_id,
             parent_id=parent_id,
             error_message="",
             completed_at=datetime.now(),
             result=None,
+            # USER subtasks don't need override_model_config as they don't execute with models
         )
         db.add(user_subtask)

-        # Prepare override_model_config if override_model is provided
+        # Prepare override_model_config for ASSISTANT subtasks
         override_model_config = None
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f93f4f0 and 597bbcc.

📒 Files selected for processing (8)
  • backend/app/models/subtask.py (1 hunks)
  • backend/app/schemas/task.py (1 hunks)
  • backend/app/services/adapters/task_kinds.py (7 hunks)
  • frontend/src/apis/tasks.ts (1 hunks)
  • frontend/src/features/tasks/components/ChatArea.tsx (7 hunks)
  • frontend/src/features/tasks/components/ModelSelector.tsx (1 hunks)
  • frontend/src/features/tasks/service/messageService.ts (2 hunks)
  • frontend/src/utils/userPreferences.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/src/features/tasks/components/ModelSelector.tsx (2)
frontend/src/features/layout/hooks/useMediaQuery.ts (1)
  • useMediaQuery (14-43)
frontend/src/apis/client.ts (1)
  • apiClient (105-105)
frontend/src/features/tasks/components/ChatArea.tsx (3)
frontend/src/utils/userPreferences.ts (2)
  • getLastModel (129-136)
  • saveLastModel (118-124)
frontend/src/features/tasks/components/TeamSelector.tsx (1)
  • TeamSelector (27-198)
frontend/src/features/tasks/components/ModelSelector.tsx (1)
  • ModelSelector (25-135)
🔇 Additional comments (18)
frontend/src/apis/tasks.ts (1)

22-22: LGTM! Type definition aligns with backend schema.

The optional override_model field correctly mirrors the backend's TaskCreate.override_model: Optional[str] field, maintaining type consistency across the stack.

frontend/src/utils/userPreferences.ts (1)

14-14: LGTM! Storage key follows existing naming convention.

The LAST_MODEL key is consistent with other storage keys in the module.

frontend/src/features/tasks/components/ChatArea.tsx (7)

12-12: LGTM! Import added correctly.


22-22: LGTM! Preference utilities imported.


51-51: LGTM! State initialization appropriate.

The selectedModel state correctly starts as null to represent no selection.


179-188: LGTM! Model change handler follows existing pattern.

The implementation mirrors handleTeamChange and correctly persists the selection.


207-207: LGTM! Model propagated to message service.


407-424: Clarify the disabled behavior for ModelSelector.

The ModelSelector is rendered with disabled={false} (line 420) while TeamSelector uses disabled={hasMessages} (line 414). This inconsistency means:

  • Team selection is locked when appending messages to existing tasks
  • Model selection remains enabled when appending messages

Is this intentional? The PR description states "ability to change the model when appending messages to existing tasks," suggesting this is by design. However, this could lead to confusion:

  1. Users might expect the model to remain fixed for a task conversation
  2. Changing models mid-conversation could cause inconsistent behavior if the backend doesn't properly handle model switches

Please confirm:

  1. Is allowing model changes on existing tasks the intended behavior?
  2. Does the backend correctly handle model changes between messages in the same task?
  3. Should there be a visual indicator that changing the model mid-task might affect response quality/consistency?

If model changes mid-task are intentional, consider adding a tooltip or help text explaining this capability to users.


119-149: The model restoration logic is properly handled and does not require changes.

The two concerns raised do not reflect actual issues in the code:

  1. Null safety: The optional chaining syntax (bot.agent_config?.private_model) combined with the truthiness check (if (privateModel)) already prevents errors when agent_config or private_model is missing.

  2. Timing/override risk: The effects target mutually exclusive conditions. The first effect runs only when !hasMessages (no task selected), while the second runs only when hasMessages && selectedTaskDetail?.subtasks (task is selected). They cannot both execute in the same render cycle—when a task is selected, hasMessages transitions from false to true, causing the first effect's condition to become false while the second becomes true.

frontend/src/features/tasks/service/messageService.ts (2)

20-20: LGTM! Parameter type correctly defined.


22-22: LGTM! Destructuring includes new parameter.

backend/app/services/adapters/task_kinds.py (6)

233-233: LGTM! Parameter correctly passed to subtask creation.


846-846: LGTM! Function signature updated correctly.

The optional override_model parameter with None default maintains backward compatibility.


850-850: LGTM! Logging added for observability.


916-920: LGTM! Config structure correctly created.

The override config format {"private_model": override_model} aligns with the expected agent_config structure.


964-964: LGTM! Override config correctly applied to pipeline subtasks.


998-998: LGTM! Override config correctly applied to non-pipeline subtasks.

backend/app/models/subtask.py (1)

46-46: Migration script not found for override_model_config column addition.

The model column has been added to backend/app/models/subtask.py:46, and it's actively referenced in backend/app/services/adapters/task_kinds.py, but no corresponding migration script was found. The project uses manual migration scripts (e.g., backend/migrate_encrypt_tokens.py) rather than Alembic. A migration script must be created to add this column to existing database deployments.
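
Since the project uses manual migration scripts rather than Alembic, the missing migration could look roughly like the sketch below. The subtasks table name, the use of SQLite, and the script shape are all assumptions for illustration; the real script would target the deployment database and its dialect's JSON type:

```python
# Hypothetical manual migration adding the override_model_config column.
# Demonstrated against an in-memory SQLite database for illustration only.
import sqlite3

def migrate(conn):
    # Idempotent: only add the column if it is missing
    cols = [row[1] for row in conn.execute("PRAGMA table_info(subtasks)")]
    if "override_model_config" not in cols:
        # JSON is stored as TEXT in SQLite; other backends may use a JSON type
        conn.execute("ALTER TABLE subtasks ADD COLUMN override_model_config TEXT")
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subtasks (id INTEGER PRIMARY KEY, title TEXT)")
migrate(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(subtasks)")]
print("override_model_config" in cols)  # True
```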

Comment on lines +439 to +448
            # Apply override_model_config to bot configs if present
            subtask_bots = []
            for bot_id in subtask.bot_ids:
                if bot_id in bots:
                    bot_dict = bots[bot_id].copy()
                    # If subtask has override_model_config, use it to override agent_config
                    if subtask.override_model_config:
                        bot_dict["agent_config"] = subtask.override_model_config
                        logger.info(f"Applied override_model_config to bot {bot_id} in subtask {subtask.id}: {subtask.override_model_config}")
                    subtask_bots.append(bot_dict)

⚠️ Potential issue | 🟠 Major

Override replaces entire agent_config instead of merging.

Line 446 replaces the entire agent_config with override_model_config:

bot_dict["agent_config"] = subtask.override_model_config

This means if the original agent_config had other important fields (e.g., temperature, max_tokens, top_p), they will be lost. Only private_model will remain.

Consider merging the override config with the existing config:

                 if bot_id in bots:
                     bot_dict = bots[bot_id].copy()
                     # If subtask has override_model_config, use it to override agent_config
                     if subtask.override_model_config:
-                        bot_dict["agent_config"] = subtask.override_model_config
+                        # Merge override config with existing config to preserve other settings
+                        if bot_dict.get("agent_config"):
+                            bot_dict["agent_config"] = {**bot_dict["agent_config"], **subtask.override_model_config}
+                        else:
+                            bot_dict["agent_config"] = subtask.override_model_config
                         logger.info(f"Applied override_model_config to bot {bot_id} in subtask {subtask.id}: {subtask.override_model_config}")
                     subtask_bots.append(bot_dict)

This preserves other configuration parameters while overriding only the private_model field.

🤖 Prompt for AI Agents
In backend/app/services/adapters/task_kinds.py around lines 439-448, the code
currently replaces the entire bot_dict["agent_config"] with
subtask.override_model_config which discards other fields; instead, ensure
bot_dict has an agent_config dict (create empty dict if missing), make a shallow
copy of it, update that copy with subtask.override_model_config (so override
keys replace existing ones but other keys like temperature/max_tokens/top_p are
preserved), assign the merged dict back to bot_dict["agent_config"], and keep
the info log (optionally logging both original and merged configs or just the
applied override).

Comment on lines +916 to +920
        # Prepare override_model_config if override_model is provided
        override_model_config = None
        if override_model:
            override_model_config = {"private_model": override_model}
            logger.info(f"Using override_model_config: {override_model_config}")

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Add validation for override_model value.

The code creates override_model_config without validating that override_model is a valid model name. This could lead to runtime errors when the executor tries to use an invalid model.

Add validation before creating the config:

         # Prepare override_model_config if override_model is provided
         override_model_config = None
         if override_model:
+            # Validate override_model is not empty
+            if not override_model.strip():
+                raise HTTPException(
+                    status_code=400,
+                    detail="override_model cannot be empty"
+                )
+            # TODO: Consider validating against available models from the models API
             override_model_config = {"private_model": override_model}
             logger.info(f"Using override_model_config: {override_model_config}")

Additionally, verify if there's a models service that can validate model names:


🏁 Script executed:

#!/bin/bash
# Description: Check for model validation services or available models lists

# Search for models API endpoints or services
echo "=== Checking for models API or validation services ==="
rg -n "def.*models" --type=py -g "backend/**" -A3

# Search for model name validation
echo "=== Checking for model name validation ==="
rg -n "validate.*model|model.*valid" --type=py -g "backend/**" -C3

# Look for /models/names endpoint implementation
echo "=== Checking /models/names endpoint ==="
rg -n "/models/names|models/names" --type=py -g "backend/**" -C5

Length of output: 44837


🏁 Script executed:

#!/bin/bash
# Examine context around lines 916-920 in task_kinds.py

echo "=== Code context around lines 916-920 ==="
sed -n '900,930p' backend/app/services/adapters/task_kinds.py

echo -e "\n=== Searching for override_model usage ==="
rg -n "override_model" --type=py -C5 -g "backend/**"

echo -e "\n=== Checking function signature where override_model is used ==="
rg -n "def.*override_model" --type=py -C3 -g "backend/**"

Length of output: 10999



🏁 Script executed:

#!/bin/bash
# Check where override_model_config is used downstream and if validation exists

echo "=== Searching for where override_model_config is processed ==="
rg -n "override_model_config" --type=py -C3 -g "backend/**"

echo -e "\n=== Checking models endpoint for validation capabilities ==="
cat backend/app/api/endpoints/adapter/models.py | head -150

echo -e "\n=== Checking if there's a method to validate model names ==="
rg -n "def.*valid.*model|def.*check.*model" --type=py -C2 -g "backend/**"

Length of output: 8660



🏁 Script executed:

#!/bin/bash
# Check list_model_names implementation and related context

echo "=== Checking list_model_names implementation ==="
rg -n "def list_model_names" --type=py -C15 -g "backend/**"

echo -e "\n=== Checking if validation is done elsewhere for similar parameters ==="
rg -n "HTTPException.*400|raise.*ValueError" --type=py -C2 -g "backend/app/services/adapters/task_kinds.py" | head -50

echo -e "\n=== Check if override_model is ever validated at API level ==="
rg -n "override_model" --type=py backend/app/api/ -C3

Length of output: 5483


Add validation for override_model before creating config.

The code accepts override_model without validation and directly passes it to subtasks, which then override bot configurations. This could send invalid model names to the executor.

At minimum, validate that override_model is not empty/whitespace-only before creating the config. Additionally, consider validating against available models using the list_model_names() service (which exists at /models/names), though this would require accessing the agent_name from the team's bot references.

        # Prepare override_model_config if override_model is provided
        override_model_config = None
        if override_model:
+            if not override_model.strip():
+                raise HTTPException(
+                    status_code=400,
+                    detail="override_model cannot be empty"
+                )
+            # TODO: Consider validating against available models for the shells used in this team
             override_model_config = {"private_model": override_model}
             logger.info(f"Using override_model_config: {override_model_config}")

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backend/app/services/adapters/task_kinds.py around lines 916 to 920, the code
builds override_model_config whenever override_model is truthy but does not
validate it; change this to first trim and ensure override_model is not empty or
only whitespace before creating override_model_config, log a warning and skip
creating the config if invalid, and (if agent_name/team context is available)
call the models name service (/models/names or list_model_names()) to verify the
override_model exists before using it — if the model is not found, log an error
and do not pass the invalid override_model to subtasks. Ensure logging includes
the problematic value and that behavior is consistent (no config created) when
validation fails.

@qdaxb qdaxb closed this Nov 29, 2025