Implement AI "stop" -- in the client, open ai responses/chat, and gemini backends #2704
Conversation
**Walkthrough**

This pull request adds comprehensive connection architecture docs, front-end connection docs, and multiple code changes: refactors tab-model exports and introduces `maybeUseTabModel`; conditions AIPanel rendering on the active tab; updates the AIPanel input to show a Stop Response button during streaming; exposes new public fields and adds `reloadChatFromBackend` and `stopResponse` on `WaveAIModel`; adds `ChatStore.RemoveMessage`; adds client-disconnect partial-message handling across the Gemini/OpenAI/OpenAI Chat streaming paths; introduces tool-use removal/update functions and refactors tool-call processing; and exposes `SSEHandlerCh.Context()`.

**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~45 minutes
**Pre-merge checks and finishing touches**

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 3
🧹 Nitpick comments (4)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)
**461-506: Consistent implementation with OpenAI backend.** The `RemoveToolUseCall` function correctly:

- Uses `slices.Clone` before `slices.Delete` to avoid mutating the original slice
- Removes the entire message if no parts remain
- Returns `nil` (not an error) when the tool call is not found, matching the OpenAI backend behavior

One minor observation: line 500 ignores the error from `PostMessage`. Consider logging or returning the error for debugging purposes.

🔎 Optional: Handle PostMessage error

```diff
 if len(updatedMsg.Parts) == 0 {
 	chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
 } else {
 	aiOpts := &uctypes.AIOptsType{
 		APIType:    chat.APIType,
 		Model:      chat.Model,
 		APIVersion: chat.APIVersion,
 	}
-	chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
+	if err := chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg); err != nil {
+		return err
+	}
 }
```

aiprompts/conn-arch.md (1)
**9-48: Add language identifier to fenced code block.** Per static analysis (markdownlint MD040), fenced code blocks should have a language specified. This ASCII diagram block and similar blocks throughout the file (lines 90, 137, 199, 326, 426) could use a language identifier like `text` or `plaintext` for consistency.

🔎 Example fix

````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────────────┐
 │                        Block Controllers                        │
````

aiprompts/fe-conn-arch.md (1)
**1-1002: Excellent comprehensive architecture documentation.** This documentation provides thorough coverage of Wave Terminal's frontend connection architecture, including:

- Clear architecture diagrams and data flow patterns
- Detailed component responsibilities and interactions
- Code examples that align with the actual codebase
- Connection lifecycle and error handling patterns
- Best practices for view model and RPC usage

The documentation will be valuable for developers working with the connection system.

Optional: Address markdown linting suggestions

The static analysis tools suggest minor markdown style improvements:

- Add language identifiers to plain code blocks (lines 9, 627, 655, 689) - helps with syntax highlighting
- Consider using proper headings instead of bold text for section titles (multiple locations) - improves document structure and navigation

These are purely stylistic and don't affect the technical content.
frontend/app/aipanel/waveai-model.tsx (1)
**45-50: Field visibility changes enable new public API.** Making these fields public allows the new `stopResponse()` method to access the underlying useChat functions. While this exposes internal state, it's a pragmatic approach for implementing the stop functionality.

Consideration: If more external access patterns emerge in the future, consider adding getter methods or creating a dedicated interface for external interactions to maintain better encapsulation.
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)

- aiprompts/conn-arch.md (1 hunks)
- aiprompts/fe-conn-arch.md (1 hunks)
- frontend/app/aipanel/aipanel.tsx (2 hunks)
- frontend/app/aipanel/aipanelinput.tsx (1 hunks)
- frontend/app/aipanel/waveai-model.tsx (3 hunks)
- frontend/app/app.tsx (2 hunks)
- frontend/app/store/tab-model.ts (3 hunks)
- frontend/app/workspace/workspace.tsx (2 hunks)
- pkg/aiusechat/chatstore/chatstore.go (2 hunks)
- pkg/aiusechat/gemini/gemini-backend.go (2 hunks)
- pkg/aiusechat/gemini/gemini-convertmessage.go (2 hunks)
- pkg/aiusechat/openai/openai-backend.go (7 hunks)
- pkg/aiusechat/openaichat/openaichat-backend.go (2 hunks)
- pkg/aiusechat/openaichat/openaichat-convertmessage.go (2 hunks)
- pkg/aiusechat/usechat-backend.go (5 hunks)
- pkg/aiusechat/usechat.go (5 hunks)
- pkg/web/sse/ssehandler.go (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
Applied to files:
- pkg/aiusechat/gemini/gemini-backend.go
- pkg/aiusechat/usechat-backend.go
- pkg/aiusechat/openaichat/openaichat-convertmessage.go
- pkg/aiusechat/usechat.go
📚 Learning: 2025-10-21T05:09:26.916Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2465
File: frontend/app/onboarding/onboarding-upgrade.tsx:13-21
Timestamp: 2025-10-21T05:09:26.916Z
Learning: In the waveterm codebase, clientData is loaded and awaited in wave.ts before React runs, ensuring it is always available when components mount. This means atoms.client will have data on first render.
Applied to files:
frontend/app/app.tsx
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.
Applied to files:
frontend/app/aipanel/aipanelinput.tsx
🧬 Code graph analysis (7)
frontend/app/aipanel/aipanel.tsx (1)
frontend/app/store/tab-model.ts (1)
- maybeUseTabModel (70-72)

frontend/app/workspace/workspace.tsx (1)
frontend/app/tab/tabcontent.tsx (1)
- TabContent (76-76)

pkg/aiusechat/gemini/gemini-backend.go (2)
pkg/aiusechat/uctypes/uctypes.go (2)
- WaveStopReason (255-261)
- StopKindCanceled (241-241)
pkg/aiusechat/gemini/gemini-types.go (2)
- GeminiChatMessage (15-20)
- GeminiMessagePart (43-66)

pkg/aiusechat/usechat-backend.go (3)
pkg/aiusechat/openai/openai-backend.go (1)
- RemoveToolUseCall (442-461)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)
- RemoveToolUseCall (461-506)
pkg/aiusechat/openaichat/openaichat-convertmessage.go (1)
- RemoveToolUseCall (363-394)

pkg/aiusechat/openaichat/openaichat-backend.go (2)
pkg/aiusechat/uctypes/uctypes.go (2)
- WaveStopReason (255-261)
- StopKindCanceled (241-241)
pkg/aiusechat/openaichat/openaichat-types.go (2)
- StoredChatMessage (118-122)
- ChatRequestMessage (23-29)

pkg/aiusechat/gemini/gemini-convertmessage.go (5)
pkg/aiusechat/openai/openai-backend.go (2)
- UpdateToolUseData (411-440)
- RemoveToolUseCall (442-461)
pkg/aiusechat/uctypes/uctypes.go (2)
- UIMessageDataToolUse (210-221)
- AIOptsType (270-284)
pkg/aiusechat/chatstore/chatstore.go (1)
- DefaultChatStore (19-21)
pkg/aiusechat/gemini/gemini-types.go (2)
- GeminiChatMessage (15-20)
- GeminiMessagePart (43-66)
pkg/wcloud/wcloud.go (1)
- APIVersion (37-37)

frontend/app/aipanel/aipanelinput.tsx (2)
frontend/app/element/tooltip.tsx (1)
- Tooltip (143-172)
frontend/util/util.ts (1)
- cn (509-509)
🪛 Biome (2.1.2)
frontend/app/aipanel/waveai-model.tsx
[error] 344-344: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.
For React to preserve state between calls, hooks need to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level
(lint/correctness/useHookAtTopLevel)
🪛 LanguageTool
aiprompts/fe-conn-arch.md
[style] ~63-~63: The double modal “needed Used” is nonstandard (only accepted in certain dialects). Consider “to be Used”.
Context: ...atom initialized to default if needed - Used by view models to track their connectio...
(NEEDS_FIXED)
[grammar] ~976-~976: Use a hyphen to join words.
Context: ...2000 } // 2 seconds 3. **Batch Related Operations:** typescript ...
(QB_NEW_EN_HYPHEN)
🪛 markdownlint-cli2 (0.18.1)
aiprompts/fe-conn-arch.md
9-9: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
45-45: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
55-55: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
65-65: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
78-78: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
93-93: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
132-132: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
214-214: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
457-457: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
465-465: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
473-473: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
481-481: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
493-493: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
504-504: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
515-515: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
529-529: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
545-545: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
556-556: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
569-569: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
580-580: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
627-627: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
655-655: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
689-689: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
aiprompts/conn-arch.md
9-9: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
90-90: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
137-137: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
199-199: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
326-326: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
426-426: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: Build for TestDriver.ai
🔇 Additional comments (33)
pkg/web/sse/ssehandler.go (1)
**92-94: LGTM!** Clean getter method that exposes the internal context for cancellation propagation. Since `ctx` is set once at construction via `MakeSSEHandlerCh` and never modified, it is safe to expose without synchronization.

pkg/aiusechat/chatstore/chatstore.go (1)
**114-129: LGTM!** Well-implemented thread-safe message removal. The use of `slices.DeleteFunc` is appropriate, and comparing lengths before/after is a clean way to determine whether a removal occurred. The lock is correctly held for the duration of the operation.
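A minimal sketch of this removal pattern, with stand-in types (the actual `ChatStore` fields differ):

```go
import (
	"slices"
	"sync"
)

type storedMessage struct {
	MessageId string
}

type chatData struct {
	Messages []*storedMessage
}

type ChatStore struct {
	lock  sync.Mutex
	chats map[string]*chatData
}

// RemoveMessage deletes a message by id under the store lock and reports
// whether anything was removed, using a length comparison before/after.
func (s *ChatStore) RemoveMessage(chatId string, messageId string) bool {
	s.lock.Lock()
	defer s.lock.Unlock()
	chat, ok := s.chats[chatId]
	if !ok {
		return false
	}
	before := len(chat.Messages)
	chat.Messages = slices.DeleteFunc(chat.Messages, func(m *storedMessage) bool {
		return m.MessageId == messageId
	})
	return len(chat.Messages) != before
}
```

pkg/aiusechat/openaichat/openaichat-backend.go (2)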
**134-141: LGTM!** Good handling of client disconnects during streaming. Checking `sseHandler.Err()` after a decode error correctly identifies when the client has disconnected vs. an actual stream error. Returning `nil` for the error (instead of propagating the decode error) is appropriate since client disconnects are expected behavior, not server failures.
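A sketch of that decode-error pattern; the handler, decoder, and return shape here are assumptions for illustration:

```go
// Inside the SSE read loop: on a decode error, check whether the client's
// SSE connection itself failed before treating it as a stream error.
if err := decoder.Decode(&event); err != nil {
	if err == io.EOF {
		break // stream ended normally
	}
	if sseHandler.Err() != nil {
		// Client disconnected: report a canceled stop reason and a nil
		// error, since a disconnect is expected behavior.
		return &uctypes.WaveStopReason{Kind: uctypes.StopKindCanceled}, nil
	}
	return nil, fmt.Errorf("stream decode error: %w", err)
}
```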
**267-279: LGTM!** Clean helper function that follows the same pattern as `extractPartialGeminiMessage` in the Gemini backend. Returning `nil` for empty text prevents storing empty partial messages.
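The nil-for-empty-text shape is roughly this sketch (field names are assumptions):

```go
// extractPartialChatMessage builds a partial assistant message from the
// text accumulated so far, or returns nil if nothing was streamed, so an
// empty partial message is never stored.
func extractPartialChatMessage(accumulatedText string) *ChatRequestMessage {
	if accumulatedText == "" {
		return nil
	}
	return &ChatRequestMessage{
		Role:    "assistant",
		Content: accumulatedText,
	}
}
```

pkg/aiusechat/gemini/gemini-backend.go (2)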
**311-318: LGTM!** Consistent client-disconnect handling that matches the OpenAI Chat backend implementation. The pattern of checking `sseHandler.Err()` after a decode error properly distinguishes client disconnects from actual stream errors.
**485-499: LGTM!** Clean helper function for constructing partial Gemini messages. The structure correctly uses the Gemini-specific types (`GeminiChatMessage` with `GeminiMessagePart`) and follows the same nil-for-empty-text pattern as the OpenAI implementation.

pkg/aiusechat/gemini/gemini-convertmessage.go (2)
**11-11: LGTM on new imports.** The `slices` and `chatstore` imports are correctly added to support the new `UpdateToolUseData` and `RemoveToolUseCall` functions.

Also applies to: 16-16
**422-459: Shallow copy of the `Parts` slice shares underlying `GeminiMessagePart` structs.** On line 444, `copy(updatedMsg.Parts, chatMsg.Parts)` performs a shallow copy of the slice. Since `GeminiMessagePart` contains pointer fields (like `ToolUseData *uctypes.UIMessageDataToolUse`), modifying `updatedMsg.Parts[i].ToolUseData` on line 445 would affect the original `chatMsg.Parts[i]` if they shared the same underlying struct.

However, since `GeminiMessagePart` is a value type (not a pointer), `copy` actually copies each struct by value, so reassigning the `ToolUseData` pointer field in the copy is safe here. The implementation is correct.
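The value-versus-pointer distinction can be verified with a small standalone program (simplified stand-in types, not the repo's):

```go
package main

import "fmt"

type toolUseData struct{ Status string }

type part struct {
	Text        string
	ToolUseData *toolUseData
}

func main() {
	orig := []part{{Text: "a", ToolUseData: &toolUseData{Status: "pending"}}}

	// copy() copies each part struct by value, so the copy gets its own
	// ToolUseData pointer field (initially pointing at the same struct).
	dup := make([]part, len(orig))
	copy(dup, orig)

	// Reassigning the pointer field in the copy does NOT affect the original.
	dup[0].ToolUseData = &toolUseData{Status: "done"}
	fmt.Println(orig[0].ToolUseData.Status) // pending

	// Mutating through the still-shared pointer, by contrast, would.
	dup2 := make([]part, len(orig))
	copy(dup2, orig)
	dup2[0].ToolUseData.Status = "mutated"
	fmt.Println(orig[0].ToolUseData.Status) // mutated
}
```

aiprompts/conn-arch.md (1)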
**1-612: Comprehensive architecture documentation.** This is well-structured documentation covering the connection system architecture, including block controllers, SSH/WSL connections, shell process execution, and key design principles. The diagrams and code examples effectively illustrate the system design.
pkg/aiusechat/openai/openai-backend.go (7)
**391-398: LGTM on new `accumulatedText` field.** The `accumulatedText` field in `openaiBlockState` correctly tracks streamed text content for partial message recovery on client disconnect.
**442-461: Consistent `RemoveToolUseCall` implementation.** The function correctly follows the same pattern as the Gemini backend: it returns `nil` when the tool call is not found (idempotent behavior) and removes the message from the chatstore when found.
**648-660: Good client disconnect handling.** The code correctly detects SSE connection errors and captures partial text before returning a canceled stop reason. This prevents loss of streamed content when the user stops the AI response mid-stream.
**674-678: Partial text extraction on cancellation.** When `finalStopReason.Kind == StopKindCanceled` and no messages were returned, the code correctly extracts partial text from state. This ensures partial responses are preserved when the client disconnects.
**687-713: Well-implemented partial text extraction.** The `extractPartialTextFromState` function correctly:

- Iterates through block states to find text blocks
- Collects accumulated text content
- Returns `nil` if no text was accumulated (avoiding empty messages)
- Creates a properly structured `OpenAIChatMessage` with a new UUID
**727-733: Early disconnect detection in event handler.** Checking `sse.Err()` at the start of `handleOpenAIEvent` provides an early exit path when the client has disconnected, avoiding unnecessary processing of events that won't reach the client.
**838-838: Text accumulation for partial message recovery.** The delta text is correctly accumulated in `st.accumulatedText` alongside being sent to the SSE stream. This enables partial message recovery on disconnect.

frontend/app/store/tab-model.ts (1)
**10-10: LGTM on expanded exports.** Making these entities public (`activeTabIdAtom`, `TabModel`, `getTabModelByTabId`, `getActiveTabModel`, `TabModelContext`, `useTabModel`) aligns with the PR's goal of enabling tab-model context access from various parts of the application.

Also applies to: 12-12, 43-43, 52-52, 60-60, 62-62
frontend/app/app.tsx (2)
**6-6: LGTM on imports and tabId retrieval.** The import of `getTabModelByTabId` and `TabModelContext` from the updated tab-model store is correct. Based on learnings, `staticTabId` should be available at component mount.

Also applies to: 35-35
**41-43: No null/undefined guard needed for tabId.** `atoms.staticTabId` is typed as `Atom<string>` and always initialized with a valid string from `initOpts.tabId`. The type system prevents null/undefined values at compile time, and the current code is type-safe.

Likely an incorrect or invalid review comment.
pkg/aiusechat/usechat-backend.go (4)
**35-37: LGTM! Clear interface addition for tool call cleanup.** The new `RemoveToolUseCall` method is well-documented and provides a clear mechanism for cleaning up incomplete or canceled tool calls during stop operations. The interface design is consistent with existing methods.
**101-103: LGTM! Proper delegation to backend implementation.** The implementation correctly delegates to the OpenAI package's `RemoveToolUseCall` function, maintaining the separation of concerns between the interface layer and backend-specific logic.
**159-161: LGTM! Consistent backend delegation.** Correctly delegates to the OpenAI Chat backend implementation, following the same pattern as other backend wrappers.
**196-198: LGTM! Backend implementations follow established patterns.**

- anthropicBackend: Appropriately returns a not-implemented error, consistent with the `UpdateToolUseData` method (lines 192-194).
- geminiBackend: Correctly delegates to the Gemini package implementation.

Both implementations align with their respective backend capabilities and the overall architecture; a rough sketch of this interface-plus-delegation shape follows below.

Also applies to: 240-242
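The sketch below paraphrases the shape described in this review; the interface and package names are illustrative, not copied from the repo:

```go
// Each backend satisfies a common interface; wrapper methods delegate to
// the corresponding backend package's free function.
type chatBackend interface {
	RemoveToolUseCall(chatId string, callId string) error
}

type geminiBackend struct{}

func (b *geminiBackend) RemoveToolUseCall(chatId string, callId string) error {
	return gemini.RemoveToolUseCall(chatId, callId)
}

type anthropicBackend struct{}

func (b *anthropicBackend) RemoveToolUseCall(chatId string, callId string) error {
	return fmt.Errorf("RemoveToolUseCall not implemented for anthropic backend")
}
```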
frontend/app/workspace/workspace.tsx (2)
**72-72: LGTM! Appropriate conditional rendering.** Rendering the `AIPanel` only when an active tab exists (`tabId !== ""`) is a logical optimization that aligns with the conditional rendering of `TabContent` on lines 77-84. This prevents unnecessary panel rendering when there's no active tab context.
**81-81: Context is properly provided at application root level.** The removal of `TabModelContext.Provider` from around `TabContent` is correct. The context is established in `frontend/app/app.tsx` at the top level, wrapping the entire application tree via `<TabModelContext.Provider value={getTabModelByTabId(tabId)}>`. This properly supplies the correct `TabModel` for the current tab to all descendant components, including `Block`, `Term`, and `BlockFrame`, which call `useTabModel()` or `maybeUseTabModel()`. The component hierarchy ensures context flows correctly from the root provider through `AppInner` → `Workspace` → `TabContent` and its children.
**172-200: LGTM! Clean streaming-aware UI controls.** The conditional rendering provides clear visual feedback:
- During streaming: Green stop button with square icon (conventional stop indicator)
- Ready state: Accent-colored send button with paper-plane icon
- Tooltips accurately describe each action
- Disabled logic properly preserved from the original implementation
The integration with `model.stopResponse()` aligns with the new public API in `waveai-model.tsx`.
**255-255: Excellent improvement: Using SSE context for tool approval.** Changing from `context.Background()` to `sseHandler.Context()` is a critical improvement that ensures tool approval requests are canceled when the client disconnects. This prevents zombie approvals and aligns perfectly with the PR's goal of implementing AI "stop" functionality.

Benefits:
- Approval waits are canceled if the client disconnects mid-stream
- Prevents resource leaks from abandoned approval requests
- Provides proper cleanup when users stop AI responses
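Concretely, a context-aware approval wait is the standard select-on-Done pattern; a sketch with assumed channel and function names:

```go
// waitForApproval blocks until the user approves or denies the tool call,
// or until the SSE context is canceled (client disconnect, user pressing
// stop), in which case the wait unblocks immediately instead of leaking.
func waitForApproval(ctx context.Context, approvalCh <-chan bool) (bool, error) {
	select {
	case approved := <-approvalCh:
		return approved, nil
	case <-ctx.Done():
		return false, ctx.Err()
	}
}

// usage sketch: approved, err := waitForApproval(sseHandler.Context(), ch)
```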
**324-385: Well-designed refactor with proper disconnect handling.** The refactored `processAllToolCalls` function introduces several improvements:

**Upfront Tool Registration (lines 330-349)**

- Emits all `data-tooluse` events immediately for better UI responsiveness
- Registers approvals upfront when needed
- Ensures `ToolUseData` is initialized for all calls

**Early Exit on Client Disconnect (lines 354-357)**

- Checks `sseHandler.Err()` before processing each tool
- Stops processing if the client has disconnected
- Prevents wasted work on tool execution after disconnect

**Cleanup and Filtering (lines 362-384)**

- Properly unregisters approvals
- Removes incomplete/canceled tool calls via `RemoveToolUseCall`
- Posts only filtered (completed) results to the chat store
- Prevents partial tool results from polluting the chat

This design aligns perfectly with the PR's goal of implementing proper stop behavior during streaming; a rough sketch of the control flow appears below.
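A sketch of that early-exit-plus-cleanup flow; the helper names (`runToolCall`, `unregisterApproval`) are placeholders, not the repo's:

```go
// Run tool calls one at a time, bailing out once the client disconnects;
// afterwards, clean up approvals and drop calls that never completed.
completed := make(map[string]bool)
for _, toolCall := range stopReason.ToolCalls {
	if sseHandler.Err() != nil {
		break // client gone: stop doing work nobody will see
	}
	if err := runToolCall(ctx, toolCall); err == nil {
		completed[toolCall.ID] = true
	}
}
for _, toolCall := range stopReason.ToolCalls {
	unregisterApproval(toolCall.ID)
	if !completed[toolCall.ID] {
		// Remove incomplete/canceled calls so partial results don't
		// pollute the stored chat.
		backend.RemoveToolUseCall(chatId, toolCall.ID)
	}
}
```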
**440-442: LGTM! Useful debug logging.** The added `stopReason` logging provides valuable debugging information when troubleshooting AI stop behavior. Using `logutil.DevPrintf` keeps it as debug-level logging that won't clutter production logs.
**479-479: LGTM! Call site updated for renamed function.** The call site correctly uses the renamed `processAllToolCalls` function, maintaining consistency with the rename.

frontend/app/aipanel/waveai-model.tsx (3)
**327-332: LGTM! Clean utility method for chat reloading.** The `reloadChatFromBackend` method provides a reusable way to fetch chat data and update the empty state. Good factoring that's used by both `stopResponse()` and `loadInitialChat()`.
**334-348: Clarify the 500ms delay after stopping.** The `stopResponse()` method implements the stop functionality well, but the 500ms delay on line 336 appears to be a magic number:

```typescript
await new Promise((resolve) => setTimeout(resolve, 500));
```

Question: What is the purpose of this delay?

- Is it to allow the backend to finish writing partial messages?
- Is it a workaround for race conditions?
- Could this be made more explicit with a named constant or comment?

Consider:

```typescript
// Wait for backend to persist partial response after stop
const STOP_SETTLE_TIME_MS = 500;
await new Promise((resolve) => setTimeout(resolve, STOP_SETTLE_TIME_MS));
```

Otherwise, the implementation correctly:
- Stops the streaming response
- Reloads the chat to get the partial results
- Updates the UI with the refreshed messages
- Handles errors appropriately
**474-474: LGTM! Good code reuse.** Refactoring to use the shared `reloadChatFromBackend` method reduces code duplication while maintaining the same functionality.
```diff
 import { atoms, getSettingsKeyAtom } from "@/app/store/global";
 import { globalStore } from "@/app/store/jotaiStore";
-import { useTabModel } from "@/app/store/tab-model";
+import { maybeUseTabModel } from "@/app/store/tab-model";
```
**Fix return type annotation and add null-safety check for tabModel.**

The `maybeUseTabModel()` function has an incorrect return type annotation. It's declared to return `TabModel` (line 9 import), but the actual implementation returns `useContext(TabModelContext)`, where the context is typed as `TabModel | undefined` with an `undefined` default value.

This creates a type safety issue at line 282:

```typescript
body.tabid = tabModel.tabId;
```

If the component is used outside a `TabModelProvider`, `tabModel` will be `undefined` at runtime, causing a crash.

Required fixes:

- Update the `maybeUseTabModel()` return type to `TabModel | undefined` (or use `useTabModel()`, which properly throws)
- Add a null check before accessing `tabModel.tabId` on line 282, or use optional chaining: `body.tabid = tabModel?.tabId;`
```typescript
export function maybeUseTabModel(): TabModel {
    return useContext(TabModelContext);
}
```
**Return type should include undefined.**

`useContext(TabModelContext)` returns `TabModel | undefined` (since the context default is `undefined`), but `maybeUseTabModel` declares its return type as `TabModel`. This can cause type errors for callers who correctly expect the function might return `undefined`.
🔎 Proposed fix

```diff
-export function maybeUseTabModel(): TabModel {
+export function maybeUseTabModel(): TabModel | undefined {
     return useContext(TabModelContext);
 }
```
```go
func RemoveToolUseCall(chatId string, callId string) error {
	chat := chatstore.DefaultChatStore.Get(chatId)
	if chat == nil {
		return fmt.Errorf("chat not found: %s", chatId)
	}
	for _, genMsg := range chat.NativeMessages {
		chatMsg, ok := genMsg.(*StoredChatMessage)
		if !ok {
			continue
		}
		idx := chatMsg.Message.FindToolCallIndex(callId)
		if idx == -1 {
			continue
		}
		updatedMsg := chatMsg.Copy()
		updatedMsg.Message.ToolCalls = slices.Delete(updatedMsg.Message.ToolCalls, idx, idx+1)
		if len(updatedMsg.Message.ToolCalls) == 0 {
			chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
		} else {
			aiOpts := &uctypes.AIOptsType{
				APIType:    chat.APIType,
				Model:      chat.Model,
				APIVersion: chat.APIVersion,
			}
			chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
		}
		return nil
	}
	return nil
}
```
**Missing error handling for PostMessage.**

The function ignores the error returned by `PostMessage` on line 388. While `RemoveMessage` returns a bool (line 381), `PostMessage` can return actual errors (e.g., API type/model mismatch). Consider propagating this error to the caller.
🔎 Proposed fix
```diff
 if len(updatedMsg.Message.ToolCalls) == 0 {
 	chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
 } else {
 	aiOpts := &uctypes.AIOptsType{
 		APIType:    chat.APIType,
 		Model:      chat.Model,
 		APIVersion: chat.APIVersion,
 	}
-	chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
+	if err := chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg); err != nil {
+		return err
+	}
 }
 return nil
```
return nil🤖 Prompt for AI Agents
In pkg/aiusechat/openaichat/openaichat-convertmessage.go around lines 363 to
394, the call to chatstore.DefaultChatStore.PostMessage ignores its returned
error; update the code to capture the error returned by PostMessage and return
it to the caller (e.g., err := PostMessage(...); if err != nil { return err }),
and also handle the boolean result of RemoveMessage by returning an error if
removal failed so the caller receives failure details instead of silently
proceeding.
Actionable comments posted: 0
🧹 Nitpick comments (1)
pkg/aiusechat/usechat.go (1)
**324-328: Defer inside loop will not behave as expected.** The `defer activeToolMap.Delete(toolCall.ID)` on line 327 is inside a `for` loop. All deferred calls will execute when `processAllToolCalls` returns, not at the end of each iteration. This means all tool IDs remain in the map until the function completes, which is likely the intended behavior for tracking active tools during processing.

However, this could be clearer. If the intent is to keep tools marked as active during the entire function execution, consider moving the cleanup outside the loop:

🔎 Alternative: explicit cleanup at end of function

```diff
-	for _, toolCall := range stopReason.ToolCalls {
-		activeToolMap.Set(toolCall.ID, true)
-		defer activeToolMap.Delete(toolCall.ID)
-	}
+	for _, toolCall := range stopReason.ToolCalls {
+		activeToolMap.Set(toolCall.ID, true)
+	}
+	defer func() {
+		for _, toolCall := range stopReason.ToolCalls {
+			activeToolMap.Delete(toolCall.ID)
+		}
+	}()
```
📜 Review details
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)

- pkg/aiusechat/gemini/gemini-convertmessage.go (2 hunks)
- pkg/aiusechat/openaichat/openaichat-convertmessage.go (2 hunks)
- pkg/aiusechat/usechat.go (6 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
Applied to files:
- pkg/aiusechat/openaichat/openaichat-convertmessage.go
- pkg/aiusechat/usechat.go
🧬 Code graph analysis (2)
pkg/aiusechat/gemini/gemini-convertmessage.go (4)
pkg/aiusechat/openai/openai-backend.go (2)
- UpdateToolUseData (411-440)
- RemoveToolUseCall (442-461)
pkg/aiusechat/uctypes/uctypes.go (2)
- UIMessageDataToolUse (210-221)
- AIOptsType (270-284)
pkg/aiusechat/chatstore/chatstore.go (1)
- DefaultChatStore (19-21)
pkg/aiusechat/gemini/gemini-types.go (2)
- GeminiChatMessage (15-20)
- GeminiMessagePart (43-66)

pkg/aiusechat/openaichat/openaichat-convertmessage.go (5)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)
- RemoveToolUseCall (461-508)
pkg/aiusechat/openai/openai-backend.go (1)
- RemoveToolUseCall (442-461)
pkg/aiusechat/chatstore/chatstore.go (1)
- DefaultChatStore (19-21)
pkg/aiusechat/openaichat/openaichat-types.go (1)
- StoredChatMessage (118-122)
pkg/aiusechat/uctypes/uctypes.go (1)
- AIOptsType (270-284)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Analyze (go)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Build for TestDriver.ai
🔇 Additional comments (7)
pkg/aiusechat/openaichat/openaichat-convertmessage.go (1)
**363-396: LGTM! Implementation is consistent with the Gemini backend.** The function correctly:

- Validates chat existence
- Finds and removes the tool call using `slices.Delete`
- Handles the error from `PostMessage` (addressing the past review comment)
- Returns `nil` when the tool call is not found, which is consistent with the `openai/openai-backend.go` version

pkg/aiusechat/gemini/gemini-convertmessage.go (2)
**422-459: LGTM! `UpdateToolUseData` implementation is correct.** The function properly:

- Creates a new message with a copied parts slice to avoid mutating the original
- Updates only the specific part's `ToolUseData`
- Handles `PostMessage` errors correctly
- Returns a descriptive error if the tool call is not found
**461-508: LGTM! `RemoveToolUseCall` implementation is correct and consistent with the openaichat backend.** The function correctly:

- Uses `slices.Clone` before `slices.Delete` to avoid mutating the original slice
- Handles the empty-parts case by removing the entire message
- Properly handles `PostMessage` errors
- Returns `nil` when the tool call is not found (consistent with the OpenAI backends)

pkg/aiusechat/usechat.go (4)
**255-255: Good improvement: Using SSE handler context for cancellation.** Switching from `context.Background()` to `sseHandler.Context()` ensures that tool approval waits are properly canceled when the client disconnects, preventing potential resource leaks and zombie goroutines.
**354-360: Good addition: Early exit on SSE handler error.** This correctly stops processing tool calls when the client disconnects or an error occurs, preventing unnecessary work and potential issues with sending data to a closed connection.
**362-386: Well-structured cleanup logic for handling partial processing.** The cleanup properly:

- Unregisters all tool approvals regardless of processing state
- Removes tool calls that weren't processed or were canceled
- Only posts results for tools that completed successfully

This ensures the chat state remains consistent when processing is interrupted.
**442-444: Good addition of debug logging and error handling.** The `stopReason` logging helps with debugging, and the error logging for `PostMessage` failures ensures issues are visible without breaking the flow.

Also applies to: 467-469
No description provided.