Conversation

@sawka (Member) commented Dec 19, 2025

No description provided.

coderabbitai bot (Contributor) commented Dec 19, 2025

Warning

Rate limit exceeded

@sawka has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 8 minutes and 8 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between f51674b and 6db7c33.

📒 Files selected for processing (1)
  • pkg/aiusechat/usechat.go (7 hunks)

Walkthrough

This pull request adds comprehensive connection-architecture docs and frontend connection docs, plus multiple code changes: it refactors tab-model exports and introduces maybeUseTabModel; conditions AIPanel rendering on the active tab; updates the AIPanel input to show a Stop Response button during streaming; exposes new public fields and adds reloadChatFromBackend and stopResponse on WaveAIModel; adds ChatStore.RemoveMessage; adds client-disconnect partial-message handling across the Gemini, OpenAI, and OpenAI Chat streaming paths; introduces tool-use removal/update functions and refactors tool-call processing; and exposes SSEHandlerCh.Context().

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Streaming client-disconnect and partial-message logic across gemini-backend.go, openai-backend.go, and openaichat-backend.go — verify consistency and correctness of accumulated-text capture and returned partial messages.
  • Tool-use lifecycle changes: UpdateToolUseData, RemoveToolUseCall implementations, usechat backend interface change, and processAllToolCalls refactor — ensure idempotence, cleanup, and correct posting behavior.
  • WaveAIModel API expansion and AIPanel UI changes — review state management, concurrency, and UI callbacks (stopResponse, reloadChatFromBackend).
  • Tab-model context and maybeUseTabModel adoption plus workspace/app provider move — ensure no places assume non-null context and lifecycle implications.
  • ChatStore.RemoveMessage and slices usage — confirm locking and slice mutations are safe.
  • SSEHandlerCh.Context() exposure — confirm callers use the context correctly and cancellation semantics align with SSE handling.

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): No pull request description was provided, so there is nothing to check against the changeset. Resolution: add a PR description explaining the purpose, scope, and impact of the AI 'stop' implementation changes across frontends and backends.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 42.31%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The PR title directly and clearly describes the main change: implementing AI 'stop' functionality across the client, OpenAI, and Gemini backends.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)

461-506: Consistent implementation with OpenAI backend.

The RemoveToolUseCall function correctly:

  1. Uses slices.Clone before slices.Delete to avoid mutating the original slice
  2. Removes the entire message if no parts remain
  3. Returns nil (not an error) when the tool call is not found, matching the OpenAI backend behavior

One minor observation: line 500 ignores the error from PostMessage. Consider logging or returning the error for debugging purposes.

🔎 Optional: Handle PostMessage error
 		if len(updatedMsg.Parts) == 0 {
 			chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
 		} else {
 			aiOpts := &uctypes.AIOptsType{
 				APIType:    chat.APIType,
 				Model:      chat.Model,
 				APIVersion: chat.APIVersion,
 			}
-			chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
+			if err := chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg); err != nil {
+				return err
+			}
 		}
aiprompts/conn-arch.md (1)

9-48: Add language identifier to fenced code block.

Per static analysis (markdownlint MD040), fenced code blocks should have a language specified. This ASCII diagram block and similar blocks throughout the file (lines 90, 137, 199, 326, 426) could use a language identifier like text or plaintext for consistency.

🔎 Example fix
-```
+```text
 ┌─────────────────────────────────────────────────────────────────┐
 │                    Block Controllers                             │
aiprompts/fe-conn-arch.md (1)

1-1002: Excellent comprehensive architecture documentation.

This documentation provides thorough coverage of Wave Terminal's frontend connection architecture, including:

  • Clear architecture diagrams and data flow patterns
  • Detailed component responsibilities and interactions
  • Code examples that align with the actual codebase
  • Connection lifecycle and error handling patterns
  • Best practices for view model and RPC usage

The documentation will be valuable for developers working with the connection system.

Optional: Address markdown linting suggestions

The static analysis tools suggest minor markdown style improvements:

  1. Add language identifiers to plain code blocks (lines 9, 627, 655, 689) - helps with syntax highlighting
  2. Consider using proper headings instead of bold text for section titles (multiple locations) - improves document structure and navigation

These are purely stylistic and don't affect the technical content.

frontend/app/aipanel/waveai-model.tsx (1)

45-50: Field visibility changes enable new public API.

Making these fields public allows the new stopResponse() method to access the underlying useChat functions. While this exposes internal state, it's a pragmatic approach for implementing the stop functionality.

Consideration: If more external access patterns emerge in the future, consider adding getter methods or creating a dedicated interface for external interactions to maintain better encapsulation.

📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3084e9d and 91cc8f0.

📒 Files selected for processing (17)
  • aiprompts/conn-arch.md (1 hunks)
  • aiprompts/fe-conn-arch.md (1 hunks)
  • frontend/app/aipanel/aipanel.tsx (2 hunks)
  • frontend/app/aipanel/aipanelinput.tsx (1 hunks)
  • frontend/app/aipanel/waveai-model.tsx (3 hunks)
  • frontend/app/app.tsx (2 hunks)
  • frontend/app/store/tab-model.ts (3 hunks)
  • frontend/app/workspace/workspace.tsx (2 hunks)
  • pkg/aiusechat/chatstore/chatstore.go (2 hunks)
  • pkg/aiusechat/gemini/gemini-backend.go (2 hunks)
  • pkg/aiusechat/gemini/gemini-convertmessage.go (2 hunks)
  • pkg/aiusechat/openai/openai-backend.go (7 hunks)
  • pkg/aiusechat/openaichat/openaichat-backend.go (2 hunks)
  • pkg/aiusechat/openaichat/openaichat-convertmessage.go (2 hunks)
  • pkg/aiusechat/usechat-backend.go (5 hunks)
  • pkg/aiusechat/usechat.go (5 hunks)
  • pkg/web/sse/ssehandler.go (1 hunks)
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/gemini/gemini-backend.go
  • pkg/aiusechat/usechat-backend.go
  • pkg/aiusechat/openaichat/openaichat-convertmessage.go
  • pkg/aiusechat/usechat.go
📚 Learning: 2025-10-21T05:09:26.916Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2465
File: frontend/app/onboarding/onboarding-upgrade.tsx:13-21
Timestamp: 2025-10-21T05:09:26.916Z
Learning: In the waveterm codebase, clientData is loaded and awaited in wave.ts before React runs, ensuring it is always available when components mount. This means atoms.client will have data on first render.

Applied to files:

  • frontend/app/app.tsx
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.

Applied to files:

  • frontend/app/aipanel/aipanelinput.tsx
🧬 Code graph analysis (7)
frontend/app/aipanel/aipanel.tsx (1)
frontend/app/store/tab-model.ts (1)
  • maybeUseTabModel (70-72)
frontend/app/workspace/workspace.tsx (1)
frontend/app/tab/tabcontent.tsx (1)
  • TabContent (76-76)
pkg/aiusechat/gemini/gemini-backend.go (2)
pkg/aiusechat/uctypes/uctypes.go (2)
  • WaveStopReason (255-261)
  • StopKindCanceled (241-241)
pkg/aiusechat/gemini/gemini-types.go (2)
  • GeminiChatMessage (15-20)
  • GeminiMessagePart (43-66)
pkg/aiusechat/usechat-backend.go (3)
pkg/aiusechat/openai/openai-backend.go (1)
  • RemoveToolUseCall (442-461)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)
  • RemoveToolUseCall (461-506)
pkg/aiusechat/openaichat/openaichat-convertmessage.go (1)
  • RemoveToolUseCall (363-394)
pkg/aiusechat/openaichat/openaichat-backend.go (2)
pkg/aiusechat/uctypes/uctypes.go (2)
  • WaveStopReason (255-261)
  • StopKindCanceled (241-241)
pkg/aiusechat/openaichat/openaichat-types.go (2)
  • StoredChatMessage (118-122)
  • ChatRequestMessage (23-29)
pkg/aiusechat/gemini/gemini-convertmessage.go (5)
pkg/aiusechat/openai/openai-backend.go (2)
  • UpdateToolUseData (411-440)
  • RemoveToolUseCall (442-461)
pkg/aiusechat/uctypes/uctypes.go (2)
  • UIMessageDataToolUse (210-221)
  • AIOptsType (270-284)
pkg/aiusechat/chatstore/chatstore.go (1)
  • DefaultChatStore (19-21)
pkg/aiusechat/gemini/gemini-types.go (2)
  • GeminiChatMessage (15-20)
  • GeminiMessagePart (43-66)
pkg/wcloud/wcloud.go (1)
  • APIVersion (37-37)
frontend/app/aipanel/aipanelinput.tsx (2)
frontend/app/element/tooltip.tsx (1)
  • Tooltip (143-172)
frontend/util/util.ts (1)
  • cn (509-509)
🪛 Biome (2.1.2)
frontend/app/aipanel/waveai-model.tsx

[error] 344-344: This hook is being called conditionally, but all hooks must be called in the exact same order in every component render.

For React to preserve state between calls, hooks needs to be called unconditionally and always in the same order.
See https://reactjs.org/docs/hooks-rules.html#only-call-hooks-at-the-top-level

(lint/correctness/useHookAtTopLevel)

🪛 LanguageTool
aiprompts/fe-conn-arch.md

[style] ~63-~63: The double modal “needed Used” is nonstandard (only accepted in certain dialects). Consider “to be Used”.
Context: ...atom initialized to default if needed - Used by view models to track their connectio...

(NEEDS_FIXED)


[grammar] ~976-~976: Use a hyphen to join words.
Context: ...2000 } // 2 seconds 3. **Batch Related Operations:** typescript ...

(QB_NEW_EN_HYPHEN)

🪛 markdownlint-cli2 (0.18.1)
aiprompts/fe-conn-arch.md

9-9: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


45-45: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


55-55: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


65-65: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


78-78: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


93-93: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


132-132: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


214-214: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


457-457: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


465-465: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


473-473: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


481-481: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


493-493: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


504-504: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


515-515: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


529-529: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


545-545: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


556-556: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


569-569: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


580-580: Emphasis used instead of a heading

(MD036, no-emphasis-as-heading)


627-627: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


655-655: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


689-689: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

aiprompts/conn-arch.md

9-9: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


90-90: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


137-137: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


199-199: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


326-326: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


426-426: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
  • GitHub Check: Build for TestDriver.ai
🔇 Additional comments (33)
pkg/web/sse/ssehandler.go (1)

92-94: LGTM!

Clean getter method that exposes the internal context for cancellation propagation. Since ctx is set once at construction via MakeSSEHandlerCh and never modified, this is safe to expose without synchronization.
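
For illustration, a minimal sketch of this shape, assuming the handler stores its context in an unexported ctx field set by MakeSSEHandlerCh (the field layout is an assumption):

package sse

import "context"

// Sketch only: all fields except ctx are omitted. ctx is assumed to be
// assigned once in MakeSSEHandlerCh and never reassigned, which is why
// the getter needs no synchronization.
type SSEHandlerCh struct {
	ctx context.Context
}

// Context exposes the handler's context so callers can observe
// client-disconnect cancellation.
func (h *SSEHandlerCh) Context() context.Context {
	return h.ctx
}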

pkg/aiusechat/chatstore/chatstore.go (1)

114-129: LGTM!

Well-implemented thread-safe message removal. The use of slices.DeleteFunc is appropriate, and comparing lengths before/after is a clean way to determine if a removal occurred. The lock is correctly held for the duration of the operation.
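
For illustration, a hedged sketch of the pattern being praised here; the ChatStore layout below is an assumption, not the real type:

package chatstore

import (
	"slices"
	"sync"
)

type ChatMessage struct {
	MessageId string
}

// Assumed layout: a mutex guarding a map of chatId -> messages.
type ChatStore struct {
	lock  sync.Mutex
	chats map[string][]ChatMessage
}

// RemoveMessage deletes the message with the given id under the lock and
// reports whether anything was removed by comparing lengths before/after.
func (s *ChatStore) RemoveMessage(chatId string, messageId string) bool {
	s.lock.Lock()
	defer s.lock.Unlock()
	msgs := s.chats[chatId]
	before := len(msgs)
	msgs = slices.DeleteFunc(msgs, func(m ChatMessage) bool {
		return m.MessageId == messageId
	})
	s.chats[chatId] = msgs
	return len(msgs) != before
}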

pkg/aiusechat/openaichat/openaichat-backend.go (2)

134-141: LGTM!

Good handling of client disconnects during streaming. Checking sseHandler.Err() after a decode error correctly identifies when the client has disconnected vs. an actual stream error. Returning nil for the error (instead of propagating the decode error) is appropriate since client disconnects are expected behavior, not server failures.
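
For illustration, a hedged sketch of the decode-loop check described here (the decoder and event types are stand-ins for the backend's actual locals):

package openaichat

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
)

type sseErrer interface{ Err() error }

// streamLoop distinguishes an expected client disconnect from a real
// stream failure after a decode error.
func streamLoop(decoder *json.Decoder, sseHandler sseErrer) error {
	for {
		var event map[string]any
		if err := decoder.Decode(&event); err != nil {
			if errors.Is(err, io.EOF) {
				return nil // stream ended normally
			}
			if sseHandler.Err() != nil {
				// Client went away mid-stream: an expected stop, not a
				// server failure, so swallow the decode error.
				return nil
			}
			return fmt.Errorf("stream decode error: %w", err)
		}
		// ... handle event, forward deltas over SSE ...
	}
}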


267-279: LGTM!

Clean helper function that follows the same pattern as extractPartialGeminiMessage in the Gemini backend. Returning nil for empty text prevents storing empty partial messages.

pkg/aiusechat/gemini/gemini-backend.go (2)

311-318: LGTM!

Consistent client-disconnect handling that matches the OpenAI Chat backend implementation. The pattern of checking sseHandler.Err() after a decode error properly distinguishes client disconnects from actual stream errors.


485-499: LGTM!

Clean helper function for constructing partial Gemini messages. The structure correctly uses the Gemini-specific types (GeminiChatMessage with GeminiMessagePart) and follows the same nil-for-empty-text pattern as the OpenAI implementation.

pkg/aiusechat/gemini/gemini-convertmessage.go (2)

11-11: LGTM on new imports.

The slices and chatstore imports are correctly added to support the new UpdateToolUseData and RemoveToolUseCall functions.

Also applies to: 16-16


422-459: Shallow copy of Parts slice is safe thanks to value semantics.

On line 444, copy(updatedMsg.Parts, chatMsg.Parts) performs a shallow copy of the slice. Since GeminiMessagePart contains pointer fields (like ToolUseData *uctypes.UIMessageDataToolUse), modifying updatedMsg.Parts[i].ToolUseData on line 445 could affect the original chatMsg.Parts[i] if the two slices shared the same underlying structs.

However, GeminiMessagePart is a value type (not a pointer), so copy duplicates each struct; reassigning the ToolUseData pointer field in the copy therefore cannot affect the original. The implementation is correct.
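
A self-contained demonstration of the distinction (illustrative stand-in types, not the real Gemini ones): reassigning a pointer field in a copied struct is safe, while mutating through a still-shared pointer is not.

package main

import "fmt"

type ToolUse struct{ Name string }

type Part struct {
	Text        string
	ToolUseData *ToolUse // pointer field, like GeminiMessagePart.ToolUseData
}

func main() {
	orig := []Part{{Text: "a", ToolUseData: &ToolUse{Name: "old"}}}

	// copy() duplicates each Part struct by value.
	cloned := make([]Part, len(orig))
	copy(cloned, orig)

	// Reassigning the pointer field in the copy leaves the original intact.
	cloned[0].ToolUseData = &ToolUse{Name: "new"}
	fmt.Println(orig[0].ToolUseData.Name) // old

	// Mutating through the still-shared pointer, by contrast, is visible
	// in both slices.
	cloned2 := make([]Part, len(orig))
	copy(cloned2, orig)
	cloned2[0].ToolUseData.Name = "mutated"
	fmt.Println(orig[0].ToolUseData.Name) // mutated
}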

aiprompts/conn-arch.md (1)

1-612: Comprehensive architecture documentation.

This is well-structured documentation covering the connection system architecture, including block controllers, SSH/WSL connections, shell process execution, and key design principles. The diagrams and code examples effectively illustrate the system design.

pkg/aiusechat/openai/openai-backend.go (7)

391-398: LGTM on new accumulatedText field.

The accumulatedText field in openaiBlockState correctly tracks streamed text content for partial message recovery on client disconnect.


442-461: Consistent RemoveToolUseCall implementation.

The function correctly follows the same pattern as the Gemini backend - returns nil when tool call is not found (idempotent behavior), and removes the message from chatstore when found.


648-660: Good client disconnect handling.

The code correctly detects SSE connection errors and captures partial text before returning a canceled stop reason. This prevents loss of streamed content when the user stops the AI response mid-stream.


674-678: Partial text extraction on cancellation.

When finalStopReason.Kind == StopKindCanceled and no messages were returned, the code correctly extracts partial text from state. This ensures partial responses are preserved when the client disconnects.


687-713: Well-implemented partial text extraction.

The extractPartialTextFromState function correctly:

  1. Iterates through block states to find text blocks
  2. Collects accumulated text content
  3. Returns nil if no text was accumulated (avoiding empty messages)
  4. Creates a properly structured OpenAIChatMessage with a new UUID
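
For illustration, a hedged sketch of that extraction shape; the block-state and message types below are assumptions based on the description, not the real openaiBlockState/OpenAIChatMessage:

package openai

import (
	"strings"

	"github.com/google/uuid"
)

type blockState struct {
	kind            string // "text", "tool_use", ...
	accumulatedText string
}

type chatMessage struct {
	MessageId string
	Role      string
	Text      string
}

// extractPartialText concatenates accumulated text from text blocks and
// returns nil when nothing was streamed, so no empty message is stored.
func extractPartialText(states []*blockState) *chatMessage {
	var sb strings.Builder
	for _, st := range states {
		if st.kind == "text" {
			sb.WriteString(st.accumulatedText)
		}
	}
	if sb.Len() == 0 {
		return nil
	}
	return &chatMessage{
		MessageId: uuid.New().String(),
		Role:      "assistant",
		Text:      sb.String(),
	}
}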

727-733: Early disconnect detection in event handler.

Checking sse.Err() at the start of handleOpenAIEvent provides an early exit path when the client has disconnected, avoiding unnecessary processing of events that won't reach the client.


838-838: Text accumulation for partial message recovery.

The delta text is correctly accumulated in st.accumulatedText alongside being sent to the SSE stream. This enables partial message recovery on disconnect.

frontend/app/store/tab-model.ts (1)

10-10: LGTM on expanded exports.

Making these entities public (activeTabIdAtom, TabModel, getTabModelByTabId, getActiveTabModel, TabModelContext, useTabModel) aligns with the PR's goal of enabling tab-model context access from various parts of the application.

Also applies to: 12-12, 43-43, 52-52, 60-60, 62-62

frontend/app/app.tsx (2)

6-6: LGTM on imports and tabId retrieval.

The import of getTabModelByTabId and TabModelContext from the updated tab-model store is correct. Based on learnings, staticTabId should be available at component mount.

Also applies to: 35-35


41-43: No null/undefined guard needed for tabId.

atoms.staticTabId is typed as Atom<string> and always initialized with a valid string from initOpts.tabId. The type system prevents null/undefined values at compile time, and the current code is type-safe.

Likely an incorrect or invalid review comment.

pkg/aiusechat/usechat-backend.go (4)

35-37: LGTM! Clear interface addition for tool call cleanup.

The new RemoveToolUseCall method is well-documented and provides a clear mechanism for cleaning up incomplete or canceled tool calls during stop operations. The interface design is consistent with existing methods.
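
For reference, the shape of the addition; the method signature is taken from the implementations quoted later in this review, while the interface name here is an assumption:

type UseChatBackend interface {
	// ... existing methods ...

	// RemoveToolUseCall removes an incomplete or canceled tool call from
	// the stored chat so a stop leaves no dangling tool-use entries.
	RemoveToolUseCall(chatId string, callId string) error
}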


101-103: LGTM! Proper delegation to backend implementation.

The implementation correctly delegates to the OpenAI package's RemoveToolUseCall function, maintaining the separation of concerns between the interface layer and backend-specific logic.


159-161: LGTM! Consistent backend delegation.

Correctly delegates to the OpenAI Chat backend implementation, following the same pattern as other backend wrappers.


196-198: LGTM! Backend implementations follow established patterns.

  • anthropicBackend: Appropriately returns a not-implemented error, consistent with the UpdateToolUseData method (lines 192-194).
  • geminiBackend: Correctly delegates to the Gemini package implementation.

Both implementations align with their respective backend capabilities and the overall architecture.

Also applies to: 240-242

frontend/app/workspace/workspace.tsx (2)

72-72: LGTM! Appropriate conditional rendering.

Rendering the AIPanel only when an active tab exists (tabId !== "") is a logical optimization that aligns with the conditional rendering of TabContent on lines 77-84. This prevents unnecessary panel rendering when there's no active tab context.


81-81: Context is properly provided at application root level.

The removal of TabModelContext.Provider from around TabContent is correct. The context is established in frontend/app/app.tsx at the top level, wrapping the entire application tree via <TabModelContext.Provider value={getTabModelByTabId(tabId)}>. This properly supplies the correct TabModel for the current tab to all descendant components, including Block, Term, and BlockFrame which call useTabModel() or maybeUseTabModel(). The component hierarchy ensures context flows correctly from the root provider through AppInnerWorkspaceTabContent and its children.

frontend/app/aipanel/aipanelinput.tsx (1)

172-200: LGTM! Clean streaming-aware UI controls.

The conditional rendering provides clear visual feedback:

  • During streaming: Green stop button with square icon (conventional stop indicator)
  • Ready state: Accent-colored send button with paper-plane icon
  • Tooltips accurately describe each action
  • Disabled logic properly preserved from the original implementation

The integration with model.stopResponse() aligns with the new public API in waveai-model.tsx.

pkg/aiusechat/usechat.go (4)

255-255: Excellent improvement: Using SSE context for tool approval.

Changing from context.Background() to sseHandler.Context() is a critical improvement that ensures tool approval requests are canceled when the client disconnects. This prevents zombie approvals and aligns perfectly with the PR's goal of implementing AI "stop" functionality.

Benefits:

  • Approval waits are canceled if the client disconnects mid-stream
  • Prevents resource leaks from abandoned approval requests
  • Provides proper cleanup when users stop AI responses
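
A hedged sketch of why the context switch matters; waitForApproval and the channel are illustrative, not the real API:

package aiusechat

import "context"

// waitForApproval blocks until the user decides, or until ctx is
// canceled. Tied to sseHandler.Context(), the wait unblocks the moment
// the client disconnects; with context.Background() it could block
// forever.
func waitForApproval(ctx context.Context, approvalCh <-chan bool) (bool, error) {
	select {
	case approved := <-approvalCh:
		return approved, nil
	case <-ctx.Done():
		return false, ctx.Err()
	}
}

A caller would now pass sseHandler.Context() rather than context.Background() as the first argument.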

324-385: Well-designed refactor with proper disconnect handling.

The refactored processAllToolCalls function introduces several improvements:

Upfront Tool Registration (lines 330-349)

  • Emits all data-tooluse events immediately for better UI responsiveness
  • Registers approvals upfront when needed
  • Ensures ToolUseData is initialized for all calls

Early Exit on Client Disconnect (lines 354-357)

  • Checks sseHandler.Err() before processing each tool
  • Stops processing if client has disconnected
  • Prevents wasted work on tool execution after disconnect

Cleanup and Filtering (lines 362-384)

  • Properly unregisters approvals
  • Removes incomplete/canceled tool calls via RemoveToolUseCall
  • Posts only filtered (completed) results to chat store
  • Prevents partial tool results from polluting the chat

This design aligns perfectly with the PR's goal of implementing proper stop behavior during streaming.
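
For illustration, a hedged sketch of the early-exit-plus-cleanup shape described above; every name here is a stand-in for the real ones in usechat.go:

package aiusechat

type ToolCall struct{ ID string }

type Backend interface {
	RemoveToolUseCall(chatId string, callId string) error
}

// processToolCalls runs tools until the client disconnects, then removes
// any calls that never completed so partial results don't pollute the
// stored chat.
func processToolCalls(
	chatId string,
	toolCalls []ToolCall,
	clientErr func() error, // e.g. sseHandler.Err
	runTool func(ToolCall) error,
	backend Backend,
) {
	completed := make(map[string]bool)
	for _, tc := range toolCalls {
		if clientErr() != nil {
			break // client disconnected; skip remaining tools
		}
		if err := runTool(tc); err == nil {
			completed[tc.ID] = true
		}
	}
	for _, tc := range toolCalls {
		if !completed[tc.ID] {
			_ = backend.RemoveToolUseCall(chatId, tc.ID)
		}
	}
}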


440-442: LGTM! Useful debug logging.

The added stopReason logging provides valuable debugging information when troubleshooting AI stop behavior. Using logutil.DevPrintf keeps it as debug-level logging that won't clutter production logs.


479-479: LGTM! Call site updated for renamed function.

The call site correctly uses the renamed processAllToolCalls function, maintaining consistency with the function rename.

frontend/app/aipanel/waveai-model.tsx (3)

327-332: LGTM! Clean utility method for chat reloading.

The reloadChatFromBackend method provides a reusable way to fetch chat data and update the empty state. Good code factoring that's used by both stopResponse() and loadInitialChat().


334-348: Clarify the 500ms delay after stopping.

The stopResponse() method implements the stop functionality well, but the 500ms delay on line 336 appears to be a magic number:

await new Promise((resolve) => setTimeout(resolve, 500));

Question: What is the purpose of this delay?

  • Is it to allow the backend to finish writing partial messages?
  • Is it a workaround for race conditions?
  • Could this be made more explicit with a named constant or comment?

Consider:

// Wait for backend to persist partial response after stop
const STOP_SETTLE_TIME_MS = 500;
await new Promise((resolve) => setTimeout(resolve, STOP_SETTLE_TIME_MS));

Otherwise, the implementation correctly:

  • Stops the streaming response
  • Reloads the chat to get the partial results
  • Updates the UI with the refreshed messages
  • Handles errors appropriately

474-474: LGTM! Good code reuse.

Refactoring to use the shared reloadChatFromBackend method reduces code duplication while maintaining the same functionality.

import { atoms, getSettingsKeyAtom } from "@/app/store/global";
import { globalStore } from "@/app/store/jotaiStore";
import { useTabModel } from "@/app/store/tab-model";
import { maybeUseTabModel } from "@/app/store/tab-model";

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -name "*.ts" -o -name "*.tsx" | head -20

Repository: wavetermdev/waveterm

Length of output: 618


🏁 Script executed:

rg "maybeUseTabModel|useTabModel" --type=tsx --type=ts -B 2 -A 5

Repository: wavetermdev/waveterm

Length of output: 91


🏁 Script executed:

fd aipanel.tsx

Repository: wavetermdev/waveterm

Length of output: 96


🏁 Script executed:

cat -n frontend/app/aipanel/aipanel.tsx | head -100

Repository: wavetermdev/waveterm

Length of output: 4887


🏁 Script executed:

cat -n frontend/app/aipanel/aipanel.tsx | sed -n '250,290p'

Repository: wavetermdev/waveterm

Length of output: 2567


🏁 Script executed:

rg "export function maybeUseTabModel|function maybeUseTabModel" -A 8 -B 2

Repository: wavetermdev/waveterm

Length of output: 315


🏁 Script executed:

rg "maybeUseTabModel" -g "*.ts" -g "*.tsx" -A 3 -B 2

Repository: wavetermdev/waveterm

Length of output: 1442


🏁 Script executed:

cat -n frontend/app/store/tab-model.ts | head -50

Repository: wavetermdev/waveterm

Length of output: 2011


🏁 Script executed:

rg "TabModelContext" -B 5 -A 3 frontend/app/store/tab-model.ts

Repository: wavetermdev/waveterm

Length of output: 526


Fix return type annotation and add null-safety check for tabModel.

The maybeUseTabModel() function has an incorrect return type annotation. It's declared to return TabModel (line 9 import), but the actual implementation returns useContext(TabModelContext) where the context is typed as TabModel | undefined with an undefined default value.

This creates a type safety issue at line 282:

body.tabid = tabModel.tabId;

If the component is used outside a TabModelProvider, tabModel will be undefined at runtime, causing a crash.

Required fixes:

  1. Update maybeUseTabModel() return type to TabModel | undefined (or use useTabModel() which properly throws)
  2. Add a null check before accessing tabModel.tabId on line 282, or use optional chaining: body.tabid = tabModel?.tabId;
🤖 Prompt for AI Agents
In frontend/app/aipanel/aipanel.tsx around lines 9 and 282: the imported
maybeUseTabModel() is typed as returning TabModel but its implementation
actually returns TabModel | undefined, which lets tabModel be undefined and
causes a runtime crash when accessing tabModel.tabId; update the
maybeUseTabModel() signature in its module to return TabModel | undefined (or
switch callers to use useTabModel() which throws when missing), and in this file
add a null-safety check before using tabModel on line 282 — either guard with if
(tabModel) { body.tabid = tabModel.tabId } or use optional chaining body.tabid =
tabModel?.tabId so you never dereference undefined.

Comment on lines +70 to +72
export function maybeUseTabModel(): TabModel {
    return useContext(TabModelContext);
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Return type should include undefined.

useContext(TabModelContext) returns TabModel | undefined (since the context default is undefined), but maybeUseTabModel declares its return type as plain TabModel. This hides the possible undefined value from callers, letting them dereference the result without a guard.

🔎 Proposed fix
-export function maybeUseTabModel(): TabModel {
+export function maybeUseTabModel(): TabModel | undefined {
     return useContext(TabModelContext);
 }
🤖 Prompt for AI Agents
In frontend/app/store/tab-model.ts around lines 70 to 72, the function
maybeUseTabModel currently declares a return type of TabModel but calls
useContext(TabModelContext) which can return TabModel | undefined; update the
function signature to return TabModel | undefined and propagate that change to
any callers (or add non-null assertions only where safe) so TypeScript reflects
the possible undefined value from the context.

Comment on lines 363 to 394
func RemoveToolUseCall(chatId string, callId string) error {
	chat := chatstore.DefaultChatStore.Get(chatId)
	if chat == nil {
		return fmt.Errorf("chat not found: %s", chatId)
	}

	for _, genMsg := range chat.NativeMessages {
		chatMsg, ok := genMsg.(*StoredChatMessage)
		if !ok {
			continue
		}
		idx := chatMsg.Message.FindToolCallIndex(callId)
		if idx == -1 {
			continue
		}
		updatedMsg := chatMsg.Copy()
		updatedMsg.Message.ToolCalls = slices.Delete(updatedMsg.Message.ToolCalls, idx, idx+1)
		if len(updatedMsg.Message.ToolCalls) == 0 {
			chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
		} else {
			aiOpts := &uctypes.AIOptsType{
				APIType:    chat.APIType,
				Model:      chat.Model,
				APIVersion: chat.APIVersion,
			}
			chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
		}
		return nil
	}

	return nil
}

⚠️ Potential issue | 🟡 Minor

Missing error handling for PostMessage.

The function ignores the error returned by PostMessage on line 388. While RemoveMessage returns a bool (line 381), PostMessage can return actual errors (e.g., API type/model mismatch). Consider propagating this error to the caller.

🔎 Proposed fix
 		if len(updatedMsg.Message.ToolCalls) == 0 {
 			chatstore.DefaultChatStore.RemoveMessage(chatId, chatMsg.MessageId)
 		} else {
 			aiOpts := &uctypes.AIOptsType{
 				APIType:    chat.APIType,
 				Model:      chat.Model,
 				APIVersion: chat.APIVersion,
 			}
-			chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg)
+			if err := chatstore.DefaultChatStore.PostMessage(chatId, aiOpts, updatedMsg); err != nil {
+				return err
+			}
 		}
 		return nil
🤖 Prompt for AI Agents
In pkg/aiusechat/openaichat/openaichat-convertmessage.go around lines 363 to
394, the call to chatstore.DefaultChatStore.PostMessage ignores its returned
error; update the code to capture the error returned by PostMessage and return
it to the caller (e.g., err := PostMessage(...); if err != nil { return err }),
and also handle the boolean result of RemoveMessage by returning an error if
removal failed so the caller receives failure details instead of silently
proceeding.

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
pkg/aiusechat/usechat.go (1)

324-328: Defer inside loop will not behave as expected.

The defer activeToolMap.Delete(toolCall.ID) on line 327 is inside a for loop. All deferred calls will execute when processAllToolCalls returns, not at the end of each iteration. This means all tool IDs remain in the map until the function completes, which is likely the intended behavior for tracking active tools during processing.

However, this could be clearer. If the intent is to keep tools marked as active during the entire function execution, consider moving the cleanup outside the loop:

🔎 Alternative: explicit cleanup at end of function
-	for _, toolCall := range stopReason.ToolCalls {
-		activeToolMap.Set(toolCall.ID, true)
-		defer activeToolMap.Delete(toolCall.ID)
-	}
+	for _, toolCall := range stopReason.ToolCalls {
+		activeToolMap.Set(toolCall.ID, true)
+	}
+	defer func() {
+		for _, toolCall := range stopReason.ToolCalls {
+			activeToolMap.Delete(toolCall.ID)
+		}
+	}()
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 91cc8f0 and f51674b.

📒 Files selected for processing (3)
  • pkg/aiusechat/gemini/gemini-convertmessage.go (2 hunks)
  • pkg/aiusechat/openaichat/openaichat-convertmessage.go (2 hunks)
  • pkg/aiusechat/usechat.go (6 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/openaichat/openaichat-convertmessage.go
  • pkg/aiusechat/usechat.go
🧬 Code graph analysis (2)
pkg/aiusechat/gemini/gemini-convertmessage.go (4)
pkg/aiusechat/openai/openai-backend.go (2)
  • UpdateToolUseData (411-440)
  • RemoveToolUseCall (442-461)
pkg/aiusechat/uctypes/uctypes.go (2)
  • UIMessageDataToolUse (210-221)
  • AIOptsType (270-284)
pkg/aiusechat/chatstore/chatstore.go (1)
  • DefaultChatStore (19-21)
pkg/aiusechat/gemini/gemini-types.go (2)
  • GeminiChatMessage (15-20)
  • GeminiMessagePart (43-66)
pkg/aiusechat/openaichat/openaichat-convertmessage.go (5)
pkg/aiusechat/gemini/gemini-convertmessage.go (1)
  • RemoveToolUseCall (461-508)
pkg/aiusechat/openai/openai-backend.go (1)
  • RemoveToolUseCall (442-461)
pkg/aiusechat/chatstore/chatstore.go (1)
  • DefaultChatStore (19-21)
pkg/aiusechat/openaichat/openaichat-types.go (1)
  • StoredChatMessage (118-122)
pkg/aiusechat/uctypes/uctypes.go (1)
  • AIOptsType (270-284)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Build for TestDriver.ai
🔇 Additional comments (7)
pkg/aiusechat/openaichat/openaichat-convertmessage.go (1)

363-396: LGTM! Implementation is consistent with the Gemini backend.

The function correctly:

  1. Validates chat existence
  2. Finds and removes the tool call using slices.Delete
  3. Handles the error from PostMessage (addressing the past review comment)
  4. Returns nil when tool call is not found, which is consistent with the openai/openai-backend.go version
pkg/aiusechat/gemini/gemini-convertmessage.go (2)

422-459: LGTM! UpdateToolUseData implementation is correct.

The function properly:

  1. Creates a new message with a copied parts slice to avoid mutating the original
  2. Updates only the specific part's ToolUseData
  3. Handles PostMessage errors correctly
  4. Returns a descriptive error if the tool call is not found

461-508: LGTM! RemoveToolUseCall implementation is correct and consistent with the openaichat backend.

The function correctly:

  1. Uses slices.Clone before slices.Delete to avoid mutating the original slice
  2. Handles the empty parts case by removing the entire message
  3. Properly handles PostMessage errors
  4. Returns nil when tool call is not found (consistent with the openai backends)
pkg/aiusechat/usechat.go (4)

255-255: Good improvement: Using SSE handler context for cancellation.

Switching from context.Background() to sseHandler.Context() ensures that tool approval waits are properly cancelled when the client disconnects, preventing potential resource leaks and zombie goroutines.


354-360: Good addition: Early exit on SSE handler error.

This correctly stops processing tool calls when the client disconnects or an error occurs, preventing unnecessary work and potential issues with sending data to a closed connection.


362-386: Well-structured cleanup logic for handling partial processing.

The cleanup properly:

  1. Unregisters all tool approvals regardless of processing state
  2. Removes tool calls that weren't processed or were canceled
  3. Only posts results for tools that completed successfully

This ensures the chat state remains consistent when processing is interrupted.


442-444: Good addition of debug logging and error handling.

The stopReason logging helps with debugging, and the error logging for PostMessage failures ensures issues are visible without breaking the flow.

Also applies to: 467-469

sawka merged commit 27ac131 into main Dec 20, 2025 (7 checks passed)
sawka deleted the sawka/ai-stop-2 branch December 20, 2025 00:59