fix(cli): pass new messages as steering instructions during active execution #25683
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). See the failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly improves the user experience of the CLI by addressing issues where user input was blocked or combined while the AI was streaming a response. The changes enable immediate interruption of ongoing streams for new input, giving users direct control to redirect conversations. Additionally, queued messages are now processed individually and sequentially, preventing multiple inputs from being merged into a single query.
…ecution

When the user submits a message while the AI is already running a task, instead of silently queuing the message to be sent after completion, immediately inject it as a steering instruction into the active execution. This means the AI can receive corrections, additions, or clarifications mid-task and incorporate them into its ongoing work — without waiting for the current response to finish or discarding the active context.

Also fix useMessageQueue to process queued messages sequentially (one at a time) instead of combining all queued messages into a single merged query.

Update QueuedMessageDisplay hint text to clarify the queue only appears during initialization (not during normal mid-task steering).
4c8c731 to e2ad12e
Code Review
This pull request introduces logic to interrupt a running agent and immediately submit a new message, along with improvements to the message queueing system to handle state transitions more reliably. The review identified a security concern where the new interruption logic bypasses mandatory permission checks and confirmation dialogs. Additionally, it was noted that cancelOngoingRequest should be used instead of the current ref call for proper stream cancellation, and this function needs to be added to the dependency array of the associated hook.
packages/cli/src/ui/AppContainer.tsx (1371-1377)
The newly introduced logic for interrupting a running agent (lines 1371-1377) bypasses the checkPermissions call and the PermissionConfirmationRequest dialog. This allows user-initiated commands that require permission to be executed without mandatory confirmation, potentially leading to unintended data disclosure to the LLM. Additionally, the current call to cancelHandlerRef.current(false) is ineffective in interrupting the active stream; cancelOngoingRequest() should be used instead to ensure proper stream cancellation and UI state updates.
`cancelOngoingRequest();`
References
- Maintain consistency with existing UI behavior across components, including permission check patterns.
- Cancellation methods like abort() should not take arguments, which applies to the suggested use of cancelOngoingRequest().
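The two suggestions above can be combined into one sketch. This is a minimal illustration, not the actual AppContainer code: only the names `checkPermissions` and `cancelOngoingRequest` come from the review text; the `AgentControls` shape and the surrounding wiring are assumptions.

```typescript
// Illustrative sketch of the interruption path the review asks for.
// Assumption: checkPermissions and cancelOngoingRequest behave as the
// review describes; AgentControls itself is a hypothetical shape.
interface AgentControls {
  checkPermissions: (text: string) => boolean; // mandatory confirmation gate
  cancelOngoingRequest: () => void; // cancels the active stream; takes no arguments
  submit: (text: string) => void; // sends the new message
}

// Run the permission gate first; only a confirmed submission may interrupt
// the running stream. cancelOngoingRequest() stands in for the ineffective
// cancelHandlerRef.current(false) call the review flags.
function interruptAndSubmit(text: string, agent: AgentControls): boolean {
  if (!agent.checkPermissions(text)) {
    return false; // user declined: leave the running stream untouched
  }
  agent.cancelOngoingRequest();
  agent.submit(text);
  return true;
}
```

Keeping the permission gate ahead of the cancellation preserves the existing confirmation-dialog behavior while still allowing mid-stream interruption.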
packages/cli/src/ui/AppContainer.tsx (1434)
Since cancelOngoingRequest is now used within handleFinalSubmit, it should be added to the dependency array to ensure the callback remains current.
clearQueue,
cancelOngoingRequest,
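The stale-closure risk behind this comment can be shown with a minimal stand-in for `useCallback`'s caching (hypothetical, not React's actual implementation): a memoized function is reused, along with everything it closed over, until a listed dependency changes.

```typescript
// Minimal cache that mimics useCallback's behavior for illustration:
// the stored function is reused as long as every listed dependency is
// identical (Object.is) to the previous render's dependency.
type Cache<T> = { fn?: T; deps?: unknown[] };

function memoCallback<T>(cache: Cache<T>, fn: T, deps: unknown[]): T {
  const unchanged =
    cache.deps !== undefined &&
    cache.deps.length === deps.length &&
    cache.deps.every((d, i) => Object.is(d, deps[i]));
  if (!unchanged) {
    cache.fn = fn;
    cache.deps = deps;
  }
  return cache.fn as T;
}
```

If `cancelOngoingRequest` is left out of `handleFinalSubmit`'s dependency array, re-renders keep returning the old callback, which still closes over the previous `cancelOngoingRequest` instance: exactly the staleness this comment warns about.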
…nd mode badge

- Full rounded border always visible (top/bottom/left/right) regardless of terminal color mode
- Replace '>' with '❯' prompt symbol
- Show animated GeminiSpinner in prompt area while AI is responding
- Mode badge (Shell mode / Plan mode / YOLO mode / Accepting edits) shown at right edge of input
- Improved placeholder: 'Ask anything • @file to attach • /commands'
…to-activate

- Compact input: horizontal lines top/bottom, ❯ symbol, no border box
- Remove ▀▀▄▄ half-line rows from user messages and status separator
- Elapsed timer on tool execution (shows after 3s)
- Add DEBIAN_FRONTEND=noninteractive to non-interactive shell env
- activate-skill: skip confirmation dialog, activate directly
- Fix: queue messages when agent running instead of losing via hint injection
🛑 Action Required: Evaluation Approval

Steering changes have been detected in this PR. To prevent regressions, a maintainer must approve the evaluation run before this PR can be merged.

Maintainers:

Once approved, the evaluation results will be posted here automatically.
Problem
When a user submits a new message while Gemini is running a task, the message is silently queued and only sent after the current response completes. Multiple queued messages get merged into one combined query. This blocks the user from providing corrections, additions, or additional instructions mid-task.
Solution
Instead of queuing messages during active execution, immediately inject them as steering instructions into the running task. The AI can then incorporate these instructions into its ongoing work without losing context.
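Under the flags named in this PR (`isAgentRunning`, `isSlash`, and an initialization-only queue), the routing decision amounts to something like the following illustrative function. It does not exist in the codebase; the `isInitializing` flag name is an assumption standing in for the "MCP/config not ready" state.

```typescript
type Route = "steer" | "queue" | "submit";

// Illustrative sketch: decide what to do with a newly submitted message.
// Flag names follow the PR description; the function itself is hypothetical.
function routeMessage(opts: {
  isAgentRunning: boolean; // a task is currently executing
  isSlash: boolean; // slash commands are handled locally, never steered
  isInitializing: boolean; // assumption: stands for "MCP/config not ready"
}): Route {
  if (opts.isInitializing) return "queue"; // the only time messages still queue
  if (opts.isAgentRunning && !opts.isSlash) return "steer"; // inject mid-task
  return "submit"; // normal path: send as a fresh query
}
```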
Changes
packages/cli/src/ui/AppContainer.tsx

- handleFinalSubmit: when isAgentRunning && !isSlash, call handleHintSubmit to inject the message as a mid-execution instruction
- The existing steering path (isModelSteeringEnabled) still takes priority — this new branch catches the remaining cases

packages/cli/src/ui/hooks/useMessageQueue.ts

- Process queued messages one at a time instead of joining them with \n\n into one query
- Only queue while initializing (isIdle && !isMcpOrConfigReady)

packages/cli/src/ui/components/QueuedMessageDisplay.tsx

- Update the hint text to clarify the queue only appears during initialization

Behavior After Fix
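The queue change can be sketched as plain logic: each queued message becomes its own query, in order, instead of the queue being joined into one merged query. This is illustrative only; the real useMessageQueue is a React hook that also waits for each response to complete before sending the next message.

```typescript
// Sequential drain (illustrative sketch): submit one message per call,
// in order, instead of queue.join("\n\n") producing one merged query.
function drainQueue(queue: string[], submit: (message: string) => void): void {
  while (queue.length > 0) {
    const next = queue.shift() as string; // take exactly one message
    submit(next);
  }
}
```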
Test plan