
fix(cli): pass new messages as steering instructions during active execution#25683

Open
rushikeshsakharleofficial wants to merge 4 commits into google-gemini:main from rushikeshsakharleofficial:fix/interrupt-on-submit

Conversation


@rushikeshsakharleofficial rushikeshsakharleofficial commented Apr 20, 2026

Problem

When a user submits a new message while Gemini is running a task, the message is silently queued and only sent after the current response completes. Multiple queued messages get merged into one combined query. This prevents the user from providing corrections, additions, or new instructions mid-task.

Solution

Instead of queuing messages during active execution, immediately inject them as steering instructions into the running task. The AI can then incorporate these instructions into its ongoing work without losing context.

Changes

packages/cli/src/ui/AppContainer.tsx

  • Add an unconditional steering branch in handleFinalSubmit: when isAgentRunning && !isSlash, call handleHintSubmit to inject the message as a mid-execution instruction
  • The existing model steering guard (isModelSteeringEnabled) still takes priority; this new branch catches the remaining cases
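
The dispatch decision described above can be sketched roughly as follows. This is a hypothetical model, not the actual AppContainer.tsx code: the flag and handler names (isAgentRunning, isSlash, isModelSteeringEnabled, handleHintSubmit) mirror the PR description, and the real handleFinalSubmit has more state to consider.

```typescript
// Possible actions for a newly submitted message.
type SubmitAction = 'steer' | 'queue' | 'submit';

interface SubmitContext {
  isSlash: boolean;
  isAgentRunning: boolean;
  isModelSteeringEnabled: boolean;
  isIdle: boolean;
  isMcpOrConfigReady: boolean;
}

function dispatchSubmit(ctx: SubmitContext): SubmitAction {
  // Slash commands always go through the normal submit path.
  if (ctx.isSlash) return 'submit';
  // Existing guard: explicit model steering takes priority while running.
  if (ctx.isAgentRunning && ctx.isModelSteeringEnabled) return 'steer';
  // New branch: any other message during active execution is injected
  // immediately as a steering instruction instead of being queued.
  if (ctx.isAgentRunning) return 'steer';
  // Initialization phase: queue until MCP servers and config are ready.
  if (ctx.isIdle && !ctx.isMcpOrConfigReady) return 'queue';
  return 'submit';
}
```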

packages/cli/src/ui/hooks/useMessageQueue.ts

  • Fix queue processing to be sequential (one message at a time) instead of combining all queued messages with \n\n into one query
  • The queue is still used only for the initialization-phase case (isIdle && !isMcpOrConfigReady)
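
A minimal model of the queue change (an assumed shape, not the actual hook code in useMessageQueue.ts): drain one message per processing tick instead of joining every pending message with \n\n into a single merged query.

```typescript
// Old behavior: all queued messages collapse into one submission.
function drainCombined(queue: string[], submit: (msg: string) => void): void {
  if (queue.length > 0) submit(queue.splice(0).join('\n\n'));
}

// New behavior: only the head of the queue is submitted; the rest wait for
// the next idle tick, so each message becomes its own query.
function drainSequential(queue: string[], submit: (msg: string) => void): void {
  const next = queue.shift();
  if (next !== undefined) submit(next);
}
```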

packages/cli/src/ui/components/QueuedMessageDisplay.tsx

  • Update hint text to clarify the queue is only active during initialization

Behavior After Fix

| Scenario | Before | After |
| --- | --- | --- |
| New message during active task | Queued, sent after completion | Injected immediately as steering instruction |
| Multiple messages during task | Combined into one query | Each injected as a separate steering instruction |
| Message during MCP init | Queued (unchanged) | Queued (unchanged) |
| Model steering explicitly enabled | Steered (unchanged) | Steered (unchanged) |

Test plan

  • Type a message while Gemini is running a multi-step task — verify the instruction is picked up and incorporated
  • Type a correction mid-task ("actually, skip that step") — verify it affects the ongoing execution
  • Verify slash commands still work correctly during execution
  • Verify messages during MCP initialization still queue correctly

@rushikeshsakharleofficial rushikeshsakharleofficial requested a review from a team as a code owner April 20, 2026 06:42

google-cla bot commented Apr 20, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the user experience of the CLI by addressing issues where user input was blocked or combined while the AI was streaming a response. The changes enable immediate interruption of ongoing streams for new input, giving users direct control to redirect conversations. Additionally, queued messages are now processed individually and sequentially, preventing multiple inputs from being merged into a single query.

Highlights

  • Stream Interruption: Implemented functionality to immediately cancel an active AI response stream and process new user input, enhancing responsiveness.
  • Sequential Queue Processing: Modified the message queue to process messages one at a time instead of combining multiple queued inputs into a single query.
  • UI Clarity: Updated the hint text in the queued message display to better explain its purpose during initialization.

…ecution

When the user submits a message while the AI is already running a task,
instead of silently queuing the message to be sent after completion,
immediately inject it as a steering instruction into the active execution.

This means the AI can receive corrections, additions, or clarifications
mid-task and incorporate them into its ongoing work — without waiting for
the current response to finish or discarding the active context.

Also fix useMessageQueue to process queued messages sequentially (one at a
time) instead of combining all queued messages into a single merged query.

Update QueuedMessageDisplay hint text to clarify the queue only appears
during initialization (not during normal mid-task steering).
@rushikeshsakharleofficial rushikeshsakharleofficial changed the title from "fix(cli): interrupt current stream on new user input instead of queuing" to "fix(cli): pass new messages as steering instructions during active execution" on Apr 20, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces logic to interrupt a running agent and immediately submit a new message, along with improvements to the message queueing system to handle state transitions more reliably. The review identified a security concern where the new interruption logic bypasses mandatory permission checks and confirmation dialogs. Additionally, it was noted that cancelOngoingRequest should be used instead of the current ref call for proper stream cancellation, and this function needs to be added to the dependency array of the associated hook.

I was unable to create individual review comments, so my feedback is included inline below.

packages/cli/src/ui/AppContainer.tsx (1371-1377)

Severity: high (security)

The newly introduced logic for interrupting a running agent (lines 1371-1377) bypasses the checkPermissions call and the PermissionConfirmationRequest dialog. This allows user-initiated commands that require permission to be executed without mandatory confirmation, potentially leading to unintended data disclosure to the LLM. Additionally, the current call to cancelHandlerRef.current(false) is ineffective in interrupting the active stream; cancelOngoingRequest() should be used instead to ensure proper stream cancellation and UI state updates.

        cancelOngoingRequest();
References
  1. Maintain consistency with existing UI behavior across components, including permission check patterns.
  2. Cancellation methods like abort() should not take arguments, which applies to the suggested use of cancelOngoingRequest().

packages/cli/src/ui/AppContainer.tsx (1434)

Severity: high

Since cancelOngoingRequest is now used within handleFinalSubmit, it should be added to the dependency array to ensure the callback remains current.

      clearQueue,
      cancelOngoingRequest,
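
The hazard behind this dependency-array comment can be illustrated in plain TypeScript (no React, and not the actual AppContainer code): a memoized callback that does not list the captured function in its deps keeps calling the identity captured when it was first created.

```typescript
// Tiny model of useCallback: reuse the cached callback while deps are unchanged.
type Cached = { deps: unknown[]; cb: () => string };
let cache: Cached | null = null;

function memoizedCallback(cb: () => string, deps: unknown[]): () => string {
  const unchanged =
    cache !== null &&
    cache.deps.length === deps.length &&
    cache.deps.every((d, i) => d === deps[i]);
  if (!unchanged) cache = { deps, cb };
  return (cache as Cached).cb;
}

// A submit handler that closes over a cancel function, as handleFinalSubmit
// closes over cancelOngoingRequest.
const makeSubmit = (cancelOngoingRequest: () => string) => () =>
  cancelOngoingRequest();

// First render: memoize with an empty dependency list.
const submitA = memoizedCallback(makeSubmit(() => 'old cancel'), []);
// Re-render with a new cancel function, but deps still []: callback is stale.
const submitB = memoizedCallback(makeSubmit(() => 'new cancel'), []);
// Re-render with the function listed as a dep: callback is refreshed.
const freshCancel = () => 'new cancel';
const submitC = memoizedCallback(makeSubmit(freshCancel), [freshCancel]);
```

Calling submitB still invokes the old cancel function, which is why the reviewer asks for cancelOngoingRequest in the dependency array.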

@gemini-cli gemini-cli bot added the status/need-issue Pull requests that need to have an associated issue. label Apr 20, 2026
…nd mode badge

- Full rounded border always visible (top/bottom/left/right) regardless of terminal color mode
- Replace '>' with '❯' prompt symbol
- Show animated GeminiSpinner in prompt area while AI is responding
- Mode badge (Shell mode / Plan mode / YOLO mode / Accepting edits) shown at right edge of input
- Improved placeholder: 'Ask anything  •  @file to attach  •  /commands'
…to-activate

- Compact input: horizontal lines top/bottom, ❯ symbol, no border box
- Remove ▀▀▄▄ half-line rows from user messages and status separator
- Elapsed timer on tool execution (shows after 3s)
- Add DEBIAN_FRONTEND=noninteractive to non-interactive shell env
- activate-skill: skip confirmation dialog, activate directly
- Fix: queue messages when agent running instead of losing via hint injection
@rushikeshsakharleofficial rushikeshsakharleofficial requested a review from a team as a code owner April 20, 2026 08:54
@github-actions

🛑 Action Required: Evaluation Approval

Steering changes have been detected in this PR. To prevent regressions, a maintainer must approve the evaluation run before this PR can be merged.

Maintainers:

  1. Go to the Workflow Run Summary.
  2. Click the yellow 'Review deployments' button.
  3. Select the 'eval-gate' environment and click 'Approve'.

Once approved, the evaluation results will be posted here automatically.
