Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4db4665051
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```rust
cx.spawn(async move {
    responder.respond_with_result(agent.on_initialize(req).await)?;
    Ok(())
})?;
```
Keep initialize ordered before session creation
Offloading InitializeRequest to cx.spawn removes the dispatch loop’s per-connection ordering, so a pipelined NewSessionRequest can run before on_initialize sets client_fs_capabilities/client_terminal (these are later read with defaults during agent setup). In that race, the new session is built with false capability defaults and ACP developer tool permissions can be downgraded incorrectly for that session; previously this could not happen because initialize ran inline before the next message was handled.
```rust
cx.spawn(async move {
    match agent.handle_custom_request(&req.method, req.params).await {
        Ok(json) => responder.respond(json)?,
```
Serialize custom request handlers that persist config
Spawning every custom request here allows config-mutating custom methods to execute concurrently (on_upsert_config, on_remove_config, on_upsert_secret, etc.), but those paths each call load_config() and then write immediately; since Config locking is per-instance, overlapping requests are no longer serialized and can lose updates when two writes race. Before this change, these custom requests were processed sequentially on the dispatch loop, which avoided this write race.
This seemed solved in the old REST backend, but not in the SACP server. The fix was a bit bigger, so I did it in a separate branch: #8893
💡 Codex Review
Reviewed commit: 1a5934143d
```rust
.if_notification(|notif: CancelNotification| async {
    let agent = agent.clone();
    cx.spawn(async move {
        agent.on_cancel(notif).await?;
        Ok(())
```
Keep cancel handling ordered with message dispatch
Spawning CancelNotification breaks the per-connection ordering that previously ensured cancel was applied before later messages on the same stream. on_cancel cancels the session’s current cancel_token, while each on_prompt replaces that token; if a client sends cancel and then quickly starts another prompt, this detached task can run late and cancel the new prompt instead of the in-flight one. Keeping cancel inline (or keying cancel to a specific turn) avoids this wrong-target cancellation race.
```rust
let cx_spawn = cx.clone();
cx.spawn(async move {
    let cx = cx_spawn;
    let value_id = req.value.as_value_id()
```
Serialize config mutation requests on the dispatch path
SetSessionConfigOptionRequest is now offloaded via cx.spawn, so provider/mode/model mutations no longer execute in request order. When clients pipeline config updates (for example provider then model), update_provider/on_set_mode/on_set_model can race and whichever task finishes last wins, producing stale final session config relative to request order. This handler used to be serialized by the dispatch loop, so this introduces nondeterministic state updates.
matt2e left a comment
This speeds up app launch for me (now ~0.5s for the list of sessions to show, whereas it used to take 4+ seconds).
Added some comments above about some smaller fixes
```rust
cx.spawn(async move {
    agent.on_cancel(notif).await?;
    Ok(())
})?;
```
Looking at the Codex code review, this was flagged as a race (if a cancellation request and a request to send a message come in quick succession, then `on_cancel` can get confused and cancel the second request rather than the original).
I expected the app to make the cancellation call, then wait for the session to become idle, then send the new message...
Goose 1 does work this way.
Goose 2 doesn't... It sets the frontend state to idle for the session immediately. If we want to fix that, it seems outside the scope of this PR...
I think this is unlikely to be a problem in real life, but it's reasonable to make this safe by not spawning for cancellation calls, since they are always fast (it's just an in-memory switch) and it fixes the race.
```diff
- cx.spawn(async move {
-     agent.on_cancel(notif).await?;
-     Ok(())
- })?;
+ agent.on_cancel(notif).await?;
```
Yeah, I think that sounds reasonable. I'll make cancel blocking.
💡 Codex Review
Reviewed commit: b8870d0bd2
```rust
Err(e) => {
    responder.respond_with_error(e)?;
    let cx_spawn = cx.clone();
    cx.spawn(async move {
```
Serialize legacy mode/model mutation requests per session
Spawning SetSessionModeRequest and SetSessionModelRequest lets these state-mutating handlers run concurrently, so pipelined updates can complete out of order and leave the session on stale settings (for example, an earlier mode/model request finishing after a later one and overwriting it). This is a regression from the previous inline dispatch behavior, which preserved request order for these mutations on a connection.
I do think this change will likely uncover some concurrency issues. But in general we should be protecting the resources themselves (the agent state), not blocking here at the request level most of the time. The exceptions for initialize and cancel make sense but I don't think we should block at the request handling level elsewhere.
```rust
cx.spawn(async move {
    responder.respond(agent.on_close_session(&req.session_id.0).await?)?;
```
Keep close-session handling ordered before subsequent requests
CloseSessionRequest is now detached via cx.spawn, so a client that pipelines close_session and then another request for the same session can have the follow-up request execute before on_close_session removes the session entry. In that case the server may still process operations against a session the client already asked to close, producing nondeterministic lifecycle behavior that did not occur when close ran inline.
* main: (24 commits)
  - fix: copy and content improvements in goose2 (#8886)
  - feat: make ollama host configurable in goose2 (#8912)
  - polish sidebar navigation and project icons (#8896)
  - fix: model picker stays usable during provider loading (#8900)
  - feat: update provider row after saving credentials (#8914)
  - feat: support google model inventory refresh (#8913)
  - chore: Added goose 2 UI refactor review skill (#8903)
  - blog: goose with peekaboo (#8884)
  - blog: Built-in Local Inference blogpost. (#8808)
  - perf: parallelize provider resolution and eagerly init SQLite pool (#8899)
  - refactor: update goose2 credential management behind provider-scoped ACP/core API (#8887)
  - fix: handle acp requests concurrently (#8781)
  - build: set LLAMA_STATIC_CRT for Windows CUDA (#8901)
  - perf: deduplicate _goose/providers/list RPC call at startup (#8873)
  - chore: add a bit more instructions in the release pr (#8890)
  - chore: disable spellcheck in model search (#8889)
  - add skills to the chat composer (#8881)
  - mergeable configs + cleanup (#8378)
  - refactor: agent provider to use explicit type states (#8879)
  - [goose2] MCP Apps: hydrate and replay app payloads in Goose2 (#8632)
  - ...
This was slowing things down quite a bit.