Verdict mix:

- **4 merge-as-is**:
  - sst/opencode #25053: clean `testEffect` + `TestClock` migration that retires a real CI timing race in 64 net lines.
  - openai/codex #20305: small policy narrowing, with 67 lines of regression tests locking the deny-list edges.
  - BerriAI/litellm #26851: minimal env-ref blocklist at proxy ingress, with 114 lines of regression tests covering both the metadata-set and pre-call paths.
  - BerriAI/litellm #26849: proper SSRF guard with DNS pinning against rebinding, plus 245 lines of regression tests covering RFC1918, loopback, IPv6-ULA, and redirect-to-private cases.
- **2 merge-after-nits**:
  - sst/opencode #25057: the env-var read sits at module top level and should move into the factory so a test harness can stub it; the new file also ships with no unit test.
  - block/goose #8926: the 44-file mixed-concern PR should be split into 3-4 stacked PRs (UI rewrite, SDK-bindings surface, the `extensionUsage`/`extensionCategories` lib pair, and the `acp/server.rs` protocol change), and the new `useExtensionsSettings` hook duplicates a state shape already living in `ExtensionsSettings.tsx` that should be lifted.
- **0 request-changes.**
- **2 needs-discussion**:
  - openai/codex #20309: the new `PluginAvailabilityStatus` enum crosses 4 schema files, including `app-server-protocol/v2.rs:4657-4672` and a new `PluginSummary` field at `:4679-4682`. This is a wire-protocol change that should land as a contract-only PR first, with explicit version-bump notes; the 49-file implementation move then follows in a second PR.
  - google-gemini/gemini-cli #26240: the 8-script rewrite changes on-disk output from JSON to CSV with no deprecation window and no `--format` flag, and the bot's downstream consumers in the same repo are not updated in this diff. Needs either a parallel-output transition window or a coordinated downstream PR before merge.

Repo coverage: 5 distinct repos (QwenLM/qwen-code skipped this drip; every recent merged candidate was already in prior drips). Two of the eight (litellm #26851 env-ref leak, litellm #26849 SSRF on OAuth discovery) close real cross-trust-boundary holes that were live in production before the fix.
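The SSRF-guard shape the #26849 verdict describes (validate the resolved address against private ranges, then pin it against rebinding) can be sketched minimally in Python. The function names here are illustrative assumptions, not litellm's actual helpers:

```python
import ipaddress
import socket

def is_forbidden_ip(ip: str) -> bool:
    """True for addresses a server-side fetcher should never contact:
    RFC1918 private ranges, loopback, link-local, IPv6 ULA (fc00::/7),
    multicast, and reserved space."""
    addr = ipaddress.ip_address(ip)
    return (
        addr.is_private       # covers RFC1918, loopback, and ULA in CPython
        or addr.is_loopback
        or addr.is_link_local
        or addr.is_multicast
        or addr.is_reserved
    )

def resolve_and_pin(hostname: str) -> str:
    """Resolve the hostname once, validate the result, and return the
    pinned IP. The caller must connect to this IP (carrying the original
    hostname in Host/SNI), never re-resolve: a second lookup is exactly
    the window DNS rebinding exploits."""
    ip = socket.getaddrinfo(hostname, None)[0][4][0]
    if is_forbidden_ip(ip):
        raise ValueError(f"refusing {hostname}: resolves to {ip}")
    return ip
```

Redirect-to-private cases (the last group in the regression suite) fall out of re-running the same check on every redirect hop.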
### W18 drip-195 (2026-04-30)
8 fresh PRs across 5 tracked upstream repos (2 sst/opencode, 2 openai/codex, 2 BerriAI/litellm, 1 google-gemini/gemini-cli, 1 block/goose; QwenLM/qwen-code skipped, since every recent merged candidate was already covered in prior drips). Theme of the drip: "trust-boundary fixes ship next to API-shape extensions." Three of the eight close real cross-trust holes on shipping management/auth surfaces:

- litellm #26854 closes a horizontal-privilege-escalation primitive in `_validate_team_member_add_permissions`: the available-team self-join `or`-clause let any low-trust caller add *another* user as `role=admin` to a team. The fix gates the bypass to caller-self with `role="user"` only, and removes the same loophole from `update_team_member_permissions`.
- litellm #26845 plugs a budget-admission race where two concurrent requests would both pass admission against the same cached spend counter and overshoot the limit by up to one call's cost. It adds a reservation step plus a fail-closed `require_cache_warm` path on `SpendCounterReseed.coalesced` that uses `redis.async_increment`, then mirrors the post-increment value into the in-memory cache atomically.
- sst/opencode #25044 tightens the system-prompt and tool-description guidance so the skill tool only loads on explicit user request or a clear domain match, mirrored verbatim across `system.ts:71-74`, `registry.ts:251-260`, and `skill.txt:1-7`, with a regression test that locks a synthetic generic-overlap skill description.
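The narrowed rule in #26854 can be sketched as follows. This is a minimal illustration with hypothetical names and a flattened signature (the real `_validate_team_member_add_permissions` takes litellm's own request objects, not bare strings):

```python
def validate_team_member_add(
    caller_id: str,
    target_id: str,
    role: str,
    team_is_available: bool,
    caller_is_team_admin: bool,
) -> None:
    """Raise unless the caller may add (target_id, role) to the team."""
    if caller_is_team_admin:
        return  # privileged path, unchanged by the fix
    # The pre-fix bug: `team_is_available` alone short-circuited the
    # permission check, so a low-trust caller could add *another* user
    # at any role, including admin. The fix narrows the shortcut to
    # self-join at the lowest-privilege role only.
    if team_is_available and caller_id == target_id and role == "user":
        return
    raise PermissionError("not authorized to add this team member")
```

Under this shape, `validate_team_member_add("u1", "u1", "user", True, False)` passes, while the old escalation path, `("u1", "u2", "admin", True, False)`, raises.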
Two more are clean API/wire-shape extensions with proper schema-level discipline:

- codex #20299 extracts the stateless `EventMsg → ServerNotification` projection out of `bespoke_event_handling.rs` into a new 807-line `event_mapping.rs` in the protocol crate, with an `is_file_change_output: bool` flag flagged for enum-ification.
- codex #20314 adds an optional `environment_id: Option<String>` to `ShellToolCallParams`/`ShellCommandToolCallParams` with `serde(default, skip_serializing_if)` for backward compatibility, plus an `include_environment_id: bool` gate on the model-facing tool schema so the multi-env routing surface is opt-in per tool, locked by both an in-process Rust integration test and an external 371-line probe.

The remaining three are mid-size cleanups:

- sst/opencode #25052 is a `testEffect` migration of the workspace-adaptor plugin test mirroring drip-194's #25053 `trigger.test.ts` shape; its sequential `Effect.promise` writes, versus the sibling's `Effect.all({concurrency: 2})` pattern, are flagged as a nit.
- gemini-cli #26247 fixes a real config-time bug where `{{HOME}}`-style template vars in MCP stdio config `command`/`args`/`cwd` were passed raw to `spawn`. The two-stage expander handles `{{VAR}}` first, then `${VAR}`/`$VAR`, and falls back to the literal placeholder on a missing var so misconfigs fail loud.
- goose #8925 wires recipe discovery into the ACP server via a new `AvailableCommandsUpdate` notification on `new_session`/`load_session`. Flagged because the PR title says "discovery / execution" but only discovery ships, and the per-session `std::fs::read_to_string` walk over every recipe file is sync-in-async without caching.
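The gemini-cli fix itself is TypeScript, but the expansion order it describes (`{{VAR}}` first, then `${VAR}`/`$VAR`, literal fallback on a miss) is easy to sketch. This Python version is illustrative only, not the patch's code:

```python
import re
import string

def expand_config_value(value: str, env: dict[str, str]) -> str:
    """Two-stage template expansion for config strings.

    Stage 1 replaces {{VAR}}; stage 2 replaces ${VAR} and $VAR. A name
    missing from `env` leaves its placeholder literally in place, so a
    misconfigured command fails loudly at spawn time instead of silently
    running with an empty string spliced in."""
    def stage1(match: re.Match) -> str:
        return env.get(match.group(1), match.group(0))  # literal fallback

    expanded = re.sub(r"\{\{(\w+)\}\}", stage1, value)
    # string.Template covers both ${VAR} and $VAR; safe_substitute keeps
    # unknown names literal instead of raising KeyError.
    return string.Template(expanded).safe_substitute(env)
```

For example, `expand_config_value("{{HOME}}/bin", {"HOME": "/home/u"})` yields `/home/u/bin`, while an unset var leaves `{{MISSING}}/bin` intact for the spawn failure to surface.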
| PR | Title | File |
| --- | --- | --- |
| [#25052](https://github.com/sst/opencode/pull/25052) | test: use testEffect for plugin workspace adaptor | [2026-W18/drip-195/sst-opencode-pr-25052.md](2026-W18/drip-195/sst-opencode-pr-25052.md) |
| [#20299](https://github.com/openai/codex/pull/20299) | Add a helper to map item event to ServerNotifications | [2026-W18/drip-195/openai-codex-pr-20299.md](2026-W18/drip-195/openai-codex-pr-20299.md) |
| [#20314](https://github.com/openai/codex/pull/20314) | Gate multi-environment process tool model surface | [2026-W18/drip-195/openai-codex-pr-20314.md](2026-W18/drip-195/openai-codex-pr-20314.md) |
| [#26854](https://github.com/BerriAI/litellm/pull/26854) | chore(team): close authz bypass via the available-team check | [2026-W18/drip-195/BerriAI-litellm-pr-26854.md](2026-W18/drip-195/BerriAI-litellm-pr-26854.md) |
| [#26247](https://github.com/google-gemini/gemini-cli/pull/26247) | fix: expand template vars in MCP stdio config | [2026-W18/drip-195/google-gemini-gemini-cli-pr-26247.md](2026-W18/drip-195/google-gemini-gemini-cli-pr-26247.md) |
| [#8925](https://github.com/block/goose/pull/8925) | Added recipe discovery / execution to ACP server | [2026-W18/drip-195/block-goose-pr-8925.md](2026-W18/drip-195/block-goose-pr-8925.md) |
Verdict mix:

- **3 merge-as-is**:
  - sst/opencode #25052: clean `testEffect` migration matching the drip-194 #25053 sibling pattern, with one parallelization nit.
  - BerriAI/litellm #26854: minimal-blast-radius patch that gates the available-team self-join bypass to caller-self + `role="user"` only, with a companion fix at `update_team_member_permissions` removing the same loophole.
  - google-gemini/gemini-cli #26247: surgical two-stage `{{VAR}}` then `${VAR}`/`$VAR` expander applied to `command`/`args`/`cwd`, with a literal-placeholder fallback that fails loud on missing vars.
- **5 merge-after-nits**:
  - sst/opencode #25044: the prompt-only fix needs a runtime check at the dispatcher to catch the bug class even when prompt drift erodes the guidance; the verbatim three-site duplication of the new clauses should be lifted to a constant; and the regression assertion on full-sentence prose is brittle, so a complementary "old guidance is absent" assertion would lock the replacement intent.
  - openai/codex #20299: the `is_file_change_output: bool` flag should be enum-shaped upfront for forward compatibility; there is no unit test for the new `item_event_to_server_notification` helper itself; and the `pub use protocol::event_mapping::*` re-export at `lib.rs:17` leaks every future helper into the public API forever and should be narrowed.
  - openai/codex #20314: the same boolean-parameter smell applies to `include_environment_id`; the mandatory `UnifiedExecRequest::environment` field with an `expect("turn environment")` panic in the test fixture is a foot-gun; deleting the two `ToolError::Rejected("exec_command is unavailable")` call sites moves the only pre-existing user-visible signal earlier, so behavior parity should be confirmed; and there is no CHANGELOG entry for the model-facing tool-surface schema change.
  - BerriAI/litellm #26845: the reservation handle stored as `Optional[Dict[str, Any]]` defeats type-checking and should be a `BudgetReservation` Pydantic model; the reservation TTL must be at least the max streaming-response latency or budgets silently overrun; there is no CHANGELOG entry or config flag for operators rolling out the new fail-closed semantics; and the +2012 LOC PR should split the `spend_counter_reseed.py` `require_cache_warm` change into a stand-alone preparatory PR.
  - block/goose #8925: the title oversells "discovery / execution" when only discovery ships; three silent-skip failure modes in `build_available_commands_from_slash_commands` need diagnostic logging; the per-session sync `std::fs::read_to_string` walk over every recipe is avoidable latency without caching; and there are zero new tests on +95 LOC of new wire-format-emitting code.
- **0 request-changes, 0 needs-discussion.**

Repo coverage: 5 distinct repos. Three of the eight (litellm #26854 horizontal privilege escalation in team-member-add, litellm #26845 budget-admission race that allows concurrent overshoot, sst/opencode #25044 skill-load over-firing on generic workflow overlap) close real production-affecting trust/safety holes. The codex pair (#20299, #20314) follows the same architectural pattern as drip-194 #20309 (extract a stateless contract from a stateful dispatcher into the protocol crate) and reads as part of an ongoing app-server cleanup.
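The reservation-step admission described for #26845 can be sketched with the typed handle the review asks for. This in-process version is an assumption-laden illustration: a lock stands in for the patch's `redis.async_increment` path, so it shows only the admission math (check spent plus outstanding reservations, not spent alone), not the distributed cache-mirroring:

```python
import threading
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BudgetReservation:
    """Typed reservation handle, the review's suggested replacement
    for a raw Optional[Dict[str, Any]]."""
    key: str
    amount: float

@dataclass
class SpendCounter:
    limit: float
    spent: float = 0.0
    reserved: float = 0.0
    _lock: threading.Lock = field(default_factory=threading.Lock, repr=False)

    def try_reserve(self, key: str, estimate: float) -> Optional[BudgetReservation]:
        """Admit only if spent + outstanding reservations + this request's
        estimate stays under the limit, so two concurrent requests cannot
        both pass against the same counter value."""
        with self._lock:
            if self.spent + self.reserved + estimate > self.limit:
                return None  # fail closed
            self.reserved += estimate
            return BudgetReservation(key, estimate)

    def settle(self, reservation: BudgetReservation, actual_cost: float) -> None:
        """Release the reservation and record the real cost once known."""
        with self._lock:
            self.reserved -= reservation.amount
            self.spent += actual_cost
```

The TTL nit maps onto `settle`: if a streaming response outlives the reservation's expiry in the real (Redis-backed) version, the reserved amount is released early and the overshoot window reopens.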