feat(tool/bluebubbles): group management tool — rename, add/remove participants, icon, leave #2532
maxtongwang wants to merge 82 commits into zeroclaw-labs:main from
Conversation
Adds @claude mention support on PRs using anthropics/claude-code-action@v1. Mirrors the workflow from maxtongwang/openclaw. Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds use_sticky_comment: true to claude-review workflow so all review updates are consolidated into a single comment instead of flooding the PR with multiple comments. Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds BlueBubblesChannel — iMessage via a self-hosted BlueBubbles macOS server.
- Webhook mode: receives new-message events at POST /bluebubbles
- Sender normalization: strips imessage:/sms:/auto: prefixes, lowercases emails
- fromMe caching: FIFO VecDeque+HashMap cache with reply context injection
- Optional inbound webhook auth: Authorization: Bearer via webhook_secret config
- Outbound send: POST /api/v1/message/text with ?password= query param + tempGuid
- Health check: GET /api/v1/ping (matches OpenClaw probeBlueBubbles)
- constant_time_eq() for secret comparison; manual Debug redacts password
Closes #1
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
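The constant-time secret comparison mentioned in this commit can be sketched as a small standalone helper. This is a minimal illustration, not the channel's actual implementation; only the name `constant_time_eq` comes from the commit message:

```rust
// Sketch of a constant-time byte comparison: XOR every byte pair and OR the
// results together, so running time does not depend on where the first
// mismatch occurs. The early length check leaks only the length, which is
// generally acceptable for fixed-format secrets.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"hunter2", b"hunter2"));
    assert!(!constant_time_eq(b"hunter2", b"Hunter2"));
    assert!(!constant_time_eq(b"hunter2", b"hunter22"));
    println!("ok");
}
```

In practice a vetted crate (e.g. `subtle`) is often preferred over a hand-rolled loop, since the compiler is free to optimize naive comparisons.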
Add a second BlueBubbles endpoint for a personal Apple ID that stores incoming iMessages to shared memory but never calls the LLM or sends a reply. This enables multi-tenant iMessage setups where one ZeroClaw process handles both a bot account (POST /bluebubbles) and a personal account (POST /bluebubbles-personal) from two BlueBubbles instances.
Changes:
- src/config/schema.rs: add bluebubbles_personal: Option<BlueBubblesConfig> to ChannelsConfig (reuses the existing BlueBubblesConfig struct)
- src/gateway/mod.rs: add the bluebubbles_personal field to AppState, wire construction, register the POST /bluebubbles-personal route, and implement the handle_bluebubbles_personal_webhook handler
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- [build] rustc-wrapper = "sccache" in .cargo/config.toml — caches compiled crates across clean builds; sccache auto-starts on first use
- [profile.dev] split-debuginfo = "unpacked" in Cargo.toml — skips dsymutil on macOS, eliminating the post-link debug-symbol packaging delay
Note: mold 2.40.4 dropped Mach-O support, so it cannot be used as a macOS linker; split-debuginfo is the primary link-time win on macOS.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Convert Discord-style markdown in LLM responses to BB Private API
attributedBody segments for iMessage rendering:
- **bold** → {bold: true}
- *italic* → {italic: true}
- ~~strikethrough~~ → {strikethrough: true}
- __underline__ → {underline: true}
Markers nest arbitrarily; plain-text fallback strips markers.
Update system prompt to instruct LLM to use rich markdown + emoji.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
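The plain-text fallback described above (stripping markers when rich rendering is unavailable) can be sketched roughly as follows. The function name and exact marker handling are illustrative assumptions, not the channel's API:

```rust
// Illustrative sketch: strip Discord-style markers (**, __, ~~, *) for a
// plain-text fallback. Order matters: two-character markers are removed
// before the single '*' so "**" is not half-consumed.
fn strip_markers(input: &str) -> String {
    let mut out = input.to_string();
    for marker in ["**", "__", "~~", "*"] {
        out = out.replace(marker, "");
    }
    out
}

fn main() {
    assert_eq!(strip_markers("**bold** and *italic*"), "bold and italic");
    assert_eq!(strip_markers("~~gone~~ __under__"), "gone under");
    println!("ok");
}
```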
Add start_typing/stop_typing to BlueBubblesChannel using BB Private API
POST /api/v1/chat/{chatGuid}/typing. Indicator refreshes every 4s since
BB typing events expire in ~5s. Gateway handler starts indicator before
LLM call and stops it (both success and error paths) before sending reply.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
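The refresh loop can be sketched as a cancellable background task. This thread-based stand-in illustrates only the cadence logic; the real channel uses a tokio task and an HTTP call, and the 40 ms interval here stands in for the real 4 s:

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Thread-based sketch of the typing-indicator refresh loop. In the real
// channel the "send" is a POST to /api/v1/chat/{chatGuid}/typing; here a
// counter stands in for the HTTP call, and the short interval stands in for
// the 4 s cadence (chosen because BB typing events expire in ~5 s).
fn start_typing_refresher(stop: Arc<AtomicBool>, sent: Arc<AtomicU32>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        while !stop.load(Ordering::SeqCst) {
            sent.fetch_add(1, Ordering::SeqCst); // stand-in for the typing POST
            thread::sleep(Duration::from_millis(40));
        }
    })
}

fn main() {
    let stop = Arc::new(AtomicBool::new(false));
    let sent = Arc::new(AtomicU32::new(0));
    let handle = start_typing_refresher(Arc::clone(&stop), Arc::clone(&sent));
    thread::sleep(Duration::from_millis(150)); // "LLM call" in progress
    stop.store(true, Ordering::SeqCst); // stop_typing on both success and error paths
    handle.join().unwrap();
    assert!(sent.load(Ordering::SeqCst) >= 2);
    println!("ok");
}
```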
- markdown_to_attributed_body: convert **bold**, *italic*, ~~strike~~, __underline__, `code` (→ bold), # headers (→ bold), ``` blocks (plain) into BB Private API attributedBody segments
- start_typing/stop_typing: background loop refreshing the BB typing indicator every 4 s while the LLM processes; aborted on response
- EFFECT_MAP + extract_effect(): LLM can append [EFFECT:name] tags (slam, loud, gentle, invisible-ink, confetti, balloons, fireworks, lasers, love, celebration, echo, spotlight); stripped from content, passed as effectId to the BB Private API
- Updated channel_delivery_instructions with a full style + effect guide
- 38 unit tests passing
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
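The `[EFFECT:name]` extraction described in this commit might look roughly like the sketch below; the function signature and return shape are assumptions, with the tag grammar and effect list taken from the commit message:

```rust
// Sketch of extract_effect: pull a trailing "[EFFECT:name]" tag out of the
// LLM reply and validate it against the known effects. Unknown or missing
// tags leave the content untouched.
const EFFECTS: &[&str] = &[
    "slam", "loud", "gentle", "invisible-ink", "confetti", "balloons",
    "fireworks", "lasers", "love", "celebration", "echo", "spotlight",
];

fn extract_effect(text: &str) -> (String, Option<String>) {
    if let Some(start) = text.rfind("[EFFECT:") {
        if let Some(end) = text[start..].find(']') {
            let name = &text[start + "[EFFECT:".len()..start + end];
            if EFFECTS.contains(&name) {
                let stripped = format!("{}{}", &text[..start], &text[start + end + 1..]);
                return (stripped.trim().to_string(), Some(name.to_string()));
            }
        }
    }
    (text.to_string(), None)
}

fn main() {
    let (text, effect) = extract_effect("Big news! [EFFECT:confetti]");
    assert_eq!(text, "Big news!");
    assert_eq!(effect.as_deref(), Some("confetti"));
    assert_eq!(extract_effect("no tag here").1, None);
    println!("ok");
}
```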
- Add ignore_senders to BlueBubblesConfig and BlueBubblesChannel
- Personal webhook handler skips storing messages where the sender is in ignore_senders
- Configure ignore_senders = ["tongtong901005@gmail.com"] in config.toml
- Fix release-fast profile: lto=false, codegen-units=16, opt-level=2 for fast local builds with sccache support
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Direct the LLM to use web_fetch on canonical sources (ESPN, wttr.in, etc.) for sports, weather, news — not just web_search snippets
- Remove the "be concise" instruction that was cutting the tool loop short
- Instruct the bot to complete full research before replying
- Keep all text style and effect guidance
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…l endpoint
Port all three CodeRabbit improvements from feat/channel-bluebubbles to fork main and extend them consistently to the bluebubbles_personal endpoint:
- typing_handles: replace the single Mutex<Option<JoinHandle>> with Mutex<HashMap<String, JoinHandle>> keyed by chat GUID so concurrent conversations do not cancel each other's typing indicators
- is_sender_ignored(): new method checked before is_sender_allowed() in parse_webhook_payload so ignore_senders always takes precedence over the allowlist; removes the now-redundant inline ignore check from the personal handler since parse_webhook_payload handles it upstream
- Secret wiring: add bluebubbles and bluebubbles_personal password + webhook_secret to decrypt_channel_secrets and encrypt_channel_secrets
- Debug impl: add the ignore_senders field to BlueBubblesConfig debug output
- Tests: add three unit tests covering ignore_senders exact match, precedence over the allowlist, and empty-list no-op behaviour (41 total)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
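The precedence rule above (`is_sender_ignored` checked before `is_sender_allowed`) boils down to a small predicate. This sketch uses an illustrative name, `sender_permitted`, and assumes an empty allowlist means "allow everyone":

```rust
// Sketch of the precedence rule from this commit: the ignore list is checked
// before the allowlist, so an ignored sender is rejected even when it is
// also allowlisted, and an empty ignore list is a no-op.
fn sender_permitted(sender: &str, ignore_senders: &[&str], allow_senders: &[&str]) -> bool {
    let s = sender.to_lowercase();
    if ignore_senders.iter().any(|i| i.to_lowercase() == s) {
        return false; // ignore_senders always takes precedence
    }
    allow_senders.is_empty() || allow_senders.iter().any(|a| a.to_lowercase() == s)
}

fn main() {
    assert!(!sender_permitted("a@b.com", &["a@b.com"], &["a@b.com"])); // precedence
    assert!(sender_permitted("A@B.com", &[], &["a@b.com"]));           // allowlisted, case-insensitive
    assert!(sender_permitted("anyone@x.com", &[], &[]));               // empty lists: no-op
    println!("ok");
}
```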
Conflicts resolved:
- .gitignore: keep the fork's test-config.toml entry alongside the upstream result
- Cargo.lock: accept upstream (no new deps from the BB channel)
- src/config/schema.rs: keep BlueBubblesConfig + bluebubbles/personal fields alongside upstream GitHubConfig; wire both into encrypt/decrypt
- src/gateway/mod.rs: keep BB + personal handlers alongside the upstream GitHub handler; update run_gateway_chat_with_tools calls to the 3-arg signature; add missing BB fields to new GitHub test fixtures
Also fixes:
- src/providers/cursor.rs: add the missing quota_metadata field (pre-existing upstream bug now caught by the stricter ChatResponse struct)
- BB handler: update sanitize_gateway_response to the 3-arg form matching the upstream API change (adds a leak_guard parameter)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The BlueBubbles channel now downloads and transcribes audio attachments when `[transcription]` is configured. iMessage sends voice memos in Core Audio Format (CAF), which Groq Whisper does not accept natively; CAF files are converted to WAV via ffmpeg before upload.
Changes:
- transcription.rs: add `convert_caf_to_wav` — detects CAF by extension, writes to temp files (seekable input required), runs `ffmpeg -ar 16000 -ac 1`, cleans up on all exit paths; returns Ok(None) for non-CAF so callers reuse the original bytes
- bluebubbles.rs: add a `transcription` field, `with_transcription` builder, `AudioAttachment` struct, `extract_audio_attachments`, `download_attachment` (percent-encoding inline), `download_and_transcribe`, and a public async `parse_webhook_payload_with_transcription`; gracefully falls back to a `<media:audio>` placeholder when transcription is disabled, ffmpeg is absent, or the API call fails
- gateway/mod.rs: wire `config.transcription.clone()` into both BB channel constructions; replace the synchronous `parse_webhook_payload` call with the async `parse_webhook_payload_with_transcription` in both webhook handlers
No new Cargo dependencies — uses tokio::process and tokio::fs, already present. No new config keys — wired through the existing `[transcription]` block.
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
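Two pieces of the conversion path described above are easy to show in isolation: extension-based CAF detection and the ffmpeg argument list for the 16 kHz mono conversion. Function names here are illustrative, not the module's exact API:

```rust
use std::path::Path;

// Sketch of CAF detection by file extension, plus the ffmpeg argument list
// for a 16 kHz mono WAV (flags taken from the commit message).
fn is_caf(path: &str) -> bool {
    Path::new(path)
        .extension()
        .map(|e| e.eq_ignore_ascii_case("caf"))
        .unwrap_or(false)
}

fn ffmpeg_caf_to_wav_args(input: &str, output: &str) -> Vec<String> {
    // -y: overwrite output; -ar 16000 -ac 1: 16 kHz mono
    ["-y", "-i", input, "-ar", "16000", "-ac", "1", output]
        .iter()
        .map(|s| s.to_string())
        .collect()
}

fn main() {
    assert!(is_caf("/tmp/voice.CAF"));
    assert!(!is_caf("/tmp/voice.mp3"));
    assert_eq!(ffmpeg_caf_to_wav_args("in.caf", "out.wav")[4], "16000");
    println!("ok");
}
```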
Mirrors the OpenClaw approach: read the attachment from `original_path` in the BB webhook payload (local filesystem, no REST download) and shell out to the `whisper` CLI. Python whisper handles CAF (iMessage voice memos) natively via ffmpeg — no pre-conversion step needed.
- Remove convert_caf_to_wav + ffmpeg + the Groq HTTP download path
- Add transcribe_audio_local: resolves whisper in PATH and common Homebrew/system paths, runs with --model turbo --output_format txt, reads <tmpdir>/<stem>.txt, cleans up on all exit paths
- extract_audio_attachments: use originalPath instead of guid
- transcribe_local: read the file directly, no REST API call
- Fix a pre-existing build break: add OauthConfig/GoogleOauthConfig schema types + re-export so oauth.rs compiles
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
BB webhook attachments do not include originalPath; extract guid instead
and download bytes via /api/v1/attachment/{guid}/download, then write to
a temp file and pass to the local whisper CLI for transcription.
BB converts CAF→MP3 before webhooking so the download is typically
audio/mpeg — no pre-conversion needed.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two fixes:
1. Instead of REST API download, read the BB-converted MP3 directly from
the local filesystem at the predictable path:
~/Library/Application Support/bluebubbles-server/Convert/{guid}/{name}.mp3
BB converts CAF→MP3 before sending the webhook; since ZeroClaw runs on
the same Mac as BB, no network call is needed.
2. Spawn webhook processing in background (tokio::spawn) so the handler
returns 200 immediately. BB times out webhooks at 30 s, but whisper
transcription takes longer — caused 408 timeouts with the previous
synchronous approach.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
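Fix 2 is the classic acknowledge-then-work pattern. A thread-based stand-in (the real handler uses `tokio::spawn`) shows why the handler now responds well inside BB's 30 s budget:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for the fixed handler: acknowledge the webhook immediately and
// run the slow transcription + LLM work in a background task, so the HTTP
// response never approaches BB's 30 s webhook timeout.
fn handle_webhook(payload: String) -> u16 {
    thread::spawn(move || {
        // stand-in for whisper transcription + LLM call + reply send
        let _ = payload;
    });
    200 // returned before the background work finishes
}

fn main() {
    let start = Instant::now();
    assert_eq!(handle_webhook("{\"data\":{}}".to_string()), 200);
    assert!(start.elapsed() < Duration::from_secs(1)); // no blocking on the worker
    println!("ok");
}
```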
BB's transferName in webhooks is inconsistent — sometimes includes .mp3
already, causing triple-extension paths. Instead of constructing the
filename from transferName, scan the Convert/{guid}/ directory for the
first audio file (mp3/m4a/aac/wav/ogg). This is robust to any BB
filename convention.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
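The directory scan described above can be sketched as follows; `find_audio_file` is an illustrative name, and the extension list comes from the commit message:

```rust
use std::path::{Path, PathBuf};

// Sketch of the robust lookup: scan Convert/{guid}/ for the first file with
// a known audio extension instead of reconstructing the name from
// transferName (which sometimes already includes ".mp3").
fn find_audio_file(dir: &Path) -> Option<PathBuf> {
    const AUDIO_EXTS: &[&str] = &["mp3", "m4a", "aac", "wav", "ogg"];
    let mut entries: Vec<PathBuf> = std::fs::read_dir(dir)
        .ok()?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .collect();
    entries.sort(); // deterministic "first" file
    entries.into_iter().find(|p| {
        p.extension()
            .and_then(|e| e.to_str())
            .map(|e| AUDIO_EXTS.contains(&e.to_lowercase().as_str()))
            .unwrap_or(false)
    })
}

fn main() {
    let dir = std::env::temp_dir().join("bb_convert_scan_sketch");
    std::fs::create_dir_all(&dir).unwrap();
    // transferName sometimes already contains ".mp3", producing names like this:
    std::fs::write(dir.join("Audio Message.caf.mp3"), b"x").unwrap();
    std::fs::write(dir.join("metadata.json"), b"{}").unwrap();
    let found = find_audio_file(&dir).expect("audio file present");
    assert!(found.to_string_lossy().ends_with(".mp3"));
    let _ = std::fs::remove_dir_all(&dir);
    println!("ok");
}
```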
Replace the incorrect `!escaped.contains('&')` assertion — HTML entity encoding introduces `&` characters (`&amp;`, `&lt;`, etc.), so this assertion always failed. Assert `contains("&amp;")` instead.
Replaces broken filesystem-based audio transcription with BB REST API download. Adds dual-backend transcription (local whisper CLI primary, Groq API fallback), inbound tapback surfacing via updated-message webhooks, and outbound reactions via add_reaction/remove_reaction on the Channel trait.
Key changes:
- download_attachment_bytes: 25 MB cap (Content-Length + body), 30 s HTTP timeout
- resolve_whisper_bin: OnceLock cache — the binary search runs once at startup
- parse_tapback_event: surfaces ❤️/👍/👎/😂/‼️/❓ as [Tapback] system messages
- add_reaction/remove_reaction: BB Private API /api/v1/message/react
- 58 tests, all green
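The `resolve_whisper_bin` OnceLock cache follows a standard pattern: the slow probe runs at most once per process. In this sketch the probe body is a stand-in for the real PATH/Homebrew search, and the returned path is illustrative:

```rust
use std::sync::OnceLock;

// Sketch of the OnceLock caching pattern: get_or_init runs the (potentially
// slow) probe exactly once; every later call returns the cached result.
static WHISPER_BIN: OnceLock<Option<String>> = OnceLock::new();

fn resolve_whisper_bin() -> Option<&'static str> {
    WHISPER_BIN
        .get_or_init(|| {
            // stand-in for probing PATH and Homebrew locations on disk
            Some("/opt/homebrew/bin/whisper-cli".to_string())
        })
        .as_deref()
}

fn main() {
    let first = resolve_whisper_bin();
    let second = resolve_whisper_bin(); // no re-probe: same cached value
    assert_eq!(first, second);
    assert_eq!(first, Some("/opt/homebrew/bin/whisper-cli"));
    println!("ok");
}
```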
fetch_with_http_provider used Policy::none() but returned Ok(redirected_url) instead of fetching the target. Replace response with the fetched redirect target and fall through to shared body processing.
Brings in upstream commits through 3726d82.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
# Conflicts:
#   .cargo/config.toml
#   .gitignore
#   Cargo.toml
#   src/channels/bluebubbles.rs
#   src/channels/mod.rs
#   src/config/mod.rs
#   src/config/schema.rs
#   src/gateway/mod.rs
…nc parse Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ng indicator (#14)
- Replace slow Python whisper (~30–90 s) with whisper-cli (whisper.cpp, Metal-accelerated, ~1–2 s) for local audio transcription.
- Pre-convert all non-WAV audio to 16 kHz mono WAV via ffmpeg; the brew build of whisper-cli only reliably reads WAV.
- Use OnceLock for binary/model path resolution to avoid repeated lookups.
- Start the typing indicator before transcription runs so users see feedback immediately rather than waiting for the full whisper phase.
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…k reactions, and whisper-cli fast path
Extends the BlueBubbles iMessage channel with the following capabilities on top of the existing base implementation:
- REST API audio download: fetches voice memo attachments directly from the BlueBubbles server via its REST API, enabling reliable access without filesystem access.
- Tapback reactions: parses and surfaces reaction events (heart, thumbs up, thumbs down, ha ha, !!, ?) so the agent can understand user sentiment and react appropriately.
- Rich text / attributed body: parses Apple's attributedBody format to extract plain text from styled iMessage content, improving message fidelity for formatted messages and inline attachments.
- Typing indicator before transcription: sends a typing indicator immediately when a voice memo is detected, giving users real-time feedback before the potentially slower transcription step.
- Local whisper-cli fast path (whisper.cpp): prefers the locally installed `whisper-cli` (from whisper.cpp, e.g. via `brew install whisper-cpp`) over the Groq API when available. On Apple Silicon with Metal acceleration this cuts transcription latency from 30–90 s to ~1–2 s. WAV files are passed directly; non-WAV audio is converted via ffmpeg before passing to whisper-cli. Falls back to the Groq API automatically when whisper-cli is not installed.
- Stderr diagnostics: surfaces whisper-cli stderr output in agent logs when transcription fails, making it easier to debug environment issues.
Files changed:
- src/channels/bluebubbles.rs: AudioAttachment struct, REST download, tapback parsing, rich text extraction, typing indicator wiring
- src/channels/transcription.rs: local whisper-cli fast path with ffmpeg pre-conversion and graceful Groq API fallback
- src/gateway/mod.rs: wire .with_transcription() into channel construction; call parse_webhook_payload_with_transcription (async)
- Redact the reqwest password URL from error messages with `.without_url()` in download_attachment_bytes, add_reaction, and remove_reaction
- Replace racy `as_nanos()` temp-file names with `uuid::Uuid::new_v4()` in bluebubbles.rs (audio download) and transcription.rs (ffmpeg/whisper)
- Gate the typing indicator on fromMe/allowlist checks before `start_typing()` to avoid leaking presence for ignored/disallowed senders
- Add `tokio::time::timeout(120s)` + `kill_on_drop(true)` for ffmpeg, whisper-cli, and python whisper subprocesses to bound execution time
- Add Intel Homebrew model paths (`/usr/local/opt/whisper-cpp/...`) to the MODELS array alongside the Apple Silicon paths
- Move the transcription+LLM loop into a `tokio::spawn` background task and return 202 ACCEPTED immediately to stay within the gateway timeout budget
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/gateway/mod.rs (1)
2336-2349: ⚠️ Potential issue | 🔴 Critical
Require an auth boundary when the BlueBubbles webhook secret is unset.
If `bluebubbles_webhook_secret` is not configured, this endpoint currently accepts unauthenticated requests that can still trigger background work. Gate non-loopback traffic at minimum (same pattern used in `/webhook`).
🔐 Suggested fix

```diff
 async fn handle_bluebubbles_webhook(
     State(state): State<AppState>,
+    ConnectInfo(peer_addr): ConnectInfo<SocketAddr>,
     headers: HeaderMap,
     body: Bytes,
 ) -> impl IntoResponse {
@@
+    if state.bluebubbles_webhook_secret.is_none()
+        && !is_loopback_request(Some(peer_addr), &headers, state.trust_forwarded_headers)
+    {
+        return (
+            StatusCode::UNAUTHORIZED,
+            Json(serde_json::json!({
+                "error": "Unauthorized — configure BlueBubbles webhook secret for non-local access"
+            })),
+        );
+    }
+
     // Verify Authorization: Bearer <webhook_secret> if configured
     if let Some(ref expected) = state.bluebubbles_webhook_secret {
```

As per coding guidelines:
`src/{gateway,security,tools,runtime}/**/*.rs`: Deny-by-default for access and exposure boundaries; keep network/filesystem/shell scope as narrow as possible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/mod.rs` around lines 2336 - 2349, The endpoint currently allows unauthenticated requests when state.bluebubbles_webhook_secret is None; instead implement a deny-by-default boundary: update the BlueBubbles webhook handler (the block referencing state.bluebubbles_webhook_secret, headers, and constant_time_eq) so that if bluebubbles_webhook_secret is None you still gate requests to only accept loopback/local traffic (use the same non-loopback check pattern used by the /webhook handler—inspect the request's remote address or forwarded header and reject non-loopback clients), returning UNAUTHORIZED/forbidden JSON for disallowed remote addresses; preserve the existing constant-time token check when a secret is configured.
♻️ Duplicate comments (2)
src/channels/bluebubbles.rs (1)
639-646: ⚠️ Potential issue | 🟠 Major
Enforce the attachment size cap before growing the buffer.
Line 644 appends chunk bytes first, then checks size at Line 645. A large chunk can overshoot memory limits before the guard trips.
🔧 Proposed fix
```diff
-        buf.extend_from_slice(&chunk);
-        if buf.len() > MAX_BYTES {
+        if buf.len().saturating_add(chunk.len()) > MAX_BYTES {
             anyhow::bail!("BB attachment too large (>{MAX_BYTES} bytes)");
         }
+        buf.extend_from_slice(&chunk);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/bluebubbles.rs` around lines 639 - 646, The loop that reads attachment chunks currently extends buf before checking size, risking oversized growth; change the logic in the resp.chunk().await loop to check if buf.len() + chunk.len() > MAX_BYTES and bail (e.g., via anyhow::bail!) before calling buf.extend_from_slice(&chunk), referencing the resp.chunk() call, the buf variable, and the MAX_BYTES constant so the guard runs prior to growing the buffer.
src/gateway/mod.rs (1)
2358-2375: ⚠️ Potential issue | 🟠 Major
Please add explicit tests for the 202-accepted and 503-exhausted BlueBubbles branches.
I don’t see coverage in this module for the new semaphore admission behavior (`ACCEPTED` vs `SERVICE_UNAVAILABLE`). This is a critical failure-mode boundary. I can draft two focused `tokio::test` cases for these branches if you want.
Based on learnings: Applies to `src/gateway/**/*.rs`: Include threat/risk notes and rollback strategy. Add/update tests or validation evidence for failure modes and boundaries. Keep observability useful but non-sensitive.
Also applies to: 2484-2487
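The two branches the reviewer wants covered form a simple admission boundary. The real handler gates work with `tokio::sync::Semaphore::try_acquire_owned`; this counter-based stand-in models the same 202-accepted / 503-exhausted decision:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counter-based stand-in for the semaphore gate: admit when a worker slot is
// free (202), reject when the pool is exhausted (503). `WorkerPool` and
// `try_admit` are illustrative names, not the gateway's actual symbols.
struct WorkerPool {
    in_flight: AtomicUsize,
    max: usize,
}

impl WorkerPool {
    fn try_admit(&self) -> u16 {
        let prev = self.in_flight.fetch_add(1, Ordering::SeqCst);
        if prev >= self.max {
            self.in_flight.fetch_sub(1, Ordering::SeqCst);
            503 // SERVICE_UNAVAILABLE: pool exhausted, retry later
        } else {
            202 // ACCEPTED: the background task now owns the slot
        }
    }

    fn release(&self) {
        self.in_flight.fetch_sub(1, Ordering::SeqCst);
    }
}

fn main() {
    let pool = WorkerPool { in_flight: AtomicUsize::new(0), max: 1 };
    assert_eq!(pool.try_admit(), 202); // slot acquired
    assert_eq!(pool.try_admit(), 503); // pool exhausted
    pool.release();
    assert_eq!(pool.try_admit(), 202); // slot free again
    println!("ok");
}
```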
🧹 Nitpick comments (3)
docs/config-reference.md (1)
1233-1254: Localized docs sync needed for the new BlueBubbles config section.
This PR adds the `[channels_config.bluebubbles]` section with new runtime-contract content (dm_policy, group_policy, group_allow_from, send_read_receipts). Per coding guidelines, localized docs for supported locales (en, zh-CN, ja, ru, fr, vi, el) should be updated in the same PR, or an explicit deferral note with a follow-up PR reference should be added. Consider adding a deferral note at the top of this section indicating which locale files need updating.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/config-reference.md` around lines 1233 - 1254, The new configuration section [channels_config.bluebubbles] introduces runtime-contract keys (dm_policy, group_policy, group_allow_from, send_read_receipts) but localized docs were not updated; add or update the corresponding localized files for supported locales (en, zh-CN, ja, ru, fr, vi, el) to include this section and these keys, or insert a deferral note at the top of the [channels_config.bluebubbles] section listing the exact locale files that will be updated in a follow-up PR; reference the section name and the specific keys (dm_policy, group_policy, group_allow_from, send_read_receipts) in the note so translators know what to update.src/channels/transcription.rs (2)
298-328: Consider making whisper binary paths configurable.
The hardcoded paths (`/opt/homebrew/bin/whisper-cli`, `/usr/local/bin/whisper-cli`, etc.) work well for standard Homebrew installations but may miss non-standard installations. A future enhancement could allow users to specify custom paths via config.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/transcription.rs` around lines 298 - 328, resolve_whisper_cpp currently uses hardcoded BINS and MODELS arrays which miss non-standard installs; change resolve_whisper_cpp to first read user-provided paths (e.g., from environment variables like WHISPER_CLI_PATHS and WHISPER_MODEL_PATHS or a runtime config struct) and parse them into lists, then fall back to the existing hardcoded BINS and MODELS if no user values are provided; update the CACHE initialization to use the combined list when searching for an existing binary and model, and keep the same Some((bin, model)) return shape so callers of resolve_whisper_cpp remain unchanged.
201-206: Minor: Mixed async/sync cleanup in the read_to_string error path.
Line 203 uses synchronous `std::fs::remove_dir_all` inside an async function, while other cleanup paths use `tokio::fs::remove_dir_all`. This works but is inconsistent.
♻️ Optional consistency fix

```diff
 let txt = tokio::fs::read_to_string(&txt_path).await.map_err(|e| {
-    let _ = std::fs::remove_dir_all(&out_dir);
+    // Note: Using blocking cleanup in error path; async cleanup follows in happy path.
     anyhow::anyhow!("Failed to read whisper-cli transcript output: {e}")
 })?;
+// Cleanup moved outside the error mapping for consistency:
+let _ = tokio::fs::remove_dir_all(&out_dir).await;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/transcription.rs` around lines 201 - 206, The error path that reads the whisper transcript uses synchronous std::fs::remove_dir_all while the surrounding async function uses tokio fs calls, causing inconsistency; update the error handler for tokio::fs::read_to_string in function/variable context (txt_path, txt, read_to_string, out_dir) to perform async cleanup using tokio::fs::remove_dir_all(&out_dir).await (or spawn_blocking if you intentionally need a sync call) so cleanup is consistent with the other paths and avoids blocking the async runtime.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/channels/bluebubbles.rs`:
- Around line 758-765: The code currently falls back to DM checks when chat GUID
is missing; change this to fail closed by treating unknown/empty chat GUID as
denied or returning an explicit error instead of defaulting to DM. In the block
using Self::extract_chat_guid(...), check the Option/empty result and if
None/empty, call an explicit deny path (e.g., return Err/bail! or set allowed =
false) rather than calling Self::is_dm_allowed; update the same pattern where
extract_chat_guid/is_group_chat/is_group_allowed/is_dm_allowed are used (the
other occurrences around the noted regions) so both tapback and
non-transcription parsing paths behave identically and never broaden permissions
when chat identity is ambiguous.
- Around line 676-681: tmp_path.to_str() can fail and currently returns early,
skipping tokio::fs::remove_file and leaking the temp file; change the flow to
first capture to_str() into an Option<String> (e.g., let path_str =
tmp_path.to_str().map(|s| s.to_owned())), then attempt the transcribe call only
if Some(s) and otherwise create the anyhow error, but do not return
immediately—store the Result from transcribe (or the Err) in a variable, call
tokio::fs::remove_file(&tmp_path).await.ok() to always run cleanup, then return
the stored Result; reference symbols: tmp_path, transcribe_audio_local,
tokio::fs::remove_file.
In `@src/gateway/mod.rs`:
- Around line 2377-2379: The inline comment above the Arc::clone(bluebubbles)
(near variable bb) is stale—replace the phrase "Returning 200" with "Returning
202" and adjust the wording to reflect that the handler returns HTTP 202
(Accepted) to keep the HTTP lifecycle within the timeout budget while the
background task handles transcription, LLM inference, and reply; update only the
comment text near let bb = Arc::clone(bluebubbles) to avoid changing logic.
---
Outside diff comments:
In `@src/gateway/mod.rs`:
- Around line 2336-2349: The endpoint currently allows unauthenticated requests
when state.bluebubbles_webhook_secret is None; instead implement a
deny-by-default boundary: update the BlueBubbles webhook handler (the block
referencing state.bluebubbles_webhook_secret, headers, and constant_time_eq) so
that if bluebubbles_webhook_secret is None you still gate requests to only
accept loopback/local traffic (use the same non-loopback check pattern used by
the /webhook handler—inspect the request's remote address or forwarded header
and reject non-loopback clients), returning UNAUTHORIZED/forbidden JSON for
disallowed remote addresses; preserve the existing constant-time token check
when a secret is configured.
---
Duplicate comments:
In `@src/channels/bluebubbles.rs`:
- Around line 639-646: The loop that reads attachment chunks currently extends
buf before checking size, risking oversized growth; change the logic in the
resp.chunk().await loop to check if buf.len() + chunk.len() > MAX_BYTES and bail
(e.g., via anyhow::bail!) before calling buf.extend_from_slice(&chunk),
referencing the resp.chunk() call, the buf variable, and the MAX_BYTES constant
so the guard runs prior to growing the buffer.
---
Nitpick comments:
In `@docs/config-reference.md`:
- Around line 1233-1254: The new configuration section
[channels_config.bluebubbles] introduces runtime-contract keys (dm_policy,
group_policy, group_allow_from, send_read_receipts) but localized docs were not
updated; add or update the corresponding localized files for supported locales
(en, zh-CN, ja, ru, fr, vi, el) to include this section and these keys, or
insert a deferral note at the top of the [channels_config.bluebubbles] section
listing the exact locale files that will be updated in a follow-up PR; reference
the section name and the specific keys (dm_policy, group_policy,
group_allow_from, send_read_receipts) in the note so translators know what to
update.
In `@src/channels/transcription.rs`:
- Around line 298-328: resolve_whisper_cpp currently uses hardcoded BINS and
MODELS arrays which miss non-standard installs; change resolve_whisper_cpp to
first read user-provided paths (e.g., from environment variables like
WHISPER_CLI_PATHS and WHISPER_MODEL_PATHS or a runtime config struct) and parse
them into lists, then fall back to the existing hardcoded BINS and MODELS if no
user values are provided; update the CACHE initialization to use the combined
list when searching for an existing binary and model, and keep the same
Some((bin, model)) return shape so callers of resolve_whisper_cpp remain
unchanged.
- Around line 201-206: The error path that reads the whisper transcript uses
synchronous std::fs::remove_dir_all while the surrounding async function uses
tokio fs calls, causing inconsistency; update the error handler for
tokio::fs::read_to_string in function/variable context (txt_path, txt,
read_to_string, out_dir) to perform async cleanup using
tokio::fs::remove_dir_all(&out_dir).await (or spawn_blocking if you
intentionally need a sync call) so cleanup is consistent with the other paths
and avoids blocking the async runtime.
ℹ️ Review info
Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0296f114-c36c-4eba-a7e9-949be9c85379
📒 Files selected for processing (10)
docs/channels-reference.mddocs/config-reference.mddocs/i18n/vi/channels-reference.mdsrc/channels/bluebubbles.rssrc/channels/transcription.rssrc/config/schema.rssrc/gateway/mod.rssrc/gateway/oauth.rssrc/tools/bluebubbles_group.rssrc/tools/mod.rs
✅ Files skipped from review due to trivial changes (1)
- docs/i18n/vi/channels-reference.md
🚧 Files skipped from review as they are similar to previous changes (3)
- src/tools/mod.rs
- src/config/schema.rs
- src/gateway/oauth.rs
…oclaw-labs#2532
- Fix stale comment 'Returning 200' → 'Returning 202' in the BB webhook handler (the status line already returns 202; the comment was lagging)
- Fail closed when the chat GUID is absent/empty in the group management path: return an empty vec instead of falling through to DM policy with unknown context — prevents misrouted policy decisions
- Clean up the temp audio file before propagating a to_str() failure to avoid leaking the temp file when the path contains non-UTF-8 characters
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/gateway/mod.rs (1)
2366-2374: ⚠️ Potential issue | 🟠 Major
Add direct tests for the new BlueBubbles 503/202 boundary behavior.
Line 2366 now gates work with `try_acquire_owned()` and Line 2485 returns `202 Accepted`, but this path still lacks focused tests in the in-file suite for (a) pool exhausted → 503 and (b) slot acquired → 202.
Based on learnings: Applies to `src/gateway/**/*.rs` — include threat/risk notes and rollback strategy, and add/update tests or validation evidence for failure modes and boundaries.
Also applies to: 2484-2487
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/mod.rs` around lines 2366 - 2374, Add focused unit/integration tests around the BlueBubbles webhook handler that exercises the BB_WORKER_SEMAPHORE gate: one test should simulate semaphore exhaustion (make Arc::clone(&*BB_WORKER_SEMAPHORE).try_acquire_owned() fail) and assert the handler returns StatusCode::SERVICE_UNAVAILABLE with the JSON {"error":"worker pool exhausted, retry later"}; another test should simulate a successful permit acquisition and assert the handler returns 202 Accepted (the existing success path). Locate the semaphore symbol BB_WORKER_SEMAPHORE and the try_acquire_owned call in the handler to hook/patch behavior; include short test comments documenting threat/risk notes (e.g., DoS via pool saturation) and a rollback strategy (revert to rejecting new work or increasing pool size) so test evidence and mitigation are captured alongside the assertions.
src/channels/bluebubbles.rs (1)
897-906: ⚠️ Potential issue | 🟠 Major
Fail closed on missing `chat_guid` in all policy paths (not only transcription).
Lines 897-900 and 1017-1019 still fall back to `sender` when chat identity is missing, which can route ambiguous group events through DM policy (`is_dm_allowed`) and weaken access constraints. This is inconsistent with the fail-closed behavior already implemented at lines 765-768 in the transcription path.
🔧 Proposed fix
```diff
@@
-    let reply_target = Self::extract_chat_guid(data)
-        .filter(|g| !g.is_empty())
-        .unwrap_or_else(|| sender.clone());
+    let Some(reply_target) = Self::extract_chat_guid(data).filter(|g| !g.is_empty()) else {
+        tracing::debug!("BlueBubbles: missing chat guid for tapback; denying ambiguous event");
+        return vec![];
+    };
@@
-    let reply_target = Self::extract_chat_guid(data)
-        .filter(|g| !g.is_empty())
-        .unwrap_or_else(|| sender.clone());
+    let Some(reply_target) = Self::extract_chat_guid(data).filter(|g| !g.is_empty()) else {
+        tracing::debug!("BlueBubbles: missing chat guid; denying ambiguous message");
+        return messages;
+    };
```

As per coding guidelines: "Prefer explicit `bail!`/errors for unsupported or unsafe states; never silently broaden permissions/capabilities."
Also applies to: 1017-1030
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/bluebubbles.rs` around lines 897 - 906, The code currently falls back to sender when chat identity is missing (reply_target = Self::extract_chat_guid(...).unwrap_or_else(|| sender.clone())), which can cause group events to be evaluated under DM policy; instead fail closed by returning an error or bailing when extract_chat_guid yields None/empty. Replace the unwrap_or_else fallback in the tapback handling (and the similar block around lines 1017-1030) so that if Self::extract_chat_guid(data) returns None or empty you explicitly bail/return an error (do not substitute sender), and then use the validated chat_guid with Self::is_group_chat, Self::is_group_allowed, and Self::is_dm_allowed to decide policy.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/gateway/mod.rs`:
- Around line 562-591: The current initialization of bluebubbles_channel
swallows errors by calling .ok().flatten(), so failures from
BlueBubblesChannel::new(...) become silently treated as "not configured"; remove
the .ok().flatten() fallback and propagate the error instead (e.g. after
.transpose().map_err(|e| { tracing::error!(...); e }) use the ? operator to
return the error), preserving the existing chaining of
BlueBubblesChannel::new(...).map(...).with_policies(...).with_transcription(...)
so misconfiguration surfaces instead of being dropped.
---
Duplicate comments:
In `@src/channels/bluebubbles.rs`:
- Around line 897-906: The code currently falls back to sender when chat
identity is missing (reply_target =
Self::extract_chat_guid(...).unwrap_or_else(|| sender.clone())), which can cause
group events to be evaluated under DM policy; instead fail closed by returning
an error or bailing when extract_chat_guid yields None/empty. Replace the
unwrap_or_else fallback in the tapback handling (and the similar block around
lines 1017-1030) so that if Self::extract_chat_guid(data) returns None or empty
you explicitly bail/return an error (do not substitute sender), and then use the
validated chat_guid with Self::is_group_chat, Self::is_group_allowed, and
Self::is_dm_allowed to decide policy.
In `@src/gateway/mod.rs`:
- Around line 2366-2374: Add focused unit/integration tests around the
BlueBubbles webhook handler that exercises the BB_WORKER_SEMAPHORE gate: one
test should simulate semaphore exhaustion (make
Arc::clone(&*BB_WORKER_SEMAPHORE).try_acquire_owned() fail) and assert the
handler returns StatusCode::SERVICE_UNAVAILABLE with the JSON {"error":"worker
pool exhausted, retry later"}; another test should simulate a successful permit
acquisition and assert the handler returns 202 Accepted (the existing success
path). Locate the semaphore symbol BB_WORKER_SEMAPHORE and the try_acquire_owned
call in the handler to hook/patch behavior; include short test comments
documenting threat/risk notes (e.g., DoS via pool saturation) and a rollback
strategy (revert to rejecting new work or increasing pool size) so test evidence
and mitigation are captured alongside the assertions.
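The exhaustion behaviour those tests should pin down can be illustrated standalone. This sketch models the permit gate with a plain `AtomicUsize` instead of tokio's `Semaphore::try_acquire_owned` (an assumption made so it runs without the gateway); `WorkerGate`, `POOL_SIZE`, and the bare numeric status codes are hypothetical stand-ins for the real handler's types:

```rust
// Sketch only: the real handler uses a tokio Semaphore (BB_WORKER_SEMAPHORE);
// an AtomicUsize permit counter reproduces the same admit/reject decision.
use std::sync::atomic::{AtomicUsize, Ordering};

const POOL_SIZE: usize = 4; // hypothetical worker-pool size

struct WorkerGate {
    available: AtomicUsize,
}

impl WorkerGate {
    fn new(permits: usize) -> Self {
        Self { available: AtomicUsize::new(permits) }
    }

    /// Returns 202 when a permit is available, 503 when the pool is drained.
    fn admit(&self) -> u16 {
        let mut cur = self.available.load(Ordering::Acquire);
        loop {
            if cur == 0 {
                // Pool exhausted: shed load instead of queuing unbounded work.
                return 503;
            }
            match self.available.compare_exchange(
                cur, cur - 1, Ordering::AcqRel, Ordering::Acquire,
            ) {
                Ok(_) => return 202,
                Err(actual) => cur = actual, // lost the race; retry with fresh value
            }
        }
    }

    fn release(&self) {
        self.available.fetch_add(1, Ordering::Release);
    }
}

fn main() {
    let gate = WorkerGate::new(1);
    assert_eq!(gate.admit(), 202); // slot available
    assert_eq!(gate.admit(), 503); // drained: reject, do not queue
    gate.release();
    assert_eq!(gate.admit(), 202); // permit returned
    let _ = POOL_SIZE;
    println!("ok");
}
```

The two asserts mirror the two tests the prompt asks for: a drained pool maps to 503, a free slot maps to 202.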
ℹ️ Review info
Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fd271716-819a-4612-8e55-03e10fbe2ea8
📒 Files selected for processing (2)
src/channels/bluebubbles.rs, src/gateway/mod.rs
When ffmpeg conversion times out (kill_on_drop terminates the process), remove the partial output file before propagating the error to prevent disk artifact accumulation in long-running processes.
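A minimal sketch of that cleanup, assuming a hypothetical `cleanup_partial_output` helper called from the timeout error path (the real ffmpeg invocation and output paths are not shown here):

```rust
// Sketch: best-effort removal of a partial output file after a killed
// conversion. `cleanup_partial_output` is a hypothetical helper name.
use std::fs;
use std::path::Path;

fn cleanup_partial_output(out_path: &Path) {
    // The file may never have been created if the process died early,
    // so NotFound is tolerated silently; other errors are only logged.
    if let Err(e) = fs::remove_file(out_path) {
        if e.kind() != std::io::ErrorKind::NotFound {
            eprintln!("failed to remove partial ffmpeg output: {e}");
        }
    }
}

fn main() {
    let tmp = std::env::temp_dir().join("bb_partial_demo.wav");
    fs::write(&tmp, b"partial").unwrap();
    cleanup_partial_output(&tmp);
    assert!(!tmp.exists(), "partial output should be gone after timeout cleanup");
    // Calling it again on a missing file is also fine (NotFound is ignored).
    cleanup_partial_output(&tmp);
    println!("ok");
}
```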
@coderabbitai review

✅ Actions performed: Review triggered.
… paths; add semaphore tests

- `parse_tapback_event`: fail closed when chat GUID is absent — cannot determine DM vs group for tapback policy without it (was: fall back to sender address, treating unknown as DM)
- `parse_webhook_payload`: same fail-closed behaviour for consistency with `parse_webhook_payload_with_transcription` (which already failed closed since the transcription PR)
- Rename test `bluebubbles_parse_fallback_reply_target_when_no_chats` → `bluebubbles_parse_missing_guid_returns_no_messages` to reflect the updated (deny) contract
- Add `bluebubbles_webhook_pool_exhausted_returns_503`: verifies that when `BB_WORKER_SEMAPHORE` is drained the handler returns 503 instead of queuing unbounded work
- Add `bluebubbles_webhook_accepted_returns_202`: verifies the normal 202-accepted path when a concurrency slot is available

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/channels/bluebubbles.rs (1)
639-646: ⚠️ Potential issue | 🟠 Major — Enforce the attachment size cap before appending each chunk.
At line 644, the chunk is appended before the cap check at line 645. A single oversized chunk can still cause an over-limit allocation before bailing.
As per coding guidelines `src/**/*.rs`: "Prefer explicit `bail!`/errors for unsupported or unsafe states; never silently broaden permissions/capabilities; document fallback behavior when fallback is intentional and safe."

Proposed fix
```diff
 while let Some(chunk) = resp
     .chunk()
     .await
     .map_err(|e| anyhow::anyhow!("BB attachment body read failed: {}", e.without_url()))?
 {
-    buf.extend_from_slice(&chunk);
-    if buf.len() > MAX_BYTES {
+    if buf.len().saturating_add(chunk.len()) > MAX_BYTES {
         anyhow::bail!("BB attachment too large (>{MAX_BYTES} bytes)");
     }
+    buf.extend_from_slice(&chunk);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/bluebubbles.rs` around lines 639 - 646, The loop that reads chunks from resp (the while let Some(chunk) = resp.chunk().await ... block) currently extends buf with chunk before checking size, allowing a single oversized chunk to allocate beyond MAX_BYTES; change the logic to check chunk.len() (or buf.len() + chunk.len()) against MAX_BYTES before calling buf.extend_from_slice(&chunk) and bail with an explicit error (e.g., using anyhow::bail!) if adding the chunk would exceed MAX_BYTES to enforce the cap atomically.

src/gateway/mod.rs (1)
562-591: ⚠️ Potential issue | 🟠 Major — Configured BlueBubbles init failures are still downgraded to "not configured."
Lines 590-591 (`.ok().flatten()`) convert a real initialization failure into `None`, so a misconfigured BlueBubbles channel can boot without hard failure.

Suggested fail-fast adjustment

```diff
-let bluebubbles_channel: Option<Arc<BlueBubblesChannel>> = config
+let bluebubbles_channel: Option<Arc<BlueBubblesChannel>> = config
     .channels_config
     .bluebubbles
     .as_ref()
     .map(|bb| {
         BlueBubblesChannel::new(
             bb.server_url.clone(),
             bb.password.clone(),
             bb.allowed_senders.clone(),
             bb.ignore_senders.clone(),
         )
         .map(|ch| {
             Arc::new(
                 ch.with_policies(
                     bb.dm_policy,
                     bb.group_policy,
                     bb.group_allow_from.clone(),
                     bb.send_read_receipts,
                 )
                 .with_transcription(config.transcription.clone()),
             )
         })
     })
     .transpose()
     .map_err(|e| {
         tracing::error!("Failed to initialize BlueBubbles channel: {e}");
-        e
-    })
-    .ok()
-    .flatten();
+        e
+    })?;
```

As per coding guidelines, prefer explicit `bail!`/errors for unsupported or unsafe states and keep error paths obvious/localized.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/mod.rs` around lines 562 - 591, The current chain for bluebubbles_channel swallows initialization errors by calling .transpose().map_err(...).ok().flatten(), turning real failures into None; instead fail-fast by propagating the error. Replace the final .ok().flatten() with the ? operator so the result of .transpose().map_err(|e| { tracing::error!("Failed to initialize BlueBubbles channel: {e}"); e })? yields an Option<Arc<BlueBubblesChannel>> (i.e., change the end of the chain to .transpose().map_err(...)?), and adjust the enclosing function signature to return Result if it doesn't already so initialization failures from BlueBubblesChannel::new (and subsequent with_policies/with_transcription) are not silently downgraded.
🧹 Nitpick comments (1)
src/channels/transcription.rs (1)
477-481: No-op test can be made more meaningful.
Line 480 currently only checks "does not panic," which provides little regression signal. Consider asserting idempotence of the cached resolver result instead.
Based on learnings: Applies to `src/**/*.rs` — "Prefer reproducible commands and locked dependency behavior in CI-sensitive paths; keep tests deterministic (no flaky timing/network dependence without guardrails) (Determinism + Reproducibility)."

Proposed improvement

```diff
 #[test]
-fn resolve_whisper_bin_returns_str_or_none() {
-    // Just assert the function doesn't panic; result depends on local install.
-    let _ = resolve_whisper_bin();
+fn resolve_whisper_bin_is_idempotent_with_cache() {
+    let first = resolve_whisper_bin();
+    let second = resolve_whisper_bin();
+    assert_eq!(first, second, "resolver result should be stable after caching");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/transcription.rs` around lines 477 - 481, The test resolve_whisper_bin_returns_str_or_none should assert idempotence of the resolver instead of only checking for panic: call resolve_whisper_bin() twice and assert the two results are equal (i.e., if first call returns Some(path) or None, the second call must return the same Option<String>), so update the test to capture the first result, call resolve_whisper_bin() again, and compare for equality to detect regressions in caching or nondeterministic behavior while still remaining agnostic to whether Whisper is installed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/channels/bluebubbles.rs`:
- Around line 1503-1521: In add_reaction and remove_reaction, validate that
channel_id and message_id are non-empty before building/sending the request
(e.g., in the async fn add_reaction and async fn remove_reaction); if either is
empty, return early with anyhow::bail! including a clear message like "empty
channel_id" or "empty message_id" so the error is local and descriptive; place
these checks immediately after the function parameters (before calling
emoji_to_bb_tapback or constructing the request body) to prevent sending invalid
requests.
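A standalone sketch of that guard, with a hypothetical `validate_reaction_ids` helper and a plain `Result<(), String>` standing in for the real async functions and `anyhow::bail!`:

```rust
// Sketch: the real add_reaction/remove_reaction are async and use anyhow;
// this simplified helper shows only the fail-fast identifier checks.
fn validate_reaction_ids(channel_id: &str, message_id: &str) -> Result<(), String> {
    // Reject before any request is built, so the error is local and descriptive.
    if channel_id.trim().is_empty() {
        return Err("empty channel_id".into());
    }
    if message_id.trim().is_empty() {
        return Err("empty message_id".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_reaction_ids("iMessage;-;chat123", "guid-1").is_ok());
    assert_eq!(
        validate_reaction_ids("", "guid-1").unwrap_err(),
        "empty channel_id"
    );
    // Whitespace-only identifiers are treated the same as empty ones.
    assert_eq!(
        validate_reaction_ids("iMessage;-;chat123", "  ").unwrap_err(),
        "empty message_id"
    );
    println!("ok");
}
```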
---
Duplicate comments:
In `@src/channels/bluebubbles.rs`:
- Around line 639-646: The loop that reads chunks from resp (the while let
Some(chunk) = resp.chunk().await ... block) currently extends buf with chunk
before checking size, allowing a single oversized chunk to allocate beyond
MAX_BYTES; change the logic to check chunk.len() (or buf.len() + chunk.len())
against MAX_BYTES before calling buf.extend_from_slice(&chunk) and bail with an
explicit error (e.g., using anyhow::bail!) if adding the chunk would exceed
MAX_BYTES to enforce the cap atomically.
In `@src/gateway/mod.rs`:
- Around line 562-591: The current chain for bluebubbles_channel swallows
initialization errors by calling .transpose().map_err(...).ok().flatten(),
turning real failures into None; instead fail-fast by propagating the error.
Replace the final .ok().flatten() with the ? operator so the result of
.transpose().map_err(|e| { tracing::error!("Failed to initialize BlueBubbles
channel: {e}"); e })? yields an Option<Arc<BlueBubblesChannel>> (i.e., change
the end of the chain to .transpose().map_err(...)?), and adjust the enclosing
function signature to return Result if it doesn't already so initialization
failures from BlueBubblesChannel::new (and subsequent
with_policies/with_transcription) are not silently downgraded.
---
Nitpick comments:
In `@src/channels/transcription.rs`:
- Around line 477-481: The test resolve_whisper_bin_returns_str_or_none should
assert idempotence of the resolver instead of only checking for panic: call
resolve_whisper_bin() twice and assert the two results are equal (i.e., if first
call returns Some(path) or None, the second call must return the same
Option<String>), so update the test to capture the first result, call
resolve_whisper_bin() again, and compare for equality to detect regressions in
caching or nondeterministic behavior while still remaining agnostic to whether
Whisper is installed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ba60e587-34f5-484d-8e62-5115356c18b0
📒 Files selected for processing (3)
src/channels/bluebubbles.rs, src/channels/transcription.rs, src/gateway/mod.rs
… fixes

- Add `pub mod oauth;` to gateway/mod.rs and register /auth/* routes (oauth.rs was compiled but never wired into the router)
- Add OAuthConfig/GoogleOAuthConfig to config schema for [oauth.google] client_id/client_secret
- Remove redundant early-return bypass in auth_check: always delegate to PairingGuard.is_authenticated (returns true when pairing disabled)
- Fix clippy::ptr_arg (&PathBuf -> &std::path::Path on helper fns)
- Remove unused HashMap import and unnecessary OpenOptionsExt imports
- Fix Ok(_) -> Ok(()) in remove_file match arm

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (4)
src/gateway/mod.rs (1)
563-592: ⚠️ Potential issue | 🟠 Major — Fail fast on BlueBubbles init failures instead of silently disabling the channel.
Lines 591-592 currently convert initialization errors into `None` via `.ok().flatten()`. That masks misconfiguration and boots without BlueBubbles even when configured. Propagate the error so startup fails loudly and predictably.

Suggested fix

```diff
-let bluebubbles_channel: Option<Arc<BlueBubblesChannel>> = config
+let bluebubbles_channel: Option<Arc<BlueBubblesChannel>> = config
     .channels_config
     .bluebubbles
     .as_ref()
     .map(|bb| {
         BlueBubblesChannel::new(
             bb.server_url.clone(),
             bb.password.clone(),
             bb.allowed_senders.clone(),
             bb.ignore_senders.clone(),
         )
         .map(|ch| {
             Arc::new(
                 ch.with_policies(
                     bb.dm_policy,
                     bb.group_policy,
                     bb.group_allow_from.clone(),
                     bb.send_read_receipts,
                 )
                 .with_transcription(config.transcription.clone()),
             )
         })
     })
     .transpose()
     .map_err(|e| {
         tracing::error!("Failed to initialize BlueBubbles channel: {e}");
         e
-    })
-    .ok()
-    .flatten();
+    })?;
```

As per coding guidelines, prefer explicit `bail!`/errors for unsupported or unsafe states and keep error paths obvious and localized.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/mod.rs` around lines 563 - 592, The BlueBubbles initialization currently swallows errors by calling .ok().flatten(), which hides misconfiguration; change the logic around the bluebubbles_channel creation (the BlueBubblesChannel::new(...) chain that calls .map(|ch| Arc::new(ch.with_policies(...).with_transcription(config.transcription.clone()))) and the .transpose().map_err(...) sequence so that initialization errors are propagated instead of converted to None—remove the final .ok().flatten() and return or propagate the error (e.g., use the ? operator or bail!/Err from the surrounding function) so startup fails when channels_config.bluebubbles is present but initialization fails.

src/gateway/oauth.rs (3)
219-220: ⚠️ Potential issue | 🔴 Critical — Use per-session PKCE storage to prevent concurrent OAuth flow collisions.
Line 219 writes every start flow to the same `google_pkce.txt`, and line 313 reads that single shared file. Concurrent `/auth/google` starts can overwrite each other and cause false state/verifier mismatches.

🔧 Proposed fix (session-scoped PKCE file)

```diff
-    let pkce_file = pkce_path(&dir, "google");
+    let pkce_file = pkce_path(&dir, &format!("google_{state_token}"));
     let pkce_payload = format!("{state_token}\n{verifier}");
```

```diff
-    let pkce_file = pkce_path(&dir, "google");
+    let callback_state = match query.state.as_deref() {
+        Some(s)
+            if !s.is_empty()
+                && s.chars()
+                    .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_') => s,
+        _ => {
+            return (
+                StatusCode::BAD_REQUEST,
+                Html(error_page("OAuth Error", "Missing or invalid state")),
+            )
+                .into_response()
+        }
+    };
+    let pkce_file = pkce_path(&dir, &format!("google_{callback_state}"));
```

Based on learnings: Security-critical surfaces (`src/gateway/`, `src/security/`, `src/tools/`, `src/runtime/`) carry high blast radius and should maintain secure-by-default design with pairing, bind safety, limits, and secret handling.

Also applies to: 307-314
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/oauth.rs` around lines 219 - 220, The current code writes all PKCE pairs to a shared file ("google_pkce.txt") causing race conditions; change the storage to a per-session PKCE file by incorporating the unique state_token (or a generated session id) into the filename returned by pkce_path so pkce_file becomes session-scoped (e.g., pkce_path(&dir, &format!("google_{}", state_token))); write pkce_payload (state_token + verifier) to that session file when initiating the flow and update the corresponding read logic (the OAuth callback that currently reads the shared file) to read from the same session-specific pkce_path using the incoming state_token; ensure any pkce_path usages in start flow and callback handlers (variables pkce_file, pkce_payload, and the code that reads/verifies verifier) are updated consistently and securely cleaned up after use.
702-735: ⚠️ Potential issue | 🟠 Major — Add handler-level boundary/failure tests for OAuth routes.
Lines 706-735 validate helper functions only. The gateway handlers still need coverage for auth gating, callback state mismatch rejection, and revoke allowlist behavior.
I can draft these three handler-level tests directly in this module if you want.
Based on learnings: Applies to `src/gateway/**/*.rs`: Include threat/risk notes and rollback strategy. Add/update tests or validation evidence for failure modes and boundaries. Keep observability useful but non-sensitive.
128-139: ⚠️ Potential issue | 🟠 Major — `auth_check` still weakens access control when pairing is disabled.
Line 130 explicitly states disabled pairing accepts any token. That makes OAuth management endpoints effectively unauthenticated in that mode; keep this deny-by-default (or explicitly constrain to a loopback-only fallback).
As per coding guidelines `src/{gateway,security,tools,runtime}/**/*.rs`: "Deny-by-default for access and exposure boundaries; never log secrets, raw tokens, or sensitive payloads; keep network/filesystem/shell scope as narrow as possible."
Verify each finding against the current code and only fix it if needed. In `@src/gateway/oauth.rs` around lines 128 - 139, auth_check currently lets pairing-disabled mode accept any token; change it to deny-by-default by returning false when pairing is disabled except for an explicit loopback-only fallback. Update fn auth_check(state: &AppState, headers: &HeaderMap) -> bool to first check a pairing-enabled flag (e.g. state.pairing.is_enabled() or state.pairing.enabled()) and if true call state.pairing.is_authenticated(token) as before; if false, only return true when the request originates from loopback (add a remote_addr parameter like &SocketAddr to auth_check or add a helper on AppState/Pairing to validate loopback and use that instead); otherwise return false. Do not log or expose raw tokens while making these changes and reference auth_check and state.pairing.is_authenticated/is_enabled in the implementation.
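The deny-by-default decision table that prompt describes can be sketched in isolation. `pairing_enabled` and `token_ok` are hypothetical stand-ins for `state.pairing.is_enabled()` / `is_authenticated()`; only the policy logic is shown, not the real handler plumbing:

```rust
// Sketch: auth decision with pairing enabled/disabled and a loopback-only
// fallback. Parameter names are stand-ins for the real AppState/Pairing API.
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

fn auth_check(pairing_enabled: bool, token_ok: bool, remote: SocketAddr) -> bool {
    if pairing_enabled {
        // Normal path: the pairing guard decides.
        token_ok
    } else {
        // Deny-by-default: with pairing off, only loopback callers are trusted.
        remote.ip().is_loopback()
    }
}

fn main() {
    let local = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 4000);
    let remote = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(203, 0, 113, 9)), 4000);

    assert!(auth_check(true, true, remote));   // valid token passes regardless of origin
    assert!(!auth_check(true, false, local));  // bad token fails even on loopback
    assert!(auth_check(false, false, local));  // pairing off: loopback fallback only
    assert!(!auth_check(false, true, remote)); // pairing off + non-loopback: denied
    println!("ok");
}
```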
🧹 Nitpick comments (2)
src/config/schema.rs (1)
14351-14395: Add a regression test for whitespace-only allowlist entries.
Validation now guards blank-only entries, but tests here only assert `Vec::is_empty()` cases. Add `[" "]` cases for both DM and group allowlist modes to lock in behavior.
As per coding guidelines: "Prefer reproducible commands and locked dependency behavior in CI-sensitive paths; keep tests deterministic."
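The whitespace-only case can be exercised against a stand-in validator. `validate_allowlist` here is a hypothetical helper; the real check lives in `Config::validate()` in src/config/schema.rs:

```rust
// Sketch: allowlist-mode validation that rejects lists with no usable entries.
// Empty lists and whitespace-only lists fail the same path.
fn validate_allowlist(policy: &str, entries: &[String]) -> Result<(), String> {
    // `all` on an empty iterator is true, so an empty list is also rejected.
    if entries.iter().all(|e| e.trim().is_empty()) {
        return Err(format!("{policy}: allowlist has no usable entries"));
    }
    Ok(())
}

fn main() {
    assert!(validate_allowlist("dm_policy", &["user@example.com".into()]).is_ok());
    // Empty list and whitespace-only list both fail validation.
    assert!(validate_allowlist("dm_policy", &[]).is_err());
    assert!(validate_allowlist("group_policy", &[" ".into()]).is_err());
    println!("ok");
}
```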
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/schema.rs` around lines 14351 - 14395, Update the two existing tests validation_rejects_bluebubbles_dm_allowlist_with_empty_senders and validation_rejects_bluebubbles_group_allowlist_with_empty_list to also cover whitespace-only entries by setting BlueBubblesConfig.allowed_senders = vec![" ".to_string()] for the DM allowlist case and BlueBubblesConfig.group_allow_from = vec![" ".to_string()] for the group allowlist case, keep dm_policy = BlueBubblesDmPolicy::Allowlist and group_policy = BlueBubblesGroupPolicy::Allowlist respectively, call config.validate() expecting an error, and assert the error string still mentions "dm_policy" or "group_policy" as in the originals so the validation rejects blank-only entries deterministically.

src/gateway/oauth.rs (1)
474-487: Keep best-effort email lookup observable (without leaking sensitive data).
Lines 475-483 swallow all failures via `.ok()?`, which makes production diagnosis difficult. Emit non-sensitive warnings (no token/body logging) on client-init, request, and non-success status failures.

🔍 Proposed safe-observability refactor

```diff
 async fn fetch_google_email(access_token: &str) -> Option<String> {
-    let client = oauth_http_client().ok()?;
-    let resp = client
+    let client = match oauth_http_client() {
+        Ok(c) => c,
+        Err(e) => {
+            tracing::warn!("google userinfo client init failed: {e}");
+            return None;
+        }
+    };
+    let resp = match client
         .get("https://www.googleapis.com/oauth2/v2/userinfo")
         .bearer_auth(access_token)
         .send()
         .await
-        .ok()?;
+    {
+        Ok(r) => r,
+        Err(e) => {
+            tracing::warn!("google userinfo request failed: {e}");
+            return None;
+        }
+    };
+    if !resp.status().is_success() {
+        tracing::warn!("google userinfo non-success status: {}", resp.status());
+        return None;
+    }
     let info: serde_json::Value = resp.json().await.ok()?;
     info.get("email")
         .and_then(|v| v.as_str())
         .map(str::to_string)
 }
```

Based on learnings: For security/runtime/gateway/tools changes, keep observability useful but non-sensitive.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/oauth.rs` around lines 474 - 487, In fetch_google_email, keep the best-effort behavior but add non-sensitive observability: when oauth_http_client().ok() is None, emit a warning (e.g., tracing::warn!("failed to create oauth HTTP client") ) and return None; after .send().await, if the request Errs or the response status is not success, emit warnings (e.g., tracing::warn!("google userinfo request failed: {}" , err_or_status) ) without logging the access_token or response body; still return None on errors and only parse the JSON on success—ensure all logs avoid printing tokens or full response bodies while retaining status/error codes for diagnosis.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/config/schema.rs`:
- Around line 1101-1118: The GoogleOAuthConfig currently exposes the sensitive
client_secret by deriving Debug and storing it via plain fields; remove or
replace the Debug derive for GoogleOAuthConfig (or implement a custom Debug that
redacts client_secret), change the client_secret field to use a secret
type/wrapper used by the project (or annotate it so it's handled by the
secret-store) and wire it into the existing Config::load_or_init and
Config::save secret-store encryption paths so the secret is not persisted in
plaintext; update OAuthConfig usage to accept the secret wrapper and ensure any
serialization/deserialization uses the secret-store APIs rather than plain serde
persistence.
In `@src/gateway/oauth.rs`:
- Around line 3-5: The module-level documentation (the leading //! doc comments
at the top of oauth.rs) incorrectly states DocuSign JWT credential storage is
implemented; update that top-level comment to either remove the DocuSign mention
or add a one-line “planned / not yet implemented” note and clarify that the
current module only implements Google OAuth routes (the Google OAuth route
handlers in this module). Adjust the wording so the docs accurately reflect
current state (Google only) and, if keeping DocuSign as future work, mark it
clearly as planned work with no implication of current support.
---
Duplicate comments:
In `@src/gateway/mod.rs`:
- Around line 563-592: The BlueBubbles initialization currently swallows errors
by calling .ok().flatten(), which hides misconfiguration; change the logic
around the bluebubbles_channel creation (the BlueBubblesChannel::new(...) chain
that calls .map(|ch|
Arc::new(ch.with_policies(...).with_transcription(config.transcription.clone())))
and the .transpose().map_err(...) sequence so that initialization errors are
propagated instead of converted to None—remove the final .ok().flatten() and
return or propagate the error (e.g., use the ? operator or bail!/Err from the
surrounding function) so startup fails when channels_config.bluebubbles is
present but initialization fails.
In `@src/gateway/oauth.rs`:
- Around line 219-220: The current code writes all PKCE pairs to a shared file
("google_pkce.txt") causing race conditions; change the storage to a per-session
PKCE file by incorporating the unique state_token (or a generated session id)
into the filename returned by pkce_path so pkce_file becomes session-scoped
(e.g., pkce_path(&dir, &format!("google_{}", state_token))); write pkce_payload
(state_token + verifier) to that session file when initiating the flow and
update the corresponding read logic (the OAuth callback that currently reads the
shared file) to read from the same session-specific pkce_path using the incoming
state_token; ensure any pkce_path usages in start flow and callback handlers
(variables pkce_file, pkce_payload, and the code that reads/verifies verifier)
are updated consistently and securely cleaned up after use.
- Around line 128-139: auth_check currently lets pairing-disabled mode accept
any token; change it to deny-by-default by returning false when pairing is
disabled except for an explicit loopback-only fallback. Update fn
auth_check(state: &AppState, headers: &HeaderMap) -> bool to first check a
pairing-enabled flag (e.g. state.pairing.is_enabled() or
state.pairing.enabled()) and if true call state.pairing.is_authenticated(token)
as before; if false, only return true when the request originates from loopback
(add a remote_addr parameter like &SocketAddr to auth_check or add a helper on
AppState/Pairing to validate loopback and use that instead); otherwise return
false. Do not log or expose raw tokens while making these changes and reference
auth_check and state.pairing.is_authenticated/is_enabled in the implementation.
---
Nitpick comments:
In `@src/config/schema.rs`:
- Around line 14351-14395: Update the two existing tests
validation_rejects_bluebubbles_dm_allowlist_with_empty_senders and
validation_rejects_bluebubbles_group_allowlist_with_empty_list to also cover
whitespace-only entries by setting BlueBubblesConfig.allowed_senders = vec!["
".to_string()] for the DM allowlist case and BlueBubblesConfig.group_allow_from
= vec![" ".to_string()] for the group allowlist case, keep dm_policy =
BlueBubblesDmPolicy::Allowlist and group_policy =
BlueBubblesGroupPolicy::Allowlist respectively, call config.validate() expecting
an error, and assert the error string still mentions "dm_policy" or
"group_policy" as in the originals so the validation rejects blank-only entries
deterministically.
In `@src/gateway/oauth.rs`:
- Around line 474-487: In fetch_google_email, keep the best-effort behavior but
add non-sensitive observability: when oauth_http_client().ok() is None, emit a
warning (e.g., tracing::warn!("failed to create oauth HTTP client") ) and return
None; after .send().await, if the request Errs or the response status is not
success, emit warnings (e.g., tracing::warn!("google userinfo request failed:
{}" , err_or_status) ) without logging the access_token or response body; still
return None on errors and only parse the JSON on success—ensure all logs avoid
printing tokens or full response bodies while retaining status/error codes for
diagnosis.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f40a8f9b-a069-4290-8787-528cffe99ee8
📒 Files selected for processing (4)
src/config/schema.rs, src/gateway/mod.rs, src/gateway/oauth.rs, src/onboard/wizard.rs
```rust
/// Google OAuth credentials for browser-based connect flows (`[oauth.google]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Default)]
pub struct GoogleOAuthConfig {
    /// Google OAuth client ID (from Google Cloud Console).
    #[serde(default)]
    pub client_id: String,
    /// Google OAuth client secret (from Google Cloud Console).
    #[serde(default)]
    pub client_secret: String,
}

/// OAuth browser-based connect flows configuration (`[oauth]`).
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Default)]
pub struct OAuthConfig {
    /// Google OAuth credentials for Gmail and Calendar access.
    #[serde(default)]
    pub google: GoogleOAuthConfig,
}
```
`oauth.google.client_secret` is exposed (plaintext persistence + debug leakage risk).
`GoogleOAuthConfig` introduces a secret field, but line 1102 derives `Debug` and the secret is not wired into the `Config::load_or_init` / `Config::save` secret-store encryption paths. This can leak credentials in logs and store them unencrypted at rest.
🔧 Proposed fix
```diff
-#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Default)]
+#[derive(Clone, Serialize, Deserialize, JsonSchema, Default)]
 pub struct GoogleOAuthConfig {
     /// Google OAuth client ID (from Google Cloud Console).
     #[serde(default)]
     pub client_id: String,
     /// Google OAuth client secret (from Google Cloud Console).
     #[serde(default)]
     pub client_secret: String,
 }
+
+impl std::fmt::Debug for GoogleOAuthConfig {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("GoogleOAuthConfig")
+            .field("client_id", &self.client_id)
+            .field("client_secret", &"[REDACTED]")
+            .finish()
+    }
+}
```

```diff
     decrypt_optional_secret(
         &store,
         &mut config.composio.api_key,
         "config.composio.api_key",
     )?;
+    decrypt_secret(
+        &store,
+        &mut config.oauth.google.client_secret,
+        "config.oauth.google.client_secret",
+    )?;
```

```diff
     encrypt_optional_secret(
         &store,
         &mut config_to_save.composio.api_key,
         "config.composio.api_key",
     )?;
+    encrypt_secret(
+        &store,
+        &mut config_to_save.oauth.google.client_secret,
+        "config.oauth.google.client_secret",
+    )?;
```

As per coding guidelines: "Never commit personal or sensitive data in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/schema.rs` around lines 1101 - 1118, The GoogleOAuthConfig
currently exposes the sensitive client_secret by deriving Debug and storing it
via plain fields; remove or replace the Debug derive for GoogleOAuthConfig (or
implement a custom Debug that redacts client_secret), change the client_secret
field to use a secret type/wrapper used by the project (or annotate it so it's
handled by the secret-store) and wire it into the existing Config::load_or_init
and Config::save secret-store encryption paths so the secret is not persisted in
plaintext; update OAuthConfig usage to accept the secret wrapper and ensure any
serialization/deserialization uses the secret-store APIs rather than plain serde
persistence.
…ly disabling

Propagate the error from BlueBubblesChannel::new via ? instead of swallowing it with .ok().flatten(). Misconfiguration now causes a loud startup failure rather than silently booting without the BlueBubbles channel.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ent flow collisions Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
theonlyhennygod
left a comment
LGTM! This PR has gone through 9 rounds of review fixes addressing security concerns. Ready to merge.
Merge Conflict Detected
…2532

- bluebubbles: fail-fast validation for add_reaction/remove_reaction empty identifiers
- config: custom Debug for GoogleOAuthConfig redacting client_secret
- gateway/oauth: fix module docs, remove unused redirect field, log revoke failures
- gateway/mod: hash sender PII before logging, fire-and-forget mark_read calls
- i18n/vi: add BlueBubbles exception note, update allowed_senders list, add 4.15 section
- i18n/el: update allowlist policy section, add BlueBubbles dm_policy exception note
…w-labs#2532
- config/schema: add whitespace-only allowlist regression tests for DM and group policy validation modes (covers the same validation path as empty-list tests but with whitespace-only entries)
- gateway/oauth: add non-sensitive observability to fetch_google_email — emit tracing::warn on client init failure, request failure, and non-success HTTP status without logging the access token or response body

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
@coderabbitai review |
✅ Actions performed: review triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/config/schema.rs (1)
1107-1109: ⚠️ Potential issue | 🟠 Major

Encrypt oauth.google.client_secret in the secret-store lifecycle. Line 1109 introduces a credential field, but it is not wired into the Config::load_or_init decrypt flow or the Config::save encrypt flow, so it can be persisted in plaintext.

As per coding guidelines: "Never commit personal or sensitive data (real names, emails, phone numbers, addresses, access tokens, API keys, credentials, IDs, private URLs) in code, docs, tests, fixtures, snapshots, logs, examples, or commit messages."

🔧 Proposed fix

@@ decrypt_optional_secret(
     &store,
     &mut config.composio.api_key,
     "config.composio.api_key",
 )?;
+decrypt_secret(
+    &store,
+    &mut config.oauth.google.client_secret,
+    "config.oauth.google.client_secret",
+)?;
@@ encrypt_optional_secret(
     &store,
     &mut config_to_save.composio.api_key,
     "config.composio.api_key",
 )?;
+if !config_to_save.oauth.google.client_secret.trim().is_empty() {
+    encrypt_secret(
+        &store,
+        &mut config_to_save.oauth.google.client_secret,
+        "config.oauth.google.client_secret",
+    )?;
+}

Also applies to: 1130-1135
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/schema.rs` around lines 1107 - 1109, The new oauth.google.client_secret field in schema.rs is not integrated into the secret-store lifecycle; update Config::load_or_init to decrypt oauth.google.client_secret when loading and update Config::save to encrypt oauth.google.client_secret before persisting (and ensure the same handling for the other Google OAuth fields around lines 1130-1135). Wire the field into the existing secret-store helper functions used for other credentials so client_secret is never written to disk in plaintext, and add any necessary tests to verify encryption on save and decryption on load.
🧹 Nitpick comments (3)
src/gateway/mod.rs (1)
115-120: Consider making the worker pool size configurable. The hardcoded value of 16 concurrent workers is reasonable for most deployments, but high-throughput or resource-constrained environments may benefit from tuning. Consider adding a config option (e.g., [channels_config.bluebubbles] max_webhook_workers = 16) in a follow-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/gateway/mod.rs` around lines 115-120: Replace the hardcoded semaphore capacity with a configurable value: read an integer config like channels_config.bluebubbles.max_webhook_workers during initialization and use it when constructing BB_WORKER_SEMAPHORE (currently defined as std::sync::LazyLock<Arc<tokio::sync::Semaphore>> and created via tokio::sync::Semaphore::new(16)); update the LazyLock initializer to pull the value from your config accessor (with a sensible default) and validate it (> 0) before calling Semaphore::new so deployments can tune the pool size without changing code.

src/config/schema.rs (1)
395-397: Document [oauth] config contract details inline. Please add explicit compatibility impact plus migration/rollback notes for the new oauth key (same style used for other public config keys).

As per coding guidelines: "Treat config keys as public contract: document defaults, compatibility impact, and migration/rollback path for schema changes in src/config/schema.rs."

Also applies to: 1129-1135
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/schema.rs` around lines 395-397: Add an inline doc comment for the public `oauth` config field (type `OAuthConfig`) that documents the default behavior, explicit compatibility impact, and a migration/rollback path following the same style as the other public config keys in this schema file; update the corresponding `OAuthConfig`/related docs block later in the file (the second occurrence mentioned in the review) to include the same compatibility and migration notes so consumers know how adding or removing the `[oauth]` key will affect runtime behavior and how to migrate or roll back safely.

src/channels/bluebubbles.rs (1)
1408-1410: Consider removing .expect() on infallible JSON construction. While serde_json::json!({...}) always produces an object, using .expect() can mask future refactoring bugs. Consider using if let Some(obj) = body.as_object_mut() for consistency with other code paths.

Suggested refactor
- body.as_object_mut()
-     .expect("serde_json::json!({}) always produces an object")
-     .insert("effectId".into(), serde_json::Value::String(eid.clone()));
+ if let Some(obj) = body.as_object_mut() {
+     obj.insert("effectId".into(), serde_json::Value::String(eid.clone()));
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/channels/bluebubbles.rs` around lines 1408 - 1410, Replace the infallible .expect() call on body.as_object_mut() to avoid panics: instead of body.as_object_mut().expect(...).insert("effectId"...), unwrap safely by using an if let Some(obj) = body.as_object_mut() and then call obj.insert("effectId".into(), serde_json::Value::String(eid.clone())); this mirrors other code paths and prevents accidental panics if body construction changes while preserving the insert of effectId from eid.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/gateway/oauth.rs`:
- Around line 49-53: The doc comment for OAuthStartQuery is duplicated; open the
declaration for the struct OAuthStartQuery and remove the redundant "/// Query
params for initiating OAuth." line so there is only a single doc comment above
the struct (keeping the subsequent explanatory sentence about forward
compatibility if desired).
---
Duplicate comments:
In `@src/config/schema.rs`:
- Around line 1107-1109: The new oauth.google.client_secret field in schema.rs
is not integrated into the secret-store lifecycle; update Config::load_or_init
to decrypt oauth.google.client_secret when loading and update Config::save to
encrypt oauth.google.client_secret before persisting (and ensure the same
handling for the other Google OAuth fields around lines 1130-1135). Wire the
field into the existing secret-store helper functions used for other credentials
so client_secret is never written to disk in plaintext, and add any necessary
tests to verify encryption on save and decryption on load.
---
Nitpick comments:
In `@src/channels/bluebubbles.rs`:
- Around line 1408-1410: Replace the infallible .expect() call on
body.as_object_mut() to avoid panics: instead of
body.as_object_mut().expect(...).insert("effectId"...), unwrap safely by using
an if let Some(obj) = body.as_object_mut() and then call
obj.insert("effectId".into(), serde_json::Value::String(eid.clone())); this
mirrors other code paths and prevents accidental panics if body construction
changes while preserving the insert of effectId from eid.
In `@src/config/schema.rs`:
- Around line 395-397: Add an inline doc comment for the public `oauth` config
field (type `OAuthConfig`) that documents the default behavior, explicit
compatibility impact, and a migration/rollback path following the same style as
the other public config keys in this schema file; update the corresponding
`OAuthConfig`/related docs block later in the file (the second occurrence
mentioned in the review) to include the same compatibility and migration notes
so consumers know how adding or removing the `[oauth]` key will affect runtime
behavior and how to migrate or roll back safely.
In `@src/gateway/mod.rs`:
- Around line 115-120: Replace the hardcoded semaphore capacity with a
configurable value: read an integer config like
channels_config.bluebubbles.max_webhook_workers during initialization and use it
when constructing BB_WORKER_SEMAPHORE (currently defined as
std::sync::LazyLock<Arc<tokio::sync::Semaphore>> and created via
tokio::sync::Semaphore::new(16)); update the LazyLock initializer to pull the
value from your config accessor (with a sensible default) and validate it (>0)
before calling Semaphore::new so deployments can tune the pool size without
changing code.
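A std-only sketch of the suggested pattern: resolve the capacity once, apply a default, and reject zero. The env-var name is illustrative (the review suggests a config key), and the resolved value would feed tokio::sync::Semaphore::new:

```rust
use std::sync::LazyLock;

const DEFAULT_WORKERS: usize = 16;

// Pure parsing/validation step, kept separate so it is easy to test.
// Rejects unparsable values and 0 (a zero-permit semaphore would stall
// every webhook) and falls back to the default.
fn capacity_from(raw: Option<&str>) -> usize {
    raw.and_then(|v| v.parse::<usize>().ok())
        .filter(|&n| n > 0)
        .unwrap_or(DEFAULT_WORKERS)
}

// In the real code this would read the project's config accessor;
// an env var stands in here.
fn resolve_capacity() -> usize {
    capacity_from(std::env::var("BB_MAX_WEBHOOK_WORKERS").ok().as_deref())
}

// Resolved once at first use; the review's BB_WORKER_SEMAPHORE would be
// built from this value instead of a hardcoded 16.
static BB_WORKER_CAPACITY: LazyLock<usize> = LazyLock::new(resolve_capacity);

fn main() {
    println!("capacity = {}", *BB_WORKER_CAPACITY);
    assert_eq!(capacity_from(None), DEFAULT_WORKERS);
    assert_eq!(capacity_from(Some("0")), DEFAULT_WORKERS);
    assert_eq!(capacity_from(Some("4")), 4);
}
```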
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8ebea5d5-8efa-46c0-8532-f990f618af41
📒 Files selected for processing (6)
docs/i18n/el/channels-reference.md
docs/i18n/vi/channels-reference.md
src/channels/bluebubbles.rs
src/config/schema.rs
src/gateway/mod.rs
src/gateway/oauth.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/i18n/el/channels-reference.md
- docs/i18n/vi/channels-reference.md
src/gateway/oauth.rs
Outdated
| /// Query params for initiating OAuth. | ||
| /// Query params for initiating OAuth. | ||
| /// Currently no query params are used; struct is kept for forward compatibility. | ||
| #[derive(Debug, Deserialize)] | ||
| pub struct OAuthStartQuery {} |
There was a problem hiding this comment.
Duplicate doc comment.
Lines 49-50 contain a duplicate /// Query params for initiating OAuth. comment.
Fix
/// Query params for initiating OAuth.
-/// Query params for initiating OAuth.
/// Currently no query params are used; struct is kept for forward compatibility.
#[derive(Debug, Deserialize)]
| pub struct OAuthStartQuery {}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| /// Query params for initiating OAuth. | |
| /// Query params for initiating OAuth. | |
| /// Currently no query params are used; struct is kept for forward compatibility. | |
| #[derive(Debug, Deserialize)] | |
| pub struct OAuthStartQuery {} | |
| /// Query params for initiating OAuth. | |
| /// Currently no query params are used; struct is kept for forward compatibility. | |
| #[derive(Debug, Deserialize)] | |
| pub struct OAuthStartQuery {} |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/gateway/oauth.rs` around lines 49 - 53, The doc comment for
OAuthStartQuery is duplicated; open the declaration for the struct
OAuthStartQuery and remove the redundant "/// Query params for initiating
OAuth." line so there is only a single doc comment above the struct (keeping the
subsequent explanatory sentence about forward compatibility if desired).
…ix expect
- gateway/oauth: remove duplicate doc line on OAuthStartQuery struct
- channels/bluebubbles: replace .expect() on as_object_mut() with if let to avoid panic if body construction ever changes

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
|
Closing — no longer needed. |
Summary
Adds a bluebubbles_group tool that lets the LLM agent manage BlueBubbles group chats via the BlueBubbles Private API. Single tool with action dispatch pattern (consistent with the shell tool's command parameter). Depends on #2495 (dm_policy, group_policy, send_read_receipts).
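The action-dispatch shape described above can be sketched as a single match from action name to endpoint path; endpoint_for is a hypothetical helper, and the real tool would also attach the HTTP method, JSON body, and auth query:

```rust
// Map each tool action to its BlueBubbles chat endpoint. Paths follow
// the PR description's actions table; unknown actions return None so
// the caller can surface an error to the LLM.
fn endpoint_for(action: &str, guid: &str) -> Option<String> {
    let base = format!("/api/v1/chat/{guid}");
    match action {
        "rename_group" => Some(base),
        "add_participant" => Some(format!("{base}/participants/add")),
        "remove_participant" => Some(format!("{base}/participants/remove")),
        "leave_group" => Some(format!("{base}/leave")),
        "set_group_icon" => Some(format!("{base}/icon")),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        endpoint_for("leave_group", "abc").as_deref(),
        Some("/api/v1/chat/abc/leave")
    );
    assert_eq!(endpoint_for("bogus", "abc"), None);
    println!("{:?}", endpoint_for("rename_group", "abc"));
}
```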
Actions
| Action | Endpoint | Key parameter |
| rename_group | /api/v1/chat/{guid} | displayName |
| add_participant | /api/v1/chat/{guid}/participants/add | address |
| remove_participant | /api/v1/chat/{guid}/participants/remove | address |
| leave_group | /api/v1/chat/{guid}/leave | |
| set_group_icon | /api/v1/chat/{guid}/icon | |

Files Changed
src/tools/bluebubbles_group.rs (new) — tool implementation
src/tools/mod.rs — factory registration (conditionally when BB is configured)

Design Notes
BlueBubblesGroupTool holds server_url, password, and a shared reqwest::Client
Registered only when channels_config.bluebubbles.is_some()
set_group_icon decodes base64 input and POSTs as multipart/form-data
API errors pass through sanitize_api_error before returning

Label Snapshot
feature — new capability
channel:bluebubbles — BlueBubbles-specific
area:tools — tool extension point

Change Metadata
src/tools/ only; no gateway, config schema, or channel changes
src/tools/mod.rs

Linked Issue
Part of BlueBubbles feature parity series (Issue 3 of 7). No Linear key — external contributor.
Validation Evidence
cargo fmt --all -- --check passes
cargo check passes
?password= query param (same as all other BB API calls)
address field is passed directly to BB API — no shell injection possible (HTTP JSON body)
password is redacted in BlueBubblesConfig Debug output (existing fix from feat(channel/bluebubbles): dm_policy, group_policy, and send_read_receipts #2495)
i18n Follow-Through
No user-facing strings or docs navigation changes.
Human Verification
Needs manual test: configure BB in
config.toml, ask the agent to rename a group chat.Side Effects
None — tool is registered at startup but takes no action unless explicitly invoked.
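The Round 3 review fix below surfaces transport failures as tool results instead of hard errors; a minimal std-only sketch of that match shape (the ToolResult fields and send stub are assumptions, not the project's real types):

```rust
// Sketch: a failed HTTP send becomes a ToolResult the LLM can read,
// rather than an error that aborts the call chain.
#[derive(Debug, PartialEq)]
struct ToolResult {
    success: bool,
    output: String,
    error: Option<String>,
}

// Stand-in for `client.post(url).send().await` returning a Result.
fn send_stub(fail: bool) -> Result<String, String> {
    if fail { Err("connection refused".into()) } else { Ok("ok".into()) }
}

fn execute(fail: bool) -> ToolResult {
    // This match replaces `.map_err(|e| anyhow::anyhow!(...))?`:
    let resp = match send_stub(fail) {
        Ok(r) => r,
        Err(e) => {
            return ToolResult {
                success: false,
                output: String::new(),
                error: Some(format!("transport error: {e}")),
            }
        }
    };
    ToolResult { success: true, output: resp, error: None }
}

fn main() {
    assert!(execute(false).success);
    let failed = execute(true);
    assert!(!failed.success);
    assert!(failed.error.unwrap().contains("connection refused"));
}
```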
Agent Notes
Generated with Claude Code. Follows tool trait and factory registration patterns from src/tools/shell.rs and src/tools/mod.rs.
Remove the BlueBubblesGroupTool registration block in src/tools/mod.rs (3 lines). No migration needed.
Low — purely additive, behind existing BB configuration gate.
Summary by CodeRabbit
New Features
Configuration / Policy
Documentation
Review Fixes (Round 1)
Addressed CodeRabbit findings on PR #2532 scope:
Avoid panic in tool constructor (Major): Replaced expect("valid reqwest client config") in BlueBubblesGroupTool::new with unwrap_or_else(|_| reqwest::Client::new()). reqwest::ClientBuilder::build() can theoretically fail if TLS initialization fails; a startup panic is worse than falling back to a default client.

Gate registration on non-empty password (Minor): Updated the factory in
src/tools/mod.rs to also check password.trim().is_empty() before registering the group tool. A blank password would cause every BB API call to fail at execution time — it's better to skip registration and surface the misconfiguration early.

Review Fixes (Round 2)
Propagated security fixes from base branch
feat/bb-policy-upstream-pr via rebase:

Path traversal in
handle_auth_revoke (Critical): Service name from URL path now validated against an explicit allowlist before token_path() construction.

Missing OAuth state — CSRF (Critical): State token generated and stored alongside PKCE verifier;
handle_google_callback validates state match, returning 400 on mismatch.

(These fixes were applied to src/gateway/oauth.rs, which this branch inherits through the stacked base branch.)

Review Fixes (Round 3)
Addressed CodeRabbit findings on PR #2532 scope:
Transport failures → ToolResult (Major): All five .send().await calls in BlueBubblesGroupTool::execute previously used .map_err(|e| anyhow::anyhow!(...))? to propagate transport failures as hard errors. Replaced with match ... { Ok(r) => r, Err(e) => return Ok(ToolResult { success: false, error: Some(...), ... }) } so network failures are surfaced to the LLM as tool results rather than crashing the call chain. Affected actions: rename_group, add_participant, remove_participant, leave_group, set_group_icon.

Rebased on updated
feat/bb-policy-upstream-pr which includes: Config::validate()

Review Fixes (Round 4)
Propagated TOCTOU fixes from base branches via rebase:
TOCTOU on token write / PKCE write / oauth dir creation: Atomic mode-at-creation fixes inherited from the feat/bluebubbles-transcription-tapback base branch.

actions-source-policy.md + localized docs: Inherited from rebase chain.
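The atomic mode-at-creation pattern referenced above sets restrictive permissions in the same call that creates the file, leaving no window for a create-then-chmod race; a Unix-only sketch (path and mode are illustrative):

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::{OpenOptionsExt, PermissionsExt};

fn write_secret_file(path: &std::path::Path, contents: &[u8]) -> std::io::Result<()> {
    // mode(0o600) applies at creation time, so the file never exists
    // with looser permissions before a later chmod (the TOCTOU issue).
    let mut f = OpenOptions::new()
        .write(true)
        .create_new(true) // also fails if the path already exists
        .mode(0o600)
        .open(path)?;
    f.write_all(contents)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("zc_token_demo");
    let _ = std::fs::remove_file(&path); // clean slate for create_new
    write_secret_file(&path, b"token")?;
    let mode = std::fs::metadata(&path)?.permissions().mode() & 0o777;
    assert_eq!(mode, 0o600);
    std::fs::remove_file(&path)?;
    Ok(())
}
```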
Review Fixes (Round 5)
Propagated from base branch
feat/bb-policy-upstream-pr via rebase:

PII log removal + whitespace-only allowlist validation: Inherited from Round 8 fixes on
feat/bb-policy-upstream-pr.

Review Fixes (Round 6)
Propagated security fixes from base branches via rebase:
Unmasked URL in send() error path + claude-review.yml hardening: Inherited from Round 6 fixes on
feat/bluebubbles-transcription-tapback.

Review Fixes (Round 7)
Propagated docs fix from base branch
feat/bluebubbles-transcription-tapback via rebase:

Chinese command text in English docs:
docs/channels-reference.md natural-language approval command examples updated from 授权工具 shell to approve tool shell.
Propagated from base branch
feat/bluebubbles-transcription-tapbackvia rebase:Transcription temp dir cleanup + error_page title HTML escaping: Inherited from Round 8 fixes on base branch.
Review Fixes (Round 9)
Icon size check on decoded bytes (Correctness):
bluebubbles_group.rs: the 5 MiB limit was checked against the base64 string length (≈ 3.75 MiB decoded). Moved the check after base64::decode to enforce the limit on actual byte count, so a 5 MiB image (≈ 6.7 MiB base64) is correctly accepted.

Review Fixes (Round 6)
oauth.rs: propagate HTTP client build failure — no silent timeout loss
oauth.rs: validate PKCE state before consuming session file (CSRF fix)
oauth.rs: use POST form body for token revocation (RFC 7009 compliance)
transcription.rs: explicit fail on non-UTF-8 temp paths, not empty-string passthrough
transcription.rs: clean up partial ffmpeg WAV on conversion failure
bluebubbles.rs: remove attachment GUID from error log
claude-review.yml: fix step-level PR condition (!= '' → != null)

Review Fixes (Round 7)
bluebubbles_group.rs: explicit anyhow::Result<Self> constructor — propagates HTTP client build failure instead of silently falling back without timeout
bluebubbles_group.rs: pre-decode icon size check (7 MiB b64 limit) — rejects oversized inputs before allocating memory for decode
src/tools/mod.rs: registration updated to handle anyhow::Result constructor

Review Fixes (Round 8)
bluebubbles_group.rs: inject Arc<SecurityPolicy> — added can_act() check (blocks execution under read_only autonomy) and record_action() check (enforces per-hour rate limit); matches pattern used by ShellTool, FileReadTool, and all other side-effect tools
src/tools/mod.rs: preserve raw password value (bb.password.clone() instead of .trim().to_string()) — passwords may contain leading/trailing whitespace; emptiness check uses .trim() separately

Review Fixes (cargo fmt)
Applied cargo fmt --all to files changed in this PR's diff — collapsed single-use iterator chains and reformatted match arm bodies to comply with project rustfmt settings
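The two-stage icon size check described in Rounds 7 and 9 above can be sketched with the decoder elided: a cheap pre-decode guard on the base64 length, then the authoritative check on decoded bytes (limit values taken from the PR text; the actual decode step is stubbed out):

```rust
const MAX_ICON_BYTES: usize = 5 * 1024 * 1024; // 5 MiB, on decoded bytes
const MAX_ICON_B64_LEN: usize = 7 * 1024 * 1024; // Round 7 pre-decode guard

// Pre-decode guard: reject clearly oversized inputs before allocating
// memory for the decode.
fn b64_within_limit(encoded_len: usize) -> bool {
    encoded_len <= MAX_ICON_B64_LEN
}

// Post-decode check (Round 9): the authoritative limit is on the actual
// decoded byte count, not the base64 string length.
fn icon_within_limit(decoded: &[u8]) -> bool {
    decoded.len() <= MAX_ICON_BYTES
}

fn main() {
    // A 5 MiB image is ≈ 6.7 MiB of base64 and must still be accepted
    // by the pre-decode guard (4 base64 chars per 3 input bytes).
    let five_mib_encoded = MAX_ICON_BYTES.div_ceil(3) * 4;
    assert!(b64_within_limit(five_mib_encoded));
    assert!(icon_within_limit(&vec![0u8; MAX_ICON_BYTES]));
    assert!(!icon_within_limit(&vec![0u8; MAX_ICON_BYTES + 1]));
}
```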