wip: Optimize macOS recording pipeline with M4S muxer and async finalization (+ optimisations) #1464
base: main
Conversation
Walkthrough
Adds a thread-safe FinalizingRecordings tracker and wait-for-ready flow; offloads fragmented remux/finalization to background tasks; replaces the legacy macOS segmented muxers with a new fragmented M4S pipeline; introduces audio playhead sync, multi-position decoders, pause-aware timestamps, remux/recovery utilities, and multiple encoder/muxer API changes.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Editor as Editor
participant App as AppState
participant FR as FinalizingRecordings
participant Rec as RecordingPipeline
participant Remux as RemuxEngine
participant Disk as Storage
Editor->>App: create_editor_instance_impl(path)
App->>FR: wait_for_recording_ready(path)
alt path is finalizing
FR->>FR: is_finalizing(path) -> Receiver
FR->>FR: await Receiver
else not finalizing
App->>Disk: load RecordingMeta(path)
alt needs remux
App->>Rec: start_finalizing(path) -> Receiver
Rec->>Remux: spawn finalize_studio_recording(path) (background)
Remux->>Disk: probe/concatenate/remux (init+segments)
Remux->>Disk: write remuxed output
Remux->>Rec: completion
Rec->>FR: finish_finalizing(path) (send true, remove entry)
FR-->>App: receiver completes
end
end
App-->>Editor: proceed (create editor instance)
sequenceDiagram
participant Capture as ScreenCapture
participant Pipeline as OutputPipeline
participant Encoder as SegmentedVideoEncoder
participant File as SegmentFile
participant Manifest as ManifestWriter
participant Disk as Disk
Capture->>Pipeline: send_video_frame(frame, ts)
Pipeline->>Encoder: queue_frame(frame, ts)
Encoder->>Encoder: assign timestamp, check segment boundary
alt within same segment
Encoder->>Encoder: encode into current encoder
else boundary reached
Encoder->>File: finalize current segment (.m4s.tmp -> .m4s)
File->>Disk: fsync file
Encoder->>Encoder: compute duration & file size
Encoder->>Encoder: append completed segment info
Manifest->>Disk: atomically write in-progress manifest.json
Encoder->>Encoder: create new segment and init segment if needed
end
alt finish called
Encoder->>Encoder: flush encoders, write trailer
Encoder->>Encoder: finalize pending tmp files
Manifest->>Disk: write final manifest.json (is_complete=true)
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches: ❌ 1 failed check (warning) | ✅ 2 passed checks
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src-tauri/src/recording.rs (1)
1995-2030: Crash-recovery remux misses M4S fragments
The new macOS M4S muxer writes segments in the standard fragmented MP4 format: init.mp4 plus numbered media segments with the .m4s extension. However, the crash-recovery fragment detection has a critical mismatch:
- find_fragments_in_dir only accepts .mp4 or .m4a files
- The muxer writes .m4s segments (per the FFmpeg DASH output specification)
Result: when a recording crashes with fragmented M4S output, the recovery logic sees empty directories and fails with "Could not find fragments to remux", even though valid .m4s segments exist on disk.
The fix is straightforward: extend find_fragments_in_dir to accept .m4s:
 fn find_fragments_in_dir(dir: &Path) -> Vec<PathBuf> {
     let Ok(entries) = std::fs::read_dir(dir) else {
         return Vec::new();
     };
     let mut fragments: Vec<_> = entries
         .filter_map(|e| e.ok())
         .map(|e| e.path())
-        .filter(|p| p.extension().is_some_and(|e| e == "mp4" || e == "m4a"))
+        .filter(|p| p.extension().is_some_and(|e| e == "mp4" || e == "m4a" || e == "m4s"))
         .collect();
     fragments.sort();
     fragments
 }
🧹 Nitpick comments (20)
crates/enc-avfoundation/src/mp4.rs (1)
134-153: LGTM! Solid encoder configuration improvements.
The keyframe interval calculation (one keyframe every 2 seconds) and the additional compression properties (disabling frame reordering, setting the expected frame rate, and capping the max keyframe interval) are appropriate choices for recording scenarios. These settings improve seek performance, crash recovery, and encoding latency.
Optional: Consider logging the computed keyframe_interval
For easier debugging, you could log the computed keyframe interval alongside the bitrate:
 let keyframe_interval = (fps * 2.0) as i32;
-debug!("recording bitrate: {bitrate}");
+debug!("recording bitrate: {bitrate}, keyframe_interval: {keyframe_interval}");
crates/rendering/src/decoder/avassetreader.rs (2)
286-289: Remove stale #[allow(unused)] annotation.
last_active_frame is used in the cache eviction logic (lines 403-411), making this allow attribute incorrect.
🔎 Proposed fix
-#[allow(unused)]
-let mut last_active_frame = None::<u32>;
+let mut last_active_frame = None::<u32>;
378-381: Consider logging frame decode errors instead of silently continuing.
The error from frame.map_err() is discarded. While continuing is reasonable, logging would aid debugging decode failures.
🔎 Proposed fix
 for frame in &mut frames {
-    let Ok(frame) = frame.map_err(|e| format!("read frame / {e}")) else {
+    let Ok(frame) = frame else {
+        debug!("frame decode error, skipping");
         continue;
     };
crates/recording/examples/memory-leak-detector.rs (1)
110-113: Consider adjusting separator width for precise alignment.
The separator line width (50 characters) doesn't precisely match the header column widths. For better visual alignment, consider calculating the exact width based on the column format widths (8 + 14 + 10 + 12 + spacing).
🔎 Proposed fix for precise alignment
-println!("{:-<50}", "");
+println!("{:-<48}", "");
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (3)
21-26: Environment variable fallback should be documented or made more discoverable.
The CAP_MUXER_BUFFER_SIZE environment variable provides a tuning knob but isn't documented. Consider making this a configuration option instead, or ensure it's documented in operational guides.
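If it stays an environment variable, a minimal sketch of a documented, typed lookup (the default value here is a placeholder, not the crate's actual default):
const DEFAULT_MUXER_BUFFER_SIZE: usize = 512; // placeholder default, for illustration only

fn muxer_buffer_size() -> usize {
    std::env::var("CAP_MUXER_BUFFER_SIZE")
        .ok()
        .and_then(|raw| raw.parse::<usize>().ok())
        .unwrap_or(DEFAULT_MUXER_BUFFER_SIZE)
}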
209-248: Consider thread parking instead of busy-wait for cleaner shutdown.
The 50ms polling loop works but is less efficient than parking/unparking or using a condition variable. Additionally, if the encoder thread is abandoned at timeout (line 233), calling finish_with_timestamp afterward (line 240) races with the still-running thread, though the mutex provides safety.
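For illustration, a generic condition-variable shutdown handshake that avoids the 50ms poll (the type and method names are hypothetical, not the muxer's actual API):
use std::sync::{Condvar, Mutex};
use std::time::Duration;

struct ShutdownSignal {
    done: Mutex<bool>,
    cv: Condvar,
}

impl ShutdownSignal {
    // called by the encoder thread as its last step
    fn notify_done(&self) {
        *self.done.lock().unwrap() = true;
        self.cv.notify_all();
    }

    // called by the shutdown path instead of polling every 50ms
    fn wait_done(&self, timeout: Duration) -> bool {
        let guard = self.done.lock().unwrap();
        let (_guard, result) = self
            .cv
            .wait_timeout_while(guard, timeout, |done| !*done)
            .unwrap();
        !result.timed_out()
    }
}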
600-891: Consider reducing duplication between display and camera muxers.
MacOSFragmentedM4SCameraMuxer duplicates ~290 lines from MacOSFragmentedM4SMuxer with only minor differences (thread name, log prefixes, VideoFrame type). Consider extracting shared logic into a generic helper or inner struct to reduce maintenance burden.
crates/enc-ffmpeg/src/remux.rs (1)
440-509: Consider extracting shared remuxing logic.
The remux_to_regular_mp4 function shares significant code with concatenate_with_concat_demuxer (lines 106-181), particularly the stream mapping and DTS/PTS adjustment logic. This duplication could be reduced by extracting a common helper.
🔎 Example refactor approach
Extract a shared helper that takes an input context and handles the stream mapping and packet writing:
fn remux_packets(
    ictx: &mut avformat::context::Input,
    output: &Path,
) -> Result<(), RemuxError> {
    let mut octx = avformat::output(output)?;
    // ... shared stream mapping and packet writing logic ...
}
Then both functions could use this helper with their respective input contexts.
crates/recording/src/recovery.rs (1)
243-247: Extract manifest version to a shared constant instead of hardcoding.
The hardcoded max_supported_version of 4 for m4s_segments should reference the MANIFEST_VERSION constant from cap-enc-ffmpeg. Since the recording crate already depends on enc-ffmpeg, export MANIFEST_VERSION as pub and use it here to prevent version drift if the encoder's version ever changes.
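A rough sketch of the shape this could take (the constant's exact type and module path are assumptions):
// in cap-enc-ffmpeg, exported instead of kept private
pub const MANIFEST_VERSION: u32 = 4;

// in crates/recording/src/recovery.rs, hypothetical usage
fn max_supported_m4s_version() -> u32 {
    cap_enc_ffmpeg::MANIFEST_VERSION
}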
15-45: Consider extracting shared helper functions to avoid duplication.
atomic_write_json and sync_file are duplicated from segmented_audio.rs. Consider extracting them to a shared module (e.g., crates/enc-ffmpeg/src/mux/util.rs) to reduce maintenance burden and ensure consistency.
Additionally, sync_file (lines 40-44) uses nested if statements, which is inconsistent with both the coding guidelines and the && chained syntax already used in atomic_write_json (lines 26-28) within this same file.
🔎 Proposed fix for sync_file
 fn sync_file(path: &Path) {
-    if let Ok(file) = std::fs::File::open(path) {
-        if let Err(e) = file.sync_all() {
-            tracing::warn!("File fsync failed for {}: {e}", path.display());
-        }
+    if let Ok(file) = std::fs::File::open(path)
+        && let Err(e) = file.sync_all()
+    {
+        tracing::warn!("File fsync failed for {}: {e}", path.display());
     }
 }
232-241: Filesystem check per frame may impact performance.
detect_current_segment_index calls next_segment_path.exists() on every frame, resulting in a syscall per frame (30-60+ per second). Consider caching the detection or using a frame-count/timestamp heuristic to reduce filesystem overhead, checking only periodically (e.g., every N frames or when approaching the expected segment boundary).
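As an illustration only (the constant and counter are hypothetical), the existence check could be throttled by frame count so the syscall happens at most a few times per second:
const SEGMENT_CHECK_INTERVAL_FRAMES: u64 = 30; // hypothetical throttle

fn should_check_segment_boundary(frames_since_last_check: &mut u64) -> bool {
    *frames_since_last_check += 1;
    if *frames_since_last_check >= SEGMENT_CHECK_INTERVAL_FRAMES {
        *frames_since_last_check = 0;
        return true;
    }
    false
}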
243-284: Return type Result<(), QueueFrameError> is never Err.
This method always returns Ok(()). Consider simplifying the return type to () unless error paths are planned for future implementation.
🔎 Proposed simplification
 fn on_segment_completed(
     &mut self,
     completed_index: u32,
     timestamp: Duration,
-) -> Result<(), QueueFrameError> {
+) {
     // ... body unchanged ...
-
-    Ok(())
 }
And update the call site in queue_frame:
 if new_segment_index > prev_segment_index {
-    self.on_segment_completed(prev_segment_index, timestamp)?;
+    self.on_segment_completed(prev_segment_index, timestamp);
 }
291-372: Consider extracting shared SegmentEntry mapping logic.
The SegmentEntry construction from VideoSegmentInfo is duplicated across write_manifest, write_in_progress_manifest, and finalize_manifest. A small helper could reduce repetition.
🔎 Example helper
impl VideoSegmentInfo {
    fn to_segment_entry(&self) -> SegmentEntry {
        SegmentEntry {
            path: self.path.file_name()
                .unwrap_or_default()
                .to_string_lossy()
                .into_owned(),
            index: self.index,
            duration: self.duration.as_secs_f64(),
            is_complete: true,
            file_size: self.file_size,
        }
    }
}
429-457: Deep nesting reduces readability; consider guard clauses.
The nested if let chains create 5+ levels of indentation. Using early continue statements can flatten the logic.
🔎 Flattened approach
for entry in entries.flatten() {
    let path = entry.path();
    let Some(name) = path.file_name().and_then(|n| n.to_str()) else {
        continue;
    };
    if !name.starts_with("segment_") || !name.ends_with(".m4s.tmp") {
        continue;
    }
    let final_name = name.trim_end_matches(".tmp");
    let final_path = self.base_path.join(final_name);
    let Ok(metadata) = std::fs::metadata(&path) else {
        continue;
    };
    if metadata.len() == 0 {
        continue;
    }
    // ... rename logic ...
}
580-586: Option wrapper is unnecessary when always returning Some.
current_encoder and current_encoder_mut always return Some(...). Unless this is intentional for API compatibility with other encoder types that may not have a current encoder, consider returning the reference directly.
🔎 Proposed simplification
-pub fn current_encoder(&self) -> Option<&H264Encoder> {
-    Some(&self.encoder)
+pub fn current_encoder(&self) -> &H264Encoder {
+    &self.encoder
 }
-pub fn current_encoder_mut(&mut self) -> Option<&mut H264Encoder> {
-    Some(&mut self.encoder)
+pub fn current_encoder_mut(&mut self) -> &mut H264Encoder {
+    &mut self.encoder
 }
crates/recording/src/cursor.rs (1)
155-161: Optional: Reduce string clones for better performance.
cursor_id and last_cursor_id are cloned multiple times (lines 155, 158, 161, 173, 195). Consider using Rc<str> or Arc<str> to reduce allocations, especially since cursor IDs are frequently copied into events.
Example refactor
+use std::sync::Arc;

-let mut last_cursor_id = "default".to_string();
+let mut last_cursor_id: Arc<str> = Arc::from("default");

 let cursor_id = if position_changed {
     last_position = position;
     if let Some(data) = get_cursor_data() {
         // ...
-        let cursor_id = existing_id.id.to_string();
+        let cursor_id: Arc<str> = Arc::from(existing_id.id.to_string());
         // ...
-        last_cursor_id = cursor_id.clone();
-        cursor_id
+        last_cursor_id = Arc::clone(&cursor_id);
+        Arc::clone(&cursor_id)
     } else {
-        last_cursor_id.clone()
+        Arc::clone(&last_cursor_id)
     }
 } else {
-    last_cursor_id.clone()
+    Arc::clone(&last_cursor_id)
 };
Note: Arc::clone only increments a reference count, avoiding string data copies.
apps/desktop/src-tauri/src/lib.rs (2)
93-135: FinalizingRecordings registry is sound; consider guarding against duplicate starts
The registry design (per-path watch<bool> with cloned receivers) is straightforward and fits the async finalization flow.
One edge case to keep in mind: start_finalizing always inserts a fresh channel and overwrites any existing entry for the same PathBuf. If, in future, you ever call start_finalizing twice for a given path (e.g. overlapping retries), any code holding the old Receiver will never see the true signal. If that scenario is possible, consider returning the existing receiver when present instead of creating a new one, as in the sketch below.
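A rough sketch of that coalescing behaviour, assuming the registry is roughly a Mutex<HashMap<PathBuf, watch::Sender<bool>>> (the actual field layout is an assumption):
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::Mutex;
use tokio::sync::watch;

struct FinalizingRecordings {
    entries: Mutex<HashMap<PathBuf, watch::Sender<bool>>>,
}

impl FinalizingRecordings {
    fn start_finalizing(&self, path: PathBuf) -> watch::Receiver<bool> {
        let mut entries = self.entries.lock().unwrap();
        if let Some(existing) = entries.get(&path) {
            // coalesce onto the finalization already in flight
            return existing.subscribe();
        }
        let (tx, rx) = watch::channel(false);
        entries.insert(path, tx);
        rx
    }
}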
3167-3235: Editor readiness gating is correct; you may want to deduplicate crash-recovery remux work
Wiring create_editor_instance_impl through wait_for_recording_ready cleanly separates three cases:
- Active async finalization (FinalizingRecordings present) → wait on watch<bool>.
- No finalization, but fragmented recording → run remux_fragmented_recording synchronously in a blocking task.
- Already-final/instant recordings → return immediately.
One thing to consider: in the crash-recovery branch you don't mark the recording as "finalizing" in FinalizingRecordings, so two editor windows opened simultaneously on the same crashed project would each kick off an independent remux. If that's a realistic scenario, you could reuse the same registry here (insert before spawning the remux task and reuse the watch channel) so concurrent callers coalesce onto a single remux effort.
1369-1448: Async studio finalization flow is well-structured; consider surfacing failures into meta/status
The new needs_fragment_remux path plus the finalize_studio_recording background task nicely decouple:
- Immediate user-visible behaviour (open editor or overlay, play stop sound).
- Long-running remux + project config + screenshot work in the background.
- Coordination via FinalizingRecordings so later editor opens can wait instead of duplicating work.
One gap is error propagation: if finalize_studio_recording (or its internal remux_fragmented_recording) fails, you currently just log and still call finish_finalizing, leaving the on-disk RecordingMeta status unchanged. From the user's point of view this can look like a "completed" studio recording that may be missing media or config.
If you want failures here to be diagnosable and possibly block opening in the editor, consider updating the meta to a StudioRecordingStatus::Failed { error } (similar to the crash-handling path in handle_recording_end) or emitting a dedicated event/notification when background finalization fails.
Also applies to: 1644-1708
crates/editor/src/playback.rs (1)
679-686: Audio/video sync correction works conceptually but may be biased by prefill and output latency
The new sync loop between video and audio is a solid idea:
- Video publishes its playhead over playhead_rx.
- The audio callback compares that to audio_renderer.current_playhead().
- When either drift exceeds SYNC_THRESHOLD_SECS or video seeks by more than the threshold, you reseed the audio renderer to video_playhead + initial_compensation_secs.
However, AudioPlaybackBuffer::current_playhead() is based on elapsed_samples, which is incremented every time you render samples, including during the initial prefill and any subsequent top-up of the ring buffer. It also doesn't account for the device's output latency beyond the static initial_compensation_secs you add when reseeding.
In practice this means:
- Right after prefill, audio_playhead can already be ahead of the video playhead by roughly "prefill duration + output latency".
- The sync logic may treat that constant offset as drift and repeatedly reseed, especially around startup or after buffer refills, even though what the user hears is still aligned once latency is considered.
You might get a more stable and accurate sync signal by incorporating the latency estimate into the comparison, for example:
- Use something like effective_audio_playhead = audio_renderer.current_playhead() - latency_corrector.current_secs().unwrap_or_default() when computing drift.
- Or at least subtract an approximation derived from your headroom (headroom_for_stream / sample_rate) if LatencyCorrector doesn't expose a suitable metric.
That would make drift closer to "what the user is actually hearing vs. the video playhead", and should reduce unnecessary reseeks while still catching real desyncs and explicit scrubs.
Also applies to: 757-772, 891-903, 923-949
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (33)
- apps/desktop/src-tauri/src/lib.rs
- apps/desktop/src-tauri/src/recording.rs
- crates/editor/src/audio.rs
- crates/editor/src/playback.rs
- crates/enc-avfoundation/src/lib.rs
- crates/enc-avfoundation/src/mp4.rs
- crates/enc-avfoundation/src/segmented.rs
- crates/enc-ffmpeg/src/lib.rs
- crates/enc-ffmpeg/src/mux/fragmented_mp4.rs
- crates/enc-ffmpeg/src/mux/mod.rs
- crates/enc-ffmpeg/src/mux/segmented_audio.rs
- crates/enc-ffmpeg/src/mux/segmented_stream.rs
- crates/enc-ffmpeg/src/remux.rs
- crates/enc-ffmpeg/src/video/h264.rs
- crates/recording/Cargo.toml
- crates/recording/examples/memory-leak-detector.rs
- crates/recording/src/capture_pipeline.rs
- crates/recording/src/cursor.rs
- crates/recording/src/output_pipeline/core.rs
- crates/recording/src/output_pipeline/fragmented.rs
- crates/recording/src/output_pipeline/macos_fragmented_m4s.rs
- crates/recording/src/output_pipeline/macos_segmented_ffmpeg.rs
- crates/recording/src/output_pipeline/mod.rs
- crates/recording/src/recovery.rs
- crates/recording/src/sources/screen_capture/macos.rs
- crates/recording/src/studio_recording.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/decoder/ffmpeg.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/layers/camera.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/lib.rs
- crates/rendering/src/yuv_converter.rs
💤 Files with no reviewable changes (6)
- crates/enc-avfoundation/src/segmented.rs
- crates/enc-avfoundation/src/lib.rs
- crates/enc-ffmpeg/src/mux/fragmented_mp4.rs
- crates/enc-ffmpeg/src/lib.rs
- crates/recording/src/output_pipeline/macos_segmented_ffmpeg.rs
- crates/recording/src/output_pipeline/fragmented.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Use rustfmt and workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never use dbg!() macro in Rust code; use proper logging instead (Clippy: dbg_macro = deny)
Always handle Result/Option or types marked #[must_use]; never ignore them (Rust compiler lint: unused_must_use = deny)
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them (Clippy: let_underscore_future = deny)
Use saturating_sub instead of - for Duration to avoid panics (Clippy: unchecked_duration_subtraction = deny)
Merge nested if statements: use if a && b { } instead of if a { if b { } } (Clippy: collapsible_if = deny)
Don't call .clone() on Copy types; just copy them directly (Clippy: clone_on_copy = deny)
U...
Files:
crates/editor/src/audio.rs, crates/recording/src/cursor.rs, crates/recording/src/capture_pipeline.rs, crates/enc-ffmpeg/src/mux/mod.rs, crates/rendering/src/decoder/ffmpeg.rs, crates/rendering/src/decoder/avassetreader.rs, crates/enc-ffmpeg/src/mux/segmented_audio.rs, crates/enc-ffmpeg/src/remux.rs, crates/rendering/src/lib.rs, crates/rendering/src/layers/camera.rs, crates/recording/src/output_pipeline/mod.rs, crates/recording/src/output_pipeline/core.rs, crates/editor/src/playback.rs, apps/desktop/src-tauri/src/lib.rs, crates/enc-avfoundation/src/mp4.rs, crates/recording/src/recovery.rs, apps/desktop/src-tauri/src/recording.rs, crates/recording/examples/memory-leak-detector.rs, crates/enc-ffmpeg/src/video/h264.rs, crates/enc-ffmpeg/src/mux/segmented_stream.rs, crates/recording/src/output_pipeline/macos_fragmented_m4s.rs, crates/rendering/src/layers/display.rs, crates/rendering/src/decoder/mod.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/studio_recording.rs, crates/rendering/src/yuv_converter.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (//, /* */, ///, //!, #, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/editor/src/audio.rs, crates/recording/src/cursor.rs, crates/recording/src/capture_pipeline.rs, crates/enc-ffmpeg/src/mux/mod.rs, crates/rendering/src/decoder/ffmpeg.rs, crates/rendering/src/decoder/avassetreader.rs, crates/enc-ffmpeg/src/mux/segmented_audio.rs, crates/enc-ffmpeg/src/remux.rs, crates/rendering/src/lib.rs, crates/rendering/src/layers/camera.rs, crates/recording/src/output_pipeline/mod.rs, crates/recording/src/output_pipeline/core.rs, crates/editor/src/playback.rs, apps/desktop/src-tauri/src/lib.rs, crates/enc-avfoundation/src/mp4.rs, crates/recording/src/recovery.rs, apps/desktop/src-tauri/src/recording.rs, crates/recording/examples/memory-leak-detector.rs, crates/enc-ffmpeg/src/video/h264.rs, crates/enc-ffmpeg/src/mux/segmented_stream.rs, crates/recording/src/output_pipeline/macos_fragmented_m4s.rs, crates/rendering/src/layers/display.rs, crates/rendering/src/decoder/mod.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/studio_recording.rs, crates/rendering/src/yuv_converter.rs
🧠 Learnings (3)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/recording/src/capture_pipeline.rs, crates/rendering/src/decoder/ffmpeg.rs, crates/rendering/src/decoder/avassetreader.rs, crates/enc-ffmpeg/src/remux.rs, crates/rendering/src/layers/camera.rs, crates/recording/src/output_pipeline/mod.rs, crates/recording/src/output_pipeline/core.rs, crates/editor/src/playback.rs, apps/desktop/src-tauri/src/lib.rs, crates/enc-avfoundation/src/mp4.rs, crates/enc-ffmpeg/src/video/h264.rs, crates/enc-ffmpeg/src/mux/segmented_stream.rs, crates/recording/src/output_pipeline/macos_fragmented_m4s.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/studio_recording.rs
📚 Learning: 2025-10-17T05:58:22.586Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Applied to files:
crates/enc-ffmpeg/src/mux/segmented_audio.rs, crates/recording/src/output_pipeline/core.rs, crates/enc-avfoundation/src/mp4.rs, crates/recording/src/recovery.rs, crates/enc-ffmpeg/src/mux/segmented_stream.rs, crates/recording/src/output_pipeline/macos_fragmented_m4s.rs, crates/recording/src/sources/screen_capture/macos.rs, crates/recording/src/studio_recording.rs
🧬 Code graph analysis (8)
crates/recording/src/cursor.rs (1)
crates/cursor-capture/src/position.rs (1)
get (12-20)
crates/recording/src/capture_pipeline.rs (1)
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (2)
default (161-167), default (617-623)
crates/rendering/src/decoder/avassetreader.rs (1)
crates/rendering/src/decoder/mod.rs (12)
new_with_arc (210-223), data (454-456), new_nv12_with_arc (240-259), y_stride (532-534), uv_stride (536-538), new_yuv420p_with_arc (282-301), format (466-468), width (458-460), height (462-464), new (87-89), new (111-118), new (195-208)
crates/rendering/src/layers/camera.rs (1)
apps/desktop/src/utils/tauri.ts (1)
XY (518-518)
crates/editor/src/playback.rs (4)
crates/recording/src/studio_recording.rs (1)
watch (550-550)
crates/media-info/src/lib.rs (1)
fps (265-267)
crates/rendering/src/project_recordings.rs (1)
fps (50-52)
crates/audio/src/latency.rs (1)
initial_compensation_secs (141-143)
apps/desktop/src-tauri/src/lib.rs (1)
crates/project/src/meta.rs (5)
path (127-129), path (312-314), path (363-365), load_for_project (131-137), studio_meta (177-182)
crates/enc-ffmpeg/src/mux/segmented_stream.rs (2)
crates/enc-ffmpeg/src/mux/segmented_audio.rs (3)
atomic_write_json (11-33), sync_file (35-41), queue_frame (167-193)
crates/enc-ffmpeg/src/video/h264.rs (2)
new (52-60), queue_frame (362-387)
crates/recording/src/sources/screen_capture/macos.rs (1)
crates/recording/src/sources/screen_capture/mod.rs (1)
pixel_format (237-237)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (78)
crates/recording/src/sources/screen_capture/macos.rs (7)
27-39: LGTM! Clean implementation of runtime configurables with sensible defaults. The pattern of parsing environment variables with fallback defaults is idiomatic.
47-49: Pixel format change is consistent.
The NV12 format returned by pixel_format() aligns with the cv::PixelFormat::_420V setting on line 166. This is a more efficient format for video encoding pipelines compared to BGRA.
Also applies to: 166-166
85-88: LGTM! Proper initialization of the drop counter with Arc<AtomicU64> for thread-safe sharing across the capture pipeline.
148-155: LGTM! Queue depth calculation scales appropriately with FPS and uses idiomatic .clamp() for bounds checking. Good observability with structured logging.
211-219: Good addition of drop tracking.
Using try_send with drop counting provides visibility into backpressure situations without blocking the capture callback. This is a significant improvement over silently dropping frames.
366-381: LGTM! The drop_counter field is consistently added to both VideoSourceConfig and VideoSource structs, properly propagating the tracking through the pipeline.
458-501: Well-implemented monitoring with proper safeguards.
Good implementation:
- Uses saturating_sub for delta calculations per coding guidelines
- Drop rate formula correctly handles the ratio of drops to total attempts (see the sketch after this list)
- 5% threshold provides actionable warnings without noise
- Cancellation token ensures cleanup on stop
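For reference, the ratio described above reduces to something like this (a sketch, not the implementation under review):
fn drop_rate(dropped_delta: u64, sent_delta: u64) -> f64 {
    // total attempts = frames dropped + frames successfully sent in the interval
    let attempts = dropped_delta + sent_delta;
    if attempts == 0 {
        return 0.0;
    }
    dropped_delta as f64 / attempts as f64
}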
The spawned task's JoinHandle being dropped is acceptable here since it's a fire-and-forget monitoring task tied to the cancellation token lifecycle.
crates/enc-avfoundation/src/mp4.rs (2)
82-90: LGTM! Useful diagnostic logging added.
The structured logging provides comprehensive encoder initialization information that will be valuable for debugging and monitoring.
548-550: The extern declarations for AVVideoAllowFrameReorderingKey, AVVideoExpectedSourceFrameRateKey, and AVVideoMaxKeyFrameIntervalKey are correctly typed as &'static ns::String. These AVFoundation keys have been available since macOS 10.10, which is well below any reasonable minimum deployment target for a modern macOS application. No issues detected.
crates/rendering/src/layers/display.rs (1)
92-94: LGTM - Practical threshold for frame timing comparison.
The change from f32::EPSILON to 0.001 (1ms) is a sensible tolerance for frame deduplication. This threshold is consistent with the same logic in CameraLayer::prepare.
crates/rendering/src/decoder/ffmpeg.rs (3)
190-190: LGTM - First-frame tracking for improved fallback behavior.
This complements the existing last_sent_frame pattern and provides a meaningful fallback when playback seeks to a position before any frames have been sent.
279-285: LGTM - Efficient first-frame capture.
Processing the first frame immediately and caching it avoids redundant decoding when used as a fallback. The pattern correctly converts the cache_frame to Processed to avoid re-processing later.
369-373: LGTM - Improved fallback avoids unnecessary black frames.
Using the first decoded frame as a fallback provides a better user experience than immediately showing a black frame when no prior frames have been sent.
crates/rendering/src/lib.rs (2)
1694-1705: LGTM - Recording time propagation to camera layer.
The recording_time is correctly propagated from segment_frames to the camera layer's prepare method, aligning with the updated signature in camera.rs.
1707-1718: LGTM - Consistent recording time propagation to camera_only layer.
Same pattern applied consistently to the camera_only layer.
crates/rendering/src/yuv_converter.rs (5)
136-158: LGTM - Bind group cache for YUV conversion optimization.
The BindGroupCache struct provides an efficient mechanism to avoid recreating bind groups on every frame. The invalidation logic correctly clears cached bind groups when dimensions change, and integration with ensure_texture_size (line 626) ensures cache coherence when textures are reallocated.
160-200: LGTM - NV12 bind group caching with dimension tracking.
The dimension check provides an additional safety layer beyond the explicit invalidation in ensure_texture_size. The unwrap() on line 199 is safe since the preceding block guarantees the slot is populated.
202-248: LGTM - YUV420P bind group caching follows a consistent pattern.
The implementation mirrors the NV12 caching logic, maintaining consistency across both YUV formats.
694-716: LGTM - Cache integration in convert_nv12.
The cache is correctly used with allocated_width and allocated_height as the key dimensions, ensuring consistency with the texture allocation tracking.
853-876: LGTM - Cache integration in convert_yuv420p.
Consistent cache usage pattern applied to the YUV420P conversion path.
crates/rendering/src/layers/camera.rs (4)
17-17: LGTM - Semantic improvement from pointer to time-based tracking.
Using last_recording_time is more semantically meaningful than pointer comparison for frame deduplication.
62-68: LGTM - Updated signature aligns with lib.rs changes.
The fourth parameter f32 for recording time is correctly extracted from the tuple and used for frame change detection.
73-77: LGTM - Consistent timing threshold with DisplayLayer.
The 0.001 (1ms) threshold matches the implementation in DisplayLayer::prepare, ensuring uniform frame deduplication behavior across layers.
124-150: LGTM - macOS IOSurface path with graceful fallback.
The macOS-specific path correctly:
- Attempts zero-copy IOSurface conversion first
- Falls back to plane-based NV12 conversion if IOSurface is unavailable
- Uses #[cfg(not(target_os = "macos"))] for non-macOS platforms
crates/rendering/src/decoder/avassetreader.rs (7)
23-28: LGTM - Clean encapsulation of frame data with Arc.
The FrameData struct consolidates ownership of frame bytes with stride metadata, enabling efficient sharing via Arc.
40-66: LGTM - Arc-based constructors used correctly.
The destructuring and use of Arc::clone to increment reference counts is appropriate for shared ownership.
294-297: LGTM - Clean request batching structure.
Simple struct to accumulate pending frame requests for batch processing.
348-348: Verify BACKWARD_SEEK_TOLERANCE is appropriate.
The hardcoded tolerance of 120 frames determines when a seek triggers a decoder reset. At 30fps this is 4 seconds, at 60fps it's 2 seconds. Consider whether this value should be derived from fps or made configurable.
350-351: LGTM - Correct use of saturating_sub.
Good adherence to the coding guidelines to prevent underflow panics. As per coding guidelines, saturating_sub is used instead of - for arithmetic on unsigned types.
455-462: Requests for uncached frames are silently dropped.
After the frame iteration loop completes, any remaining requests for frames not in cache have their senders dropped without a response. The caller will receive a channel error rather than an explicit "frame not found" or fallback frame. Verify this is intentional behavior.
289-289: Remove the unused first_ever_frame variable.
The first_ever_frame variable is declared at line 289 and populated at lines 392-393 but never actually used. Unlike the similar pattern in ffmpeg.rs which reads this value later, here it serves no purpose and should be removed to eliminate dead code.
crates/rendering/src/decoder/mod.rs (6)
210-223: LGTM - Consistent Arc-based constructor for RGBA format.
Follows the same pattern as new() but accepts pre-wrapped Arc<Vec<u8>>.
240-259: LGTM - Arc-based NV12 constructor.
Properly mirrors new_nv12 with Arc-wrapped data.
282-301: LGTM - Arc-based YUV420p constructor.
Properly mirrors new_yuv420p with Arc-wrapped data.
323-341: LGTM - Arc-based IOSurface constructor for macOS.
Combines Arc-wrapped data with IOSurface backing for efficient GPU access.
343-360: LGTM - Zero-copy constructor for IOSurface-backed frames.
Creates a frame with an empty data buffer, relying entirely on iosurface_backing for pixel data. Callers should use iosurface_backing() rather than data() for these frames.
550-550: Verify the impact of reducing FRAME_CACHE_SIZE from 750 to 60.
This is a 12.5x reduction in cache capacity. At 30fps, the cache now holds ~2 seconds of frames instead of ~25 seconds. This may impact seeking smoothness, especially for longer seeks or slower decode paths. Ensure this was tested with real-world playback scenarios.
crates/recording/Cargo.toml (2)
92-93: libproc placement is correct for example code usage. The libproc = "0.14" dependency in [target.'cfg(target_os = "macos")'.dev-dependencies] is appropriate because libproc is used only in crates/recording/examples/memory-leak-detector.rs, which is example code. Dev-dependencies are the standard location for dependencies required by examples, tests, and benchmarks but not part of the production library. No changes needed.
Likely an incorrect or invalid review comment.
65-65: The libc = "0.2" dependency is appropriately specified and will automatically receive security patches. The latest version is 0.2.178 (released December 2, 2025), and this version constraint allows all 0.2.x releases, ensuring automatic access to patch updates. No security vulnerabilities are known for the Rust libc crate.
27-29: LGTM! The memory metric conversions from bytes to megabytes are correct, and using footprint_mb as the primary metric aligns with the PR's goal to improve memory reporting accuracy on macOS.
51-51: LGTM! The use of footprint_mb as the primary metric with resident_mb as a fallback is appropriate. The .unwrap_or() usage follows coding guidelines for simple values.
85-89: LGTM! The baseline output clearly shows both Footprint and RSS metrics, consistent with the new reporting approach.
125-129: LGTM! The timeline row formatting matches the header columns correctly.
136-137: LGTM! The summary labels correctly reflect the shift to Footprint as the primary metric.
258-258: LGTM! The increased sampling frequency (1 second instead of 5 seconds) provides more granular memory tracking, which is beneficial for detecting leaks. This change will produce more frequent output but improves monitoring precision.
269-272: LGTM! The runtime output correctly displays Footprint first, then RSS, consistent with the new metric priority.
341-344: LGTM! The camera-only test output correctly shows both Footprint and RSS metrics along with frame and queue statistics.
21-24: Verify that the libproc dependency is properly declared and the API fields are available.
The libproc crate (version 0.14) is correctly declared in Cargo.toml at line 93. The RUsageInfoV4 struct provides the fields ri_resident_size and ri_phys_footprint used at lines 23-24. The std::process::id() as i32 cast is safe on POSIX systems where PID values remain well below i32::MAX. The code is properly guarded with #[cfg(target_os = "macos")], ensuring it only compiles on macOS where the pidrusage API is available.
crates/recording/src/capture_pipeline.rs (1)
8-9: LGTM! The import and usage of MacOSFragmentedM4SMuxer with MacOSFragmentedM4SMuxerConfig::default() are consistent with the new M4S-based fragmented muxer implementation. The change is a straightforward type substitution for the fragmented capture path.
Also applies to: 86-90
crates/recording/src/output_pipeline/mod.rs (1)
4-5: LGTM! The module consolidation from separate fragmented and macos_segmented_ffmpeg modules into a single macos_fragmented_m4s module is clean. The re-exports are correctly gated with #[cfg(target_os = "macos")].
Also applies to: 10-11
crates/recording/src/studio_recording.rs (3)
14-18: LGTM! The updated imports correctly bring in the new MacOSFragmentedM4SCameraMuxer and MacOSFragmentedM4SCameraMuxerConfig types for the camera path, while retaining the non-fragmented AVFoundationCameraMuxer types.
846-852: Verify the intentional FPS reduction for fragmented mode.
The max_fps is set to 60 for fragmented mode versus 120 for non-fragmented. This may be intentional for performance or file size optimization in the segmented pipeline, but it's a notable behavioral change that could affect recording quality.
885-908: LGTM! The camera pipeline correctly switches between MacOSFragmentedM4SCameraMuxer for fragmented mode and AVFoundationCameraMuxer for non-fragmented mode, consistent with the display pipeline changes.
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (6)
28-75: LGTM! PauseTracker correctly uses checked_sub and checked_add to prevent underflow/overflow panics, returning descriptive errors for timestamp regressions and offset overflows.
77-136: LGTM! FrameDropTracker properly guards against division by zero and provides useful observability with threshold-based warnings.
457-490: LGTM! FramePool properly manages frame reuse. The unwrap() calls are safe: get_frame always ensures the frame exists before returning, and take_frame has an unwrap_or_else fallback.
573-598: Verify that the unsafe slice construction relies on cidre API guarantees.
The plane_data method constructs a slice from a raw pointer using plane_bytes_per_row * plane_height for the length. This assumes the buffer is contiguous with no gaps at row ends. Verify that cidre's plane_base_address and dimension methods guarantee this layout.
492-563: LGTM! The fill_frame_from_sample_buf function correctly handles pixel format conversion for _420V (4:2:0 with proper UV subsampling), _32_BGRA, and _2VUY formats, with appropriate plane copying and error handling for unsupported formats.
434-455: LGTM! The copy_plane_data helper efficiently handles three cases: matching strides with row width, matching source/dest strides, and row-by-row copy for stride mismatches. This avoids unnecessary per-row copies when possible.
crates/enc-ffmpeg/src/mux/segmented_audio.rs (1)
370-390: LGTM - Improved segment duration calculation.
The effective_end_timestamp logic correctly ensures the final duration accounts for the latest frame timestamp, preventing underreporting when the last frame's timestamp exceeds the provided end timestamp. The use of saturating_sub is appropriate.
crates/enc-ffmpeg/src/mux/mod.rs (1)
5-5: LGTM - Module reorganization aligns with the PR objectives.
The replacement of fragmented_mp4 with segmented_stream correctly reflects the shift to the new segmented DASH muxing architecture.
crates/recording/src/recovery.rs (3)
45-49: LGTM - Clean struct for fragment info with init segment.
The FragmentsInfo struct provides a clear abstraction for returning fragments alongside an optional init segment.
300-345: LGTM - Comprehensive fragment validation with init-aware paths.
The validation correctly differentiates between M4S segments (requiring the init segment for decoding) and standard video/media fragments, with appropriate error logging for each failure case.
481-532: LGTM - Init-aware recovery with proper cleanup.
The recovery logic correctly handles:
- Single fragment optimization only when no init segment exists (line 481)
- M4S concatenation with init segment when present (lines 494-514)
- Cleanup of both fragments and init segments after processing
crates/enc-ffmpeg/src/remux.rs (2)
368-398: LGTM - Proper temporary file handling for M4S probing.
The function correctly:
- Creates a temporary combined file from init + segment
- Probes decode capability using existing infrastructure
- Cleans up the temporary file regardless of outcome
400-438: LGTM - Correct M4S concatenation approach.
The function properly:
- Validates all inputs exist before processing
- Concatenates init segment with all media segments at the byte level
- Remuxes to a standard MP4 for broad compatibility
- Cleans up the intermediate combined file
crates/enc-ffmpeg/src/video/h264.rs (3)
123-139: LGTM - Helpful encoder selection logging.
The differentiated logging for hardware vs software encoders provides valuable diagnostic information. Using the error! level for the software encoder fallback appropriately draws attention to the performance impact.
389-417: LGTM - Efficient frame reuse pattern.
The queue_frame_reusable method correctly:
- Lazily allocates the converted frame only when needed via get_or_insert_with
- Reuses the same buffer across multiple calls, reducing allocation pressure
- Maintains the same encoding semantics as queue_frame
This is a valuable optimization for high-frequency encoding pipelines.
178-185: LGTM - Comprehensive encoder configuration logging.
The structured debug logging provides clear visibility into pixel format decisions, which will be valuable for diagnosing encoding issues.
crates/enc-ffmpeg/src/mux/segmented_stream.rs (3)
47-119: LGTM! The struct definitions, manifest types, and error handling hierarchy are well-designed. The use of thiserror with proper From conversions provides good ergonomics for error propagation.
374-422: LGTM! The finish methods handle encoder flush and trailer writing gracefully, logging warnings rather than failing hard. The fallback logic for computing end timestamps is robust. Based on learnings, this approach aligns with the project's pattern of graceful degradation during finalization.
460-538: LGTM! The orphan recovery logic is thorough: it handles incomplete segment files from crashes, skips corrupt tiny files, and provides reasonable duration estimates. Sorting at the end ensures consistent manifest ordering.
crates/recording/src/cursor.rs (2)
164-180: LGTM: Move events correctly gated by position changes.
The logic appropriately generates cursor move events only when the position actually changes, avoiding redundant events. The position transformation chain (relative→normalized→cropped) is correct.
106-106: No issues with this change. The 16ms sleep interval aligns with standard 60fps video frame rates and is appropriate for cursor recording. At this polling frequency (62.5Hz), motion is captured adequately while reducing CPU overhead compared to the previous 10ms (100Hz) interval. Since this is recording-based polling—not real-time display rendering—high-refresh-rate display responsiveness is not a concern.crates/recording/src/output_pipeline/core.rs (1)
485-531: Bounded drain on cancellation looks good; semantics are consistent with the existing pipeline
The new drain_timeout, max_drain_frames, and skipped handling give you a bounded amount of post-cancel mux work while still fully draining the channel, and the logging around drained vs skipped frames should be very helpful when diagnosing stalls. The behaviour around first_tx and muxer errors remains consistent with the pre-change code (errors no longer abort the drain loop, which is preferable here).
No correctness issues stand out in this hunk.
crates/editor/src/audio.rs (1)
146-148: current_playhead accessor matches existing AudioRenderer semantics
elapsed_samples_to_playhead computes playhead time from elapsed_samples, and AudioPlaybackBuffer::current_playhead is a thin, correct wrapper around that. The method itself is fine; any nuances about what "elapsed" means for sync (e.g. including prefilled samples) are addressed in the playback logic that consumes this API.
Also applies to: 280-283
crates/editor/src/playback.rs (2)
33-38: Prefetch and frame-cache tuning plus playback-position tracking look consistent
The new constants and the introduction of playback_position_tx / playback_position_rx give the prefetcher a clear notion of "how far ahead/behind" to decode, with:
- MAX_PREFETCH_AHEAD bounding forward prefetch relative to the current frame.
- PREFETCH_BEHIND and prefetched_behind limiting backward prefetch work.
- FRAME_CACHE_SIZE and PREFETCH_BUFFER_SIZE enforced via pruning logic.
The arithmetic around frame numbers and durations is safe (saturating where needed), and the watch channel wiring is coherent. No correctness issues spotted here.
Also applies to: 123-140, 191-205, 253-305
336-347: Video playhead publishing via watch channel is wired correctly
The audio_playhead_tx / audio_playhead_rx channel is set up cleanly:
- Initial value matches the starting video playhead.
- audio_playhead_tx.send(frame_number as f64 / fps_f64) is called both on normal frame advancement and when you skip ahead to catch up.
- AudioPlayback receives a cloned playhead_rx, so the audio thread stays updated without additional synchronization primitives.
This gives the audio path a simple, monotonic view of the video timeline.
Also applies to: 424-437, 630-655
unsafe {
    let opts = output.as_mut_ptr();

    let set_opt = |key: &str, value: &str| {
        let k = CString::new(key).unwrap();
        let v = CString::new(value).unwrap();
        ffmpeg::ffi::av_opt_set((*opts).priv_data, k.as_ptr(), v.as_ptr(), 0);
    };

    set_opt("init_seg_name", INIT_SEGMENT_NAME);
    set_opt("media_seg_name", "segment_$Number%03d$.m4s");
    set_opt(
        "seg_duration",
        &config.segment_duration.as_secs().to_string(),
    );
    set_opt("use_timeline", "0");
    set_opt("use_template", "1");
    set_opt("single_file", "0");
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find segment_duration definition and usage
rg "segment_duration" -n --type rust -B 2 -A 2Repository: CapSoftware/Cap
Length of output: 22036
Use as_secs_f64() to preserve segment duration precision.
FFmpeg's seg_duration option accepts fractional values, but line 164 uses as_secs().to_string() which truncates sub-second precision. For example, a 2.5-second segment duration would be passed to FFmpeg as "2" instead of "2.5". Use as_secs_f64() instead, which returns the complete duration including fractional nanosecond precision. The codebase already uses as_secs_f64() elsewhere for logging (line 262), so this is a straightforward fix.
🤖 Prompt for AI Agents
In crates/enc-ffmpeg/src/mux/segmented_stream.rs around lines 151 to 169 the
seg_duration is being set using config.segment_duration.as_secs().to_string(),
which drops sub-second precision; change this to use
config.segment_duration.as_secs_f64().to_string() so fractional seconds (e.g.,
2.5) are preserved when passed to FFmpeg, keeping the rest of the CString
creation and av_opt_set calls unchanged.
let mut last_flush = Instant::now();
let flush_interval = Duration::from_secs(CURSOR_FLUSH_INTERVAL_SECS);
let mut last_cursor_id = "default".to_string();
Critical: Invalid initialization of last_cursor_id with incompatible type.
last_cursor_id is initialized to "default", but the type system expects cursor IDs to be stringified u32 values (see Cursor.id: u32 at line 17 and .to_string() conversions at lines 127, 129). If get_cursor_data() returns None before any cursor is captured, cursor events will reference the non-existent "default" cursor ID.
🔎 Proposed fix
Option 1: Use Option<String> and handle None properly:
- let mut last_cursor_id = "default".to_string();
+ let mut last_cursor_id: Option<String> = None;
Then update the logic to handle None:
let cursor_id = if position_changed {
last_position = position;
if let Some(data) = get_cursor_data() {
// ... existing logic ...
last_cursor_id = cursor_id.clone();
+ Some(cursor_id)
- cursor_id
} else {
- last_cursor_id.clone()
+ last_cursor_id.clone()
}
} else {
- last_cursor_id.clone()
+ last_cursor_id.clone()
};
+
+ let Some(cursor_id) = cursor_id else {
+ continue;
+ };Option 2: Initialize from prev_cursors if available, or skip events until first valid cursor is captured.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/recording/src/cursor.rs around line 103, last_cursor_id is incorrectly
initialized to the literal "default" which is incompatible with expected numeric
cursor IDs and can cause events to reference a non-existent cursor; change the
design to either (a) make last_cursor_id an Option<String> (initialize to None)
and update subsequent logic to handle None by skipping/emitting no events until
a real cursor id (u32.to_string()) is captured, or (b) initialize last_cursor_id
from prev_cursors if available (use the most recent cursor id converted to
string) and otherwise skip processing until a valid cursor is obtained — ensure
all comparisons and event emissions use the Option-aware or validated value so
no "default" id is ever emitted.
let cursor_id = if position_changed {
    last_position = position;
    if let Some(data) = get_cursor_data() {
        let mut hasher = DefaultHasher::default();
        data.image.hash(&mut hasher);
        let id = hasher.finish();

        let cursor_id = if let Some(existing_id) = response.cursors.get(&id) {
            existing_id.id.to_string()
        } else {
            let cursor_id = response.next_cursor_id.to_string();
            let file_name = format!("cursor_{cursor_id}.png");
            let cursor_path = cursors_dir.join(&file_name);

            if let Ok(image) = image::load_from_memory(&data.image) {
                let rgba_image = image.into_rgba8();

                if let Err(e) = rgba_image.save(&cursor_path) {
                    error!("Failed to save cursor image: {}", e);
                } else {
                    info!("Saved cursor {cursor_id} image to: {:?}", file_name);
                    response.cursors.insert(
                        id,
                        Cursor {
                            file_name,
                            id: response.next_cursor_id,
                            hotspot: data.hotspot,
                            shape: data.shape,
                        },
                    );
                    response.next_cursor_id += 1;
                }
            }
        }

            cursor_id
        };
        last_cursor_id = cursor_id.clone();
        cursor_id
    } else {
        last_cursor_id.clone()
    }
} else {
    "default".to_string()
    last_cursor_id.clone()
};
Major: Cursor shape changes are missed when the cursor position is stationary.
The logic only calls get_cursor_data() when position_changed is true (line 119). This means cursor shape changes while the mouse is stationary won't be captured. For example, when hovering over a link and the cursor changes from arrow to hand without moving the mouse, the shape change will be missed.
Consider this approach
Add periodic cursor shape checks even when position hasn't changed, or check cursor shape independently of position:
let position = cap_cursor_capture::RawCursorPosition::get();
let position_changed = position != last_position;
let cursor_data = get_cursor_data();
let cursor_id = if let Some(data) = cursor_data {
let mut hasher = DefaultHasher::default();
data.image.hash(&mut hasher);
let id = hasher.finish();
// ... existing deduplication logic ...
} else {
last_cursor_id.clone()
};
if position_changed {
last_position = position;
// ... existing move event logic ...
}
This decouples cursor data capture from position changes.
🤖 Prompt for AI Agents
In crates/recording/src/cursor.rs around lines 119-162, get_cursor_data() is
only invoked when position_changed, so cursor shape changes while the mouse is
stationary are missed; call get_cursor_data() unconditionally each iteration,
compute the image hash/id and run the existing deduplication and save logic
based on that data (inserting into response.cursors and bumping
response.next_cursor_id only for new shapes), update last_cursor_id when data
exists, and keep updating last_position only when position_changed so movement
logic remains unchanged.
let mut hasher = DefaultHasher::default();
data.image.hash(&mut hasher);
let id = hasher.finish();
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for how prev_cursors is used and where response.cursors comes from
cd /tmp
find . -name "cursor.rs" -type f 2>/dev/null | head -5
Repository: CapSoftware/Cap
Length of output: 41
🏁 Script executed:
# Look for the cursor.rs file and examine the relevant lines
find . -name "cursor.rs" -type f
Repository: CapSoftware/Cap
Length of output: 162
🏁 Script executed:
# Search for prev_cursors and response.cursors usage patterns
rg -n "prev_cursors|response\.cursors" --type rust -B 5 -A 5Repository: CapSoftware/Cap
Length of output: 5122
🏁 Script executed:
# Check the full context around lines 95-124 in cursor.rs
fd cursor.rs --type f | head -1 | xargs sed -n '85,135p'Repository: CapSoftware/Cap
Length of output: 1533
Use a deterministic hash function for cursor deduplication; DefaultHasher produces different hash values for the same image within a single recording session.
DefaultHasher is seeded with random keys per instance, so the same cursor image will produce different hashes each time it's hashed. Since response.cursors is initialized with prev_cursors (line 95) and accumulates cursors across segments within a single recording session, identical cursor images appearing in different segments will not be recognized as duplicates. This causes redundant cursor files to be saved during a single recording. Use a deterministic hash function like std::collections::hash_map::DefaultHasher with a fixed seed, or a cryptographic hash (e.g., sha2), to ensure cursor deduplication works correctly.
🤖 Prompt for AI Agents
In crates/recording/src/cursor.rs around lines 122-124, replace the use of
DefaultHasher (which is non-deterministic per process) with a deterministic
hash: compute a stable digest of data.image (for example using the sha2 crate's
Sha256 over the image bytes) and derive the cursor id from that digest (e.g.,
take the first 8 bytes and interpret as a u64 or store the full hex digest) so
identical images across segments yield the same id; update imports accordingly
and ensure the chosen conversion is consistent and documented.
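A minimal sketch of the digest-based approach the prompt describes, assuming the `sha2` crate is added as a dependency; `stable_cursor_id` is an illustrative helper name, not an existing function in the crate:

```rust
use sha2::{Digest, Sha256};

// Hashes the raw cursor image bytes with SHA-256 and folds the first 8 bytes
// into a u64, so identical images produce identical ids across segments.
fn stable_cursor_id(image_bytes: &[u8]) -> u64 {
    let digest = Sha256::digest(image_bytes);
    u64::from_le_bytes(digest[..8].try_into().expect("SHA-256 digest has 32 bytes"))
}
```

With something like this, `let id = hasher.finish();` in the snippet above becomes `let id = stable_cursor_id(&data.image);`, and the rest of the deduplication flow stays unchanged.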
```rust
let pixel_format = match video_config.pixel_format {
    cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12,
    cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA,
    cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422,
    _ => ffmpeg::format::Pixel::NV12,
};
```
Silent fallback to NV12 for unknown pixel formats risks corrupted output.
When video_config.pixel_format doesn't match known variants, the code silently defaults to NV12. This could cause frame data misinterpretation and video corruption. Consider returning an error or at least logging a warning.
🔎 Suggested improvement
```diff
 let pixel_format = match video_config.pixel_format {
     cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12,
     cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA,
     cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422,
-    _ => ffmpeg::format::Pixel::NV12,
+    other => {
+        warn!("Unsupported pixel format {:?}, defaulting to NV12", other);
+        ffmpeg::format::Pixel::NV12
+    }
 };
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let pixel_format = match video_config.pixel_format {
    cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12,
    cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA,
    cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422,
    other => {
        tracing::warn!("Unsupported pixel format {:?}, defaulting to NV12", other);
        ffmpeg::format::Pixel::NV12
    }
};
```
🤖 Prompt for AI Agents
In crates/recording/src/output_pipeline/macos_fragmented_m4s.rs around lines 278
to 283, the match on video_config.pixel_format currently falls through to NV12
for unknown variants which can corrupt output; update it to handle unknown
formats explicitly by returning an Err (or propagating a suitable error) or at
minimum logging a warning and failing the encoding path. Replace the silent
default arm with logic that maps known variants to ffmpeg pixels and for _
returns an Err with a clear message (or calls a function to convert the pixel
format that returns Result), and update callers to propagate/handle that Result
so encoding is aborted on unsupported formats.
```rust
if let Ok(mut encoder) = encoder_clone.lock() {
    if let Err(e) = encoder.queue_frame(owned_frame, timestamp) {
        warn!("Failed to encode frame: {e}");
    }
}
```
Silently ignoring poisoned mutex may hide encoder panics.
The if let Ok(mut encoder) = encoder_clone.lock() pattern silently drops frames when the mutex is poisoned. If the encoder panics, this becomes a silent failure. Consider logging when the lock fails.
🔎 Suggested improvement
```diff
-        if let Ok(mut encoder) = encoder_clone.lock() {
-            if let Err(e) = encoder.queue_frame(owned_frame, timestamp) {
-                warn!("Failed to encode frame: {e}");
-            }
+        match encoder_clone.lock() {
+            Ok(mut encoder) => {
+                if let Err(e) = encoder.queue_frame(owned_frame, timestamp) {
+                    warn!("Failed to encode frame: {e}");
+                }
+            }
+            Err(e) => {
+                warn!("Encoder mutex poisoned, dropping frame: {e}");
+            }
         }
```
🤖 Prompt for AI Agents
In crates/recording/src/output_pipeline/macos_fragmented_m4s.rs around lines
320-324, the current `if let Ok(mut encoder) = encoder_clone.lock()` silently
ignores a poisoned mutex and drops frames when the encoder panics; change this
to explicitly handle the `Err` case: match on `encoder_clone.lock()`, on `Ok(mut
encoder)` call `encoder.queue_frame(owned_frame, timestamp)` as before, and on
`Err(poisoned)` log a warning including the poison error message and then
recover the inner guard with `poisoned.into_inner()` (or, if recovery is
undesirable, log and return/skip) so that poisoned mutexes are not silently
ignored and you retain diagnostic information.
Actionable comments posted: 0
♻️ Duplicate comments (3)
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (3)
219-224: Configpresetandoutput_sizefields are still ignored.Both
start_encoderimplementations hardcodeH264Preset::Ultrafastandoutput_size: Noneinstead of usingself.presetandself.output_sizefrom the config. The config fieldspresetandoutput_sizeare defined but never used.🔎 Suggested fix
Store the config values and use them:
let encoder_config = SegmentedVideoEncoderConfig { segment_duration: self.segment_duration, - preset: H264Preset::Ultrafast, + preset: self.preset, bpp: H264EncoderBuilder::QUALITY_BPP, - output_size: None, + output_size: self.output_size, };This requires storing
presetandoutput_sizein the muxer struct duringsetup.Also applies to: 682-687
235-240: Silent fallback to NV12 for unknown pixel formats.Both encoder threads silently default to
NV12for unrecognized pixel formats. This could cause frame data misinterpretation and corrupted video output.🔎 Suggested improvement
let pixel_format = match video_config.pixel_format { cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12, cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA, cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422, - _ => ffmpeg::format::Pixel::NV12, + other => { + warn!("Unsupported pixel format {:?}, defaulting to NV12", other); + ffmpeg::format::Pixel::NV12 + } };Also applies to: 698-703
277-281: Silently ignoring poisoned mutex may hide encoder panics.The
if let Ok(mut encoder) = encoder_clone.lock()pattern silently drops frames when the mutex is poisoned. If the encoder panics, this becomes a silent failure with no diagnostic output.🔎 Suggested improvement
- if let Ok(mut encoder) = encoder_clone.lock() { - if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { - warn!("Failed to encode frame: {e}"); - } + match encoder_clone.lock() { + Ok(mut encoder) => { + if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { + warn!("Failed to encode frame: {e}"); + } + } + Err(e) => { + warn!("Encoder mutex poisoned, dropping frame: {e}"); + } }Also applies to: 742-746
🧹 Nitpick comments (6)
crates/timestamp/src/macos.rs (1)
42-53: LGTM with optional optimization suggestion.The implementation correctly handles signed time differences by branching on the comparison to determine sign, which is necessary given the u64 internal representation. The logic is sound.
One optional optimization:
TimeBaseInfo::new()is called in every method (lines 23, 34, 43, 66, 77). If these timestamp methods are called frequently, consider caching the TimeBaseInfo to avoid repeated system calls.Optional: Cache TimeBaseInfo
If performance profiling shows this as a hot path, you could cache the frequency calculation:
use std::sync::OnceLock; static TIME_BASE_FREQ: OnceLock<f64> = OnceLock::new(); fn time_base_freq() -> f64 { *TIME_BASE_FREQ.get_or_init(|| { let info = TimeBaseInfo::new(); info.numer as f64 / info.denom as f64 }) }Then replace
TimeBaseInfo::new()calls withtime_base_freq().crates/recording/src/studio_recording.rs (1)
843-844: Consider makingmax_fpsconfigurable or documenting the rationale.The hardcoded values (60 for fragmented, 120 otherwise) may not be optimal for all hardware configurations. Consider adding this as a configurable option or at minimum documenting why these specific values were chosen.
crates/recording/src/output_pipeline/core.rs (1)
28-87: Consider consolidating with WindowsPauseTrackerimplementation.The
SharedPauseStatelogic here closely mirrors thePauseTrackerincrates/recording/src/output_pipeline/win.rs(lines 32-71 in the relevant snippets). Both implement identical timestamp adjustment logic withpaused_at,offset, and the same error messages. Consider extracting a shared implementation to reduce duplication and ensure consistent behavior across platforms.crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (3)
21-26: Environment variable parsing could fail silently for invalid values.If
CAP_MUXER_BUFFER_SIZEcontains a non-numeric value,parse().ok()silently falls back to the default. Consider logging when an invalid value is provided to help with debugging configuration issues.🔎 Suggested improvement
fn get_muxer_buffer_size() -> usize { - std::env::var("CAP_MUXER_BUFFER_SIZE") - .ok() - .and_then(|s| s.parse().ok()) - .unwrap_or(3) + match std::env::var("CAP_MUXER_BUFFER_SIZE") { + Ok(s) => match s.parse() { + Ok(v) => v, + Err(_) => { + warn!("Invalid CAP_MUXER_BUFFER_SIZE value '{}', using default 3", s); + 3 + } + }, + Err(_) => 3, + } }
95-103: Significant code duplication betweenMacOSFragmentedM4SMuxerandMacOSFragmentedM4SCameraMuxer.The two muxer implementations are nearly identical, differing only in:
- Type name and log messages
VideoFrametype (screen_capture::VideoFramevsNativeCameraFrame)Consider extracting the common logic into a generic implementation or shared helper to reduce maintenance burden and ensure consistent behavior.
Also applies to: 557-565
166-204: Thread join timeout loop could be simplified.The polling loop with
is_finished()+sleep(50ms)is functional but could be cleaner. Consider using a dedicated thread join with timeout pattern or documenting why polling is preferred.Also applies to: 629-667
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
crates/recording/src/capture_pipeline.rscrates/recording/src/output_pipeline/core.rscrates/recording/src/output_pipeline/ffmpeg.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rscrates/recording/src/recovery.rscrates/recording/src/studio_recording.rscrates/timestamp/src/lib.rscrates/timestamp/src/macos.rscrates/timestamp/src/win.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
crates/timestamp/src/lib.rscrates/timestamp/src/win.rscrates/recording/src/output_pipeline/ffmpeg.rscrates/recording/src/recovery.rscrates/timestamp/src/macos.rscrates/recording/src/studio_recording.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rscrates/recording/src/capture_pipeline.rscrates/recording/src/output_pipeline/core.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/timestamp/src/lib.rscrates/timestamp/src/win.rscrates/recording/src/output_pipeline/ffmpeg.rscrates/recording/src/recovery.rscrates/timestamp/src/macos.rscrates/recording/src/studio_recording.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rscrates/recording/src/capture_pipeline.rscrates/recording/src/output_pipeline/core.rs
🧠 Learnings (6)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/*.rs : Use `saturating_sub` instead of `-` for `Duration` to avoid panics (Clippy: `unchecked_duration_subtraction` = deny)
Applied to files:
crates/timestamp/src/lib.rs
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to **/*.rs : Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Applied to files:
crates/timestamp/src/win.rscrates/timestamp/src/macos.rs
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/recording/src/output_pipeline/ffmpeg.rscrates/recording/src/studio_recording.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rscrates/recording/src/capture_pipeline.rscrates/recording/src/output_pipeline/core.rs
📚 Learning: 2025-10-17T05:58:22.586Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Applied to files:
crates/recording/src/output_pipeline/ffmpeg.rscrates/recording/src/recovery.rscrates/recording/src/studio_recording.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rscrates/recording/src/capture_pipeline.rscrates/recording/src/output_pipeline/core.rs
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/*.rs : Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Applied to files:
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs
🧬 Code graph analysis (5)
crates/timestamp/src/lib.rs (2)
crates/timestamp/src/macos.rs (1)
signed_duration_since_secs(42-53)crates/timestamp/src/win.rs (1)
signed_duration_since_secs(68-72)
crates/recording/src/recovery.rs (1)
crates/enc-ffmpeg/src/remux.rs (8)
concatenate_audio_to_ogg(183-211)concatenate_m4s_segments_with_init(400-438)concatenate_video_fragments(34-62)get_media_duration(349-356)get_video_fps(358-366)probe_m4s_can_decode_with_init(368-398)probe_media_valid(272-274)probe_video_can_decode(276-347)
crates/timestamp/src/macos.rs (2)
crates/timestamp/src/lib.rs (1)
signed_duration_since_secs(48-71)crates/timestamp/src/win.rs (1)
signed_duration_since_secs(68-72)
crates/recording/src/capture_pipeline.rs (2)
crates/recording/src/output_pipeline/ffmpeg.rs (1)
default(211-216)crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (2)
default(113-120)default(575-582)
crates/recording/src/output_pipeline/core.rs (1)
crates/recording/src/output_pipeline/win.rs (3)
new(33-39)adjust(41-72)timestamp(437-439)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (17)
crates/timestamp/src/win.rs (1)
68-72: LGTM!Clean implementation that leverages f64 arithmetic to naturally handle both positive and negative time differences. The use of the cached
perf_freq()helper is efficient.crates/timestamp/src/lib.rs (1)
48-71: LGTM!The implementation properly handles signed duration calculation across all timestamp variants:
Instant: Useschecked_duration_sincewith fallback to reversed calculation for negative differences.SystemTime: Leverages the error case to extract backward duration.- Platform-specific types: Correctly delegates to their respective implementations.
The approach is consistent with existing duration methods and handles both forward and backward time differences correctly.
crates/recording/src/studio_recording.rs (2)
629-630: Verify the switch tosigned_duration_since_secs.The change from
duration_sincetosigned_duration_since_secsallows negative durations. Ensure this is intentional and that downstream consumers ofstart_timehandle negative values correctly, as this could produce negative start times if timestamps are out of order.
870-880: LGTM on shared pause state initialization.The platform-specific handling is appropriate: macOS gets a properly initialized
SharedPauseStatefor fragmented mode while Windows receivesNone. TheAtomicBoolinitialization withfalsecorrectly indicates recording is not paused at start.crates/recording/src/capture_pipeline.rs (2)
48-58: LGTM on trait signature update.The
MakeCapturePipelinetrait correctly addsshared_pause_state: Option<SharedPauseState>parameter, enabling coordinated pause handling across platform implementations.
92-95: LGTM on macOS fragmented pipeline construction.The config properly passes
shared_pause_stateand uses struct update syntax with..Default::default()to maintain forward compatibility.crates/recording/src/output_pipeline/ffmpeg.rs (2)
200-217: LGTM onSegmentedAudioMuxerrefactoring.The transition from tuple struct to named fields improves readability. The config struct with
shared_pause_statefollows the same pattern as the video muxers, maintaining consistency.
256-269: LGTM on pause-aware audio frame handling.The logic correctly:
- Applies timestamp adjustment when pause state exists
- Skips frames (returns
Ok(())) when paused- Uses original timestamp when no pause state is configured
crates/recording/src/output_pipeline/core.rs (2)
50-54: Mutex poison handling returns error but recording continues.When
lock()fails due to poisoning, the error propagates up. This is reasonable for correctness, but consider whether the recording should attempt recovery (e.g., by resetting the pause state) rather than failing the frame entirely.
550-590: LGTM on enhanced drain logic.The improvements add important safeguards:
- 2-second timeout prevents indefinite blocking
- 30-frame cap limits resource usage
- Separate `skipped` counter provides visibility into dropped frames
- Logging includes both drained and skipped counts with elapsed time
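A minimal sketch of the bounded drain pattern these safeguards describe, with illustrative names and a plain `mpsc` receiver standing in for the pipeline's actual frame channel:

```rust
use std::sync::mpsc::Receiver;
use std::time::{Duration, Instant};

// Drains at most `max_frames` pending frames, counts the rest as skipped,
// and gives up after `timeout` so shutdown can never block indefinitely.
fn drain_pending<T>(rx: &Receiver<T>, max_frames: usize, timeout: Duration) -> (Vec<T>, usize) {
    let start = Instant::now();
    let mut drained = Vec::new();
    let mut skipped = 0;
    while start.elapsed() < timeout {
        match rx.try_recv() {
            Ok(frame) if drained.len() < max_frames => drained.push(frame),
            Ok(_) => skipped += 1,
            Err(_) => break,
        }
    }
    (drained, skipped)
}
```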
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (2)
449-520: LGTM onfill_frame_from_sample_bufimplementation.The function properly:
- Handles multiple pixel formats (
_420V,_32_BGRA,_2VUY)- Uses RAII-based
BaseAddrLockGuardfor safe memory access- Returns typed errors via
SampleBufConversionError- Correctly calculates plane dimensions for each format
530-555: LGTM onBaseAddrLockGuardRAII pattern.The guard correctly implements the lock/unlock pattern with
Dropto ensure the base address is always unlocked, preventing resource leaks even on early returns or panics.crates/recording/src/recovery.rs (5)
31-31: LGTM: Clean structural additions for init segment support.The new optional init segment fields and the
FragmentsInfohelper struct provide a clear foundation for handling M4S fragments with separate initialization segments.Also applies to: 33-33, 45-49
364-368: LGTM: M4S extension handling added correctly.The
is_video_filefunction and probe logic now correctly recognize.m4sfiles as video-capable media.Also applies to: 384-384
481-532: LGTM: Correct concatenation logic for M4S segments with init support.The recovery logic properly handles three cases:
- Single fragment without init → direct move
- Multiple fragments or init present → concatenate with appropriate function
- Cleanup of both fragments and init segments after processing
The branching at lines 494-505 (display) and 549-556 (camera) correctly uses
concatenate_m4s_segments_with_initwhen an init segment exists, with proper fallback to standard concatenation.Also applies to: 536-583
769-839: LGTM: Metadata propagation preserves critical timing information.The enhanced metadata building correctly extracts and propagates
start_timeandfpsfrom original segments when available. This is important for maintaining audio-video synchronization after recovery.The optional chaining at lines 797, 802-808, 816-818, and 826-828 gracefully handles cases where original segment metadata is unavailable, falling back to reasonable defaults.
224-362: Comprehensive M4S fragment discovery with proper validation paths.The refactored
find_complete_fragments_with_initcorrectly handles multiple manifest types and validates M4S segments with/without init segments. The branching logic at lines 300-339 properly usesprobe_m4s_can_decode_with_initwhen an init segment exists, falling back toprobe_video_can_decodeotherwise. The max version of 4 form4s_segmentsaligns with the manifest format created bysegmented_stream.rs, and the probe function correctly combines init and segment data for validation.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/rendering/src/layers/camera.rs (1)
58-64: Fix call sites to wrap uniforms in Option.
The `prepare` method signature now expects `uniforms: Option<CompositeVideoFrameUniforms>`, but callers are passing `uniforms.camera` unwrapped. Update both call sites in `lib.rs` (lines 1694 and 1706) to wrap the uniforms in `Some()`:
```rust
self.camera.prepare(
    &constants.device,
    &constants.queue,
    Some(uniforms.camera),
    constants.options.camera_size.and_then(|size| {
        segment_frames
            .camera_frame
            .as_ref()
            .map(|frame| (size, frame, segment_frames.recording_time))
    }),
);
```
Apply the same fix to the `camera_only.prepare()` call.
🧹 Nitpick comments (5)
crates/editor/examples/decode-benchmark.rs (4)
42-51: Don't use.cloned()on Copy types.The code calls
.cloned()on iterators overf64values, which implementCopy. Use.copied()instead, or simply dereference in the closure.🔎 Proposed fix
let min = self .sequential_decode_times_ms .iter() - .cloned() + .copied() .fold(f64::INFINITY, f64::min); let max = self .sequential_decode_times_ms .iter() - .cloned() + .copied() .fold(f64::NEG_INFINITY, f64::max);Apply the same change to the random access section:
let min = self .random_access_times_ms .iter() - .cloned() + .copied() .fold(f64::INFINITY, f64::min); let max = self .random_access_times_ms .iter() - .cloned() + .copied() .fold(f64::NEG_INFINITY, f64::max);Based on coding guidelines: don't call
.clone()on Copy types; copy them directly.Also applies to: 77-86
125-133: Consider handling NaN values to avoid potential panics.Line 130 uses
.unwrap()onpartial_cmp, which will panic if the data contains NaN values. For a benchmark example this may be acceptable, but consider usingsort_by(|a, b| a.total_cmp(b))(available for f64) for more robust sorting, or document the assumption that inputs are non-NaN.
135-157: Refactor parameter type and error handling.
- Accept
&Pathinstead of&PathBuffor more flexibility (line 135).- Using
-1.0as an error sentinel (line 150) is not idiomatic Rust—consider returningResult<f64, String>.- Decoder creation failures after the first iteration (lines 147-152) are silently ignored, which could skew the average. Consider logging all failures or tracking success count.
🔎 Proposed fix for parameter type
-async fn benchmark_decoder_creation(path: &PathBuf, fps: u32, iterations: usize) -> f64 { +async fn benchmark_decoder_creation(path: &Path, fps: u32, iterations: usize) -> f64 {Callers can pass
&config.video_pathdirectly sincePathBufderefs toPath.Based on coding guidelines: accept
&[T]or&strinstead of&Vec<T>or&String(same principle applies toPath/PathBuf).
159-195: Consider handling frame retrieval results.Lines 171, 188, and 191 ignore the results of
get_frame()calls withlet _frame = .... Ifget_framereturns aResultorOption, failed retrievals could skew timing measurements. Consider checking for errors or at least logging failures to ensure benchmark validity.Additionally, the
_fpsparameter inbenchmark_seek(line 184) is unused. If it's not needed, remove it for clarity; otherwise, document why it's present.crates/rendering/src/layers/camera.rs (1)
133-159: Consider extracting NV12 conversion to a helper method for clarity.The macOS-specific iosurface handling with software fallback is functionally correct but involves complex nested conditionals. Extracting this into a helper method would improve readability.
💡 Optional refactor to improve readability
Consider extracting the conversion logic:
fn try_convert_nv12( &mut self, device: &wgpu::Device, queue: &wgpu::Queue, camera_frame: &DecodedFrame, frame_size: XY<u32>, ) -> bool { #[cfg(target_os = "macos")] if let Some(Ok(_)) = camera_frame.iosurface_backing() .map(|buf| self.yuv_converter.convert_nv12_from_iosurface(device, queue, buf)) { return true; } if let (Some(y_data), Some(uv_data)) = (camera_frame.y_plane(), camera_frame.uv_plane()) { self.yuv_converter .convert_nv12( device, queue, y_data, uv_data, frame_size.x, frame_size.y, camera_frame.y_stride(), camera_frame.uv_stride(), ) .is_ok() } else { false } }Then in the main match arm:
PixelFormat::Nv12 => { if self.try_convert_nv12(device, queue, camera_frame, frame_size) { self.copy_from_yuv_output(device, queue, next_texture, frame_size); } }
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
crates/editor/Cargo.tomlcrates/editor/examples/decode-benchmark.rscrates/rendering/src/layers/camera.rscrates/rendering/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/rendering/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
crates/editor/examples/decode-benchmark.rscrates/rendering/src/layers/camera.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/editor/examples/decode-benchmark.rscrates/rendering/src/layers/camera.rs
🧠 Learnings (3)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/*.rs : Use `rustfmt` and workspace clippy lints for Rust code formatting and linting
Applied to files:
crates/editor/Cargo.toml
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/rendering/src/layers/camera.rs
🧬 Code graph analysis (1)
crates/editor/examples/decode-benchmark.rs (1)
crates/rendering/src/decoder/mod.rs (1)
spawn_decoder(601-763)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (6)
crates/editor/Cargo.toml (1)
9-11: LGTM!The example target is properly configured and aligns with the new benchmark file.
crates/rendering/src/layers/camera.rs (5)
17-17: LGTM! Time-based tracking is cleaner than pointer-based.The refactor from
last_frame_ptr: usizetolast_recording_time: Option<f32>improves frame deduplication by using timestamps instead of memory addresses, which is more reliable and semantically clearer.Also applies to: 53-53
65-71: LGTM! Hidden state logic correctly handles frame persistence.The logic properly hides the camera only when there's no uniforms, or when there's neither new frame data nor a previous frame to display. This allows the last frame to persist when new data isn't available.
73-73: Confirm that early uniforms write is intentional.The uniforms buffer is now written before checking if
frame_dataexists. This means camera positioning/scaling uniforms update even when there's no new frame data. Verify this is the desired behavior to allow independent uniform updates.
82-85: LGTM! Time-based deduplication with appropriate tolerance.The 0.001-second tolerance is suitable for detecting duplicate frame timestamps while accounting for floating-point precision. This won't incorrectly skip distinct frames even at high frame rates (60fps ≈ 16.7ms between frames).
206-206: LGTM! Completes time-based deduplication refactor.Correctly updates
last_recording_timeafter processing a new frame, completing the transition from pointer-based to timestamp-based frame tracking.
```rust
cache_hits: usize,
cache_misses: usize,
```
Unused fields lead to misleading output.
The cache_hits and cache_misses fields are never populated anywhere in the code, so they remain at their default value of 0. The cache statistics section in print_report() (lines 106-119) will always display "0 (0.0%)".
Either populate these fields with actual cache metrics from the decoder or remove them from the struct and report.
🤖 Prompt for AI Agents
In crates/editor/examples/decode-benchmark.rs around lines 21-22, the struct
defines cache_hits and cache_misses but these fields are never updated,
producing misleading "0 (0.0%)" output; either remove these fields and strip the
cache statistics block from print_report(), or update the benchmark to collect
real metrics from the decoder (increment cache_hits/cache_misses where cache
lookups occur or read counters from the decoder API) and ensure print_report()
uses those populated values; pick one approach and update the struct, reporting
logic, and any decoder call sites accordingly so reported cache numbers reflect
real data or are omitted entirely.
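A minimal sketch of the first option (populating the counters), with hypothetical field and method names since the benchmark's cache-lookup site isn't shown here:

```rust
#[derive(Default)]
struct CacheStats {
    cache_hits: usize,
    cache_misses: usize,
}

impl CacheStats {
    // Call at every cache lookup so the report reflects real counts
    // instead of the default zeros.
    fn record_lookup(&mut self, hit: bool) {
        if hit {
            self.cache_hits += 1;
        } else {
            self.cache_misses += 1;
        }
    }

    fn hit_rate_percent(&self) -> f64 {
        let total = self.cache_hits + self.cache_misses;
        if total == 0 {
            0.0
        } else {
            self.cache_hits as f64 * 100.0 / total as f64
        }
    }
}
```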
Actionable comments posted: 1
🧹 Nitpick comments (4)
crates/rendering/src/decoder/mod.rs (1)
573-582: LGTM: Timeout prevents indefinite blocking.The 500ms timeout is a good defensive measure to prevent the UI from hanging if the decoder stalls. The debug logging provides useful diagnostics.
Optional: Extract timeout as named constant
Consider extracting the timeout value to a named constant for easier tuning and maintainability:
const FRAME_REQUEST_TIMEOUT_MS: u64 = 500; match tokio::time::timeout(std::time::Duration::from_millis(FRAME_REQUEST_TIMEOUT_MS), rx).await { Ok(result) => result.ok(), Err(_) => { debug!( adjusted_time = adjusted_time, "get_frame timed out after {}ms", FRAME_REQUEST_TIMEOUT_MS ); None } }crates/editor/src/playback.rs (1)
954-954: Consider usingtracing::error!for consistency.The audio error callback uses
eprintln!while the rest of the codebase usestracingfor logging. However, this may be intentional if tracing is not safe to call from the audio callback context.crates/video-decode/src/avassetreader.rs (2)
19-19: Accept&Pathinstead of&PathBuffor flexibility.Rust best practices recommend accepting
&Pathinstead of&PathBufin function parameters for greater flexibility. Callers can passPath,PathBuf, or string types that can be converted toPath.🔎 Proposed fix
- pub fn build(path: &PathBuf) -> Result<Self, String> { + pub fn build(path: &Path) -> Result<Self, String> {As per coding guidelines, accept
&[T]or&strinstead of&Vec<T>or&Stringin function parameters.
22-44: Reuse the input context instead of reopening the file.
The function opens the video file twice: once at line 22 to extract stream metadata, then again at line 43 to iterate packets. After extracting `stream_index`, `time_base`, and `fps` from the first input, it's immediately abandoned. Since FFmpeg's packet iterator does not consume the input context, you can reuse the same input for both operations and eliminate the I/O overhead of reopening.
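A minimal sketch of the single-open approach, assuming the crate is imported as `ffmpeg` (ffmpeg-next) and using an illustrative function name rather than the file's actual API:

```rust
use std::path::Path;

fn scan_keyframe_pts(path: &Path) -> Result<Vec<i64>, String> {
    // One input context serves both the metadata lookup and the packet scan.
    let mut input = ffmpeg::format::input(&path)
        .map_err(|e| format!("Failed to open video for keyframe scan: {e}"))?;

    let stream_index = input
        .streams()
        .best(ffmpeg::media::Type::Video)
        .ok_or("Could not find a video stream")?
        .index();

    let mut keyframe_pts = Vec::new();
    for (stream, packet) in input.packets() {
        if stream.index() == stream_index && packet.is_key() {
            if let Some(pts) = packet.pts() {
                keyframe_pts.push(pts);
            }
        }
    }
    Ok(keyframe_pts)
}
```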
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
.claude/settings.local.jsoncrates/editor/src/playback.rscrates/rendering/src/decoder/avassetreader.rscrates/rendering/src/decoder/mod.rscrates/rendering/src/lib.rscrates/video-decode/src/avassetreader.rscrates/video-decode/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
crates/video-decode/src/avassetreader.rscrates/editor/src/playback.rscrates/video-decode/src/lib.rscrates/rendering/src/decoder/mod.rscrates/rendering/src/lib.rscrates/rendering/src/decoder/avassetreader.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/video-decode/src/avassetreader.rscrates/editor/src/playback.rscrates/video-decode/src/lib.rscrates/rendering/src/decoder/mod.rscrates/rendering/src/lib.rscrates/rendering/src/decoder/avassetreader.rs
🧠 Learnings (2)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/editor/src/playback.rscrates/video-decode/src/lib.rscrates/rendering/src/decoder/avassetreader.rs
🧬 Code graph analysis (2)
crates/editor/src/playback.rs (6)
crates/video-decode/src/avassetreader.rs (1)
fps(90-92)crates/media-info/src/lib.rs (1)
fps(265-267)crates/recording/src/sources/screen_capture/mod.rs (1)
fps(275-277)crates/rendering/src/project_recordings.rs (1)
fps(50-52)crates/editor/src/segments.rs (1)
get_audio_segments(6-37)crates/audio/src/latency.rs (1)
initial_compensation_secs(141-143)
crates/rendering/src/decoder/avassetreader.rs (3)
crates/rendering/src/decoder/ffmpeg.rs (4)
to_decoded_frame(34-52)None(187-187)None(189-189)None(190-190)crates/rendering/src/decoder/mod.rs (11)
new_with_arc(210-223)data(454-456)new_nv12_zero_copy(344-360)y_stride(532-534)uv_stride(536-538)new(87-89)new(111-118)new(195-208)width(458-460)height(462-464)format(466-468)crates/video-decode/src/avassetreader.rs (4)
width(273-275)height(277-279)pixel_format_to_pixel(312-319)frames(267-271)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Clippy (aarch64-apple-darwin, macos-latest)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (27)
.claude/settings.local.json (1)
53-54: LGTM! Permission aligns with new benchmarking workflow.The added permission for the
decode-benchmarkbinary supports the performance optimization work in this PR. The wildcard pattern allows flexible argument passing, which is appropriate for a local development benchmarking tool.crates/rendering/src/decoder/mod.rs (6)
210-223: LGTM: Arc-backed constructor enables efficient data sharing.The
new_with_arcconstructor allows callers to pass pre-wrapped Arc data, avoiding unnecessary cloning or double-wrapping. This is beneficial for performance-sensitive frame handling.
240-259: LGTM: Consistent Arc pattern for NV12 frames.The implementation correctly follows the Arc-backed pattern for NV12 pixel format with appropriate stride parameters.
282-301: LGTM: Completes Arc-backed constructors for all formats.The YUV420P Arc constructor maintains consistency with the other pixel format constructors.
323-341: LGTM: macOS IOSurface constructor with Arc support.The Arc variant correctly combines pre-wrapped data with IOSurface backing for macOS hardware acceleration paths.
343-360: LGTM: Zero-copy constructor for macOS hardware paths.This constructor correctly implements zero-copy semantics by creating an empty data buffer and relying entirely on IOSurface backing. The function name clearly communicates this behavior, and callers can access frame data via
iosurface_backing()instead ofdata().
550-550: Verify the 80% frame cache reduction doesn't impact performance.
FRAME_CACHE_SIZEhas been reduced from 750 to 150 frames. While this significantly reduces memory usage, it may increase cache misses during playback, especially for scrubbing or non-linear playback patterns.Ensure this reduction has been validated against typical editor usage patterns (seeking, scrubbing, frame-by-frame navigation) and doesn't introduce performance regressions.
crates/rendering/src/lib.rs (3)
156-162: LGTM — correct per-segment camera FPS retrieval.This fix properly retrieves the camera FPS from the specific segment being decoded (
inner.segments[segment_i]) rather than potentially using a different segment's metadata. The pattern is consistent with theSingleSegmentcase on lines 157-159.
1706-1716: LGTM — consistent recording_time handling for camera_only layer.The same pattern is correctly applied to the
camera_onlylayer preparation, maintaining consistency with the camera layer changes.
1694-1704: LGTM — recording_time context added for camera layer.The addition of
segment_frames.recording_timeto the tuple enables timing-aware rendering. TheCameraLayer::preparemethod uses this value to detect frame duplication by comparing againstlast_recording_time, preventing redundant texture uploads for the same frame time.crates/editor/src/playback.rs (6)
33-37: LGTM!The adjusted prefetch and cache constants appear reasonable for the playback pipeline optimization.
336-347: LGTM!The watch channel is well-suited for signaling the latest video playhead to the audio thread, as the audio only needs the most recent value rather than every intermediate update.
359-360: Aggressive warmup parameters may cause initial stutter.Reducing
warmup_target_framesto 1 and timeout to 16ms means playback starts almost immediately. If the first frame decode is slow (e.g., on a cold start or with a complex segment), this could cause a brief stutter. Consider whether this trade-off is acceptable for all use cases, especially on lower-end hardware.
632-634: LGTM!Sending the playhead update after frame progression correctly synchronizes the audio with the current video position. Ignoring the send result is acceptable since the audio receiver may have been dropped if playback ended.
679-686: LGTM!The
playhead_rxfield is correctly added to propagate video playhead updates to the audio stream.
925-948: Audio-video synchronization approach is sound.The drift-based correction design effectively handles both gradual drift and sudden jumps:
- Detects drift when exceeded (0.15s threshold)
- Applies latency compensation when resyncing
- Uses non-blocking `watch` channel operations
The `set_playhead()` call within the audio callback is safe: it's only invoked when drift is detected (infrequent), and the underlying operations (`resampler.reset()`, `resampled_buffer.clear()`) are lock-free O(1) operations suitable for real-time contexts.
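A simplified sketch of that drift check; the 0.15 s threshold comes from the comment above, while the function and parameter names are illustrative rather than the editor's actual API:

```rust
const DRIFT_THRESHOLD_SECS: f64 = 0.15;

// Returns the position the audio playhead should jump to, or None when the
// audio is already close enough to the video playhead to keep playing.
fn resync_target(audio_secs: f64, video_secs: f64, latency_secs: f64) -> Option<f64> {
    if (audio_secs - video_secs).abs() > DRIFT_THRESHOLD_SECS {
        Some(video_secs + latency_secs)
    } else {
        None
    }
}
```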
8-8: LGTM: Public API expanded to include KeyframeIndex.The addition of
KeyframeIndexto the public re-exports aligns with the new keyframe indexing functionality introduced inavassetreader.rs. This enables downstream consumers to leverage keyframe-aware seeking capabilities.crates/video-decode/src/avassetreader.rs (4)
72-97: LGTM: Keyframe lookup and accessor methods are well-implemented.The
nearest_keyframe_beforemethod correctly uses binary search to find the closest keyframe before a target frame. The edge cases (empty list, exact match, no earlier keyframe) are properly handled. The accessor methods are straightforward and efficient.
112-122: LGTM: Graceful degradation for keyframe index failures.The constructor appropriately handles keyframe index build failures by logging a warning and continuing without the index. This ensures the decoder remains functional even if keyframe indexing fails, which is good defensive programming.
161-201: LGTM: Keyframe-based seeking improves performance.The
reset()method now leverages the keyframe index to seek to the nearest keyframe before the requested time (line 166-179), then passes thisseek_timetoget_reader_track_output(line 183). This is a standard video decoder optimization—seeking to keyframes is more efficient than seeking to arbitrary frames, as it avoids partial decoding of inter-frame dependencies.The added instrumentation (lines 191-198) provides valuable debugging information for seek operations.
203-205: LGTM: Clean accessor for keyframe index.Simple, idiomatic accessor method that provides read-only access to the optional keyframe index.
crates/rendering/src/decoder/avassetreader.rs (6)
33-91: LGTM: Clean consolidation of frame data.The refactoring consolidates frame metadata into a single
FrameDatastruct, improving code organization. Theto_decoded_frameimplementation correctly:
- Uses
Arc::clonefor efficient data sharing- Selects the appropriate
DecodedFrameconstructor based on pixel format- Leverages zero-copy NV12 path when an image buffer is available (lines 64-71)
232-292: LGTM: Efficient pixel format handling with appropriate fallbacks.The implementation efficiently handles different pixel formats:
- NV12: Retains the image buffer for zero-copy access (lines 236-245)
- RGBA/BGRA/YUV420P: Extracts raw data into owned buffers (lines 246-260)
- Unsupported formats: Provides a black frame fallback (lines 262-276)
The early returns prevent unnecessary processing and the logic is clear.
369-456: LGTM: Well-designed request batching with intelligent caching.The refactored batching logic efficiently handles multiple frame requests:
- Batch collection (lines 370-396): Drains the channel to collect all pending requests
- Cache serving (lines 398-413): Immediately serves frames already in cache
- Smart reset logic (lines 419-441): Determines whether to reset based on seek distance and cache coverage
- Cache retention (line 454): Retains relevant cached frames within the new bounds after reset
The use of
saturating_sub(line 426) correctly prevents underflow. The complexity is justified by the performance benefits of batched processing.
459-536: LGTM: Robust frame iteration with cache management.The frame iteration loop correctly:
- Captures first frame (lines 477-479): Stores the first decoded frame for use as a fallback
- Manages cache size (lines 487-502): Evicts frames based on playback direction (forward/backward)
- Fulfills requests (lines 506-526): Serves matching and past frames from cache as they're decoded
- Respects cache bounds (line 533): Exits when exceeding the cache window or all requests are fulfilled
The logic handles forward and backward playback appropriately by evicting from the opposite end of the cache.
540-581: LGTM: Comprehensive fallback strategy for unfulfilled requests.The fallback hierarchy for unfulfilled requests is well-designed:
- Cache lookup (lines 551-556): First attempts to serve from cache
- Last sent frame (lines 556-564): Falls back to the most recently sent frame
- First ever frame (lines 565-573): Uses the initial frame as a last resort
- Drop request (lines 575-579): Logs when no frame is available
This ensures maximum resilience when the decoder cannot fulfill a request at the exact frame. The debug logging at each stage (lines 407, 512, 519, 554, 563, 572, 578) aids in diagnosing decoder behavior.
23-31: No issues found. The unsafeSendandSyncimplementations forSendableImageBuf(lines 24-25) are correct. The wrapper properly delegates to the underlyingR<cv::ImageBuf>which is reference-counted and thread-safe for read-only access. Theretained()method correctly increments the reference count, and all access to the image buffer after wrapping inArcis immutable. No thread safety violations exist in this implementation.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/video-decode/src/avassetreader.rs (1)
213-226: Potential panic from `unwrap()` calls on fallible operations.
Lines 214 and 220 use `unwrap()` which will panic if the file cannot be opened or if no video stream is found. This is inconsistent with the rest of the constructor which uses proper error handling with `?`.
🔎 Proposed fix
```diff
 let (pixel_format, width, height) = {
-    let input = ffmpeg::format::input(&path).unwrap();
+    let input = ffmpeg::format::input(&path)
+        .map_err(|e| format!("Failed to open video: {e}"))?;
     let input_stream = input
         .streams()
         .best(ffmpeg::media::Type::Video)
-        .ok_or("Could not find a video stream")
-        .unwrap();
+        .ok_or("Could not find a video stream")?;
```
♻️ Duplicate comments (1)
crates/rendering/src/decoder/avassetreader.rs (1)
229-229: Remove underscore prefix from_processorparameter.This was flagged in a previous review. The parameter is used on line 249 but has an underscore prefix suggesting it's unused.
🧹 Nitpick comments (4)
crates/video-decode/src/avassetreader.rs (2)
53-54: Redundant file open - the input context is opened twice.The file is opened at line 23 and then reopened at line 53. The first input context could be reused for packet iteration, avoiding the overhead of reopening the file.
🔎 Proposed fix
```diff
-    let input = avformat::input(path)
-        .map_err(|e| format!("Failed to open video for keyframe scan: {e}"))?;
+    let mut input = avformat::input(path)
+        .map_err(|e| format!("Failed to open video for keyframe scan: {e}"))?;

     let video_stream = input
         .streams()
@@ -50,9 +50,6 @@
     let mut keyframes = Vec::new();

-    let mut input =
-        avformat::input(path).map_err(|e| format!("Failed to reopen video for scan: {e}"))?;
-
     for (stream, packet) in input.packets() {
```
301-308: Consider usingdebug!instead ofinfo!for reset logging.The
info!level logging on every reset operation may be too verbose for production use, especially during scrubbing where resets can happen frequently. Consider usingdebug!level.🔎 Proposed fix
```diff
     let elapsed = reset_start.elapsed();
-    tracing::info!(
+    tracing::debug!(
         requested_time = requested_time,
         seek_time = seek_time,
         keyframe_frame = ?keyframe_frame,
         reset_ms = elapsed.as_millis(),
         has_keyframe_index = self.keyframe_index.is_some(),
         "AVAssetReader reset completed"
     );
```
1-1: Unused import:Arcis imported but not used in this module.The
Arctype is imported butMultiPositionDecoderConfig.keyframe_indexusesOption<Arc<KeyframeIndex>>, which is passed in from outside. However,Arcitself is not constructed in this module.#!/bin/bash # Verify if Arc is actually used in this file rg -n '\bArc\b' crates/rendering/src/decoder/multi_position.rscrates/rendering/src/decoder/avassetreader.rs (1)
656-658: Consider making the scrub frame limit configurable or documented.The hardcoded limit of 3 frames during scrubbing (
frames_iterated > 3) affects responsiveness. While reasonable, consider making this a constant for clarity.🔎 Proposed fix
At the top of the file, add:
const SCRUB_MAX_FRAMES_PER_REQUEST: u32 = 3;Then update line 656:
- if is_scrubbing && frames_iterated > 3 { + if is_scrubbing && frames_iterated > SCRUB_MAX_FRAMES_PER_REQUEST { break; }
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/multi_position.rs
- crates/video-decode/src/avassetreader.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
- crates/rendering/src/decoder/multi_position.rs
- crates/video-decode/src/avassetreader.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/avassetreader.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
- crates/rendering/src/decoder/multi_position.rs
- crates/video-decode/src/avassetreader.rs
- crates/rendering/src/decoder/mod.rs
- crates/rendering/src/decoder/avassetreader.rs
🧠 Learnings (2)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/video-decode/src/avassetreader.rscrates/rendering/src/decoder/avassetreader.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (15)
crates/rendering/src/decoder/mod.rs (5)
19-20: LGTM! The macOS-specific module exposure for multi-position decoder support is properly gated with `#[cfg(target_os = "macos")]`.
212-225: LGTM! The Arc-based constructors follow the same pattern as existing constructors and provide efficient zero-copy frame creation when the data is already in an `Arc<Vec<u8>>`. Also applies to: 242-261, 284-303
345-362: LGTM! The `new_nv12_zero_copy` constructor correctly creates an empty `Arc<Vec<u8>>` when the image buffer backing is used directly, avoiding unnecessary allocations in the zero-copy path.
552-552: Significant cache size reduction from 750 to 150 frames. This is a 5x reduction in cache size. Verify this aligns with the new multi-decoder pool architecture, where each decoder maintains its own position, potentially reducing the need for a large shared cache.
575-584: LGTM on the timeout addition. The 500ms timeout prevents indefinite blocking on frame requests. The debug logging on timeout provides useful diagnostics without being too noisy at debug level.
crates/video-decode/src/avassetreader.rs (2)
87-103: LGTM! The binary search implementations for `nearest_keyframe_before` and `nearest_keyframe_after` are correct and efficient. Edge cases (empty keyframes, exact match, boundary conditions) are properly handled; a sketch of the lookup pattern follows below. Also applies to: 105-132
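As a rough sketch of that lookup pattern (assumed names, not the crate's code), binary search over a sorted list of keyframe timestamps can be expressed with `partition_point`:

```rust
struct KeyframeIndex {
    // Keyframe timestamps in seconds, sorted ascending.
    keyframe_secs: Vec<f64>,
}

impl KeyframeIndex {
    // Last keyframe at or before `t`, if any.
    fn nearest_keyframe_before(&self, t: f64) -> Option<f64> {
        let idx = self.keyframe_secs.partition_point(|&k| k <= t);
        idx.checked_sub(1).map(|i| self.keyframe_secs[i])
    }

    // First keyframe at or after `t`, if any.
    fn nearest_keyframe_after(&self, t: f64) -> Option<f64> {
        let idx = self.keyframe_secs.partition_point(|&k| k < t);
        self.keyframe_secs.get(idx).copied()
    }
}
```

Both lookups are O(log n) and naturally handle the empty-list and out-of-range cases by returning `None`.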
134-151: LGTM! The `get_strategic_positions` method correctly distributes decoder positions across keyframes, with appropriate handling for edge cases when there are fewer keyframes than requested positions.
crates/rendering/src/decoder/multi_position.rs (3)
85-122: LGTM on decoder selection logic. The two-phase selection (first checking usable decoders within a seek-distance threshold, then falling back to the closest decoder) is a sound approach for minimizing seeks while ensuring responsiveness; a sketch of the idea follows below.
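A simplified sketch of the two-phase pick, with an assumed seek-distance threshold and data layout (the real constants and types may differ):

```rust
// Illustrative only: `positions` holds each pooled decoder's current
// playhead in seconds; the threshold is an assumed tuning constant.
const USABLE_AHEAD_SECS: f32 = 5.0;

fn select_decoder(positions: &[f32], requested_secs: f32) -> usize {
    // Phase 1: prefer a decoder sitting just behind the request.
    let usable = positions
        .iter()
        .enumerate()
        .filter(|(_, &pos)| pos <= requested_secs && requested_secs - pos <= USABLE_AHEAD_SECS)
        .min_by(|a, b| (requested_secs - a.1).partial_cmp(&(requested_secs - b.1)).unwrap());
    if let Some((idx, _)) = usable {
        return idx;
    }
    // Phase 2: fall back to whichever decoder is closest overall.
    positions
        .iter()
        .enumerate()
        .min_by(|a, b| {
            (a.1 - requested_secs)
                .abs()
                .partial_cmp(&(b.1 - requested_secs).abs())
                .unwrap()
        })
        .map(|(idx, _)| idx)
        .unwrap_or(0)
}
```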
175-241: LGTM on ScrubDetector implementation. The exponentially weighted moving average (0.7/0.3 smoothing) provides good responsiveness while filtering noise, and the cooldown mechanism prevents rapid state oscillation; a sketch of the smoothing follows below.
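A minimal sketch of the rate smoothing, assuming the 0.7/0.3 weights mentioned above and an illustrative threshold constant (not the crate's real value):

```rust
use std::time::Instant;

struct ScrubDetector {
    last_request: Option<Instant>,
    smoothed_rate: f32,
    is_scrubbing: bool,
}

impl ScrubDetector {
    // Requests-per-second above which playback is treated as scrubbing.
    const SCRUB_THRESHOLD_RATE: f32 = 10.0;

    fn on_request(&mut self) {
        let now = Instant::now();
        if let Some(prev) = self.last_request {
            // Clamp dt to avoid dividing by zero on back-to-back requests.
            let dt = now.duration_since(prev).as_secs_f32().max(0.001);
            let instant_rate = 1.0 / dt;
            self.smoothed_rate = 0.7 * self.smoothed_rate + 0.3 * instant_rate;
            self.is_scrubbing = self.smoothed_rate > Self::SCRUB_THRESHOLD_RATE;
        }
        self.last_request = Some(now);
    }
}
```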
243-246: LGTM! The `Default` implementation correctly delegates to `Self::new()`.
crates/rendering/src/decoder/avassetreader.rs (5)
24-32: LGTM! The `SendableImageBuf` wrapper with manual `Send`/`Sync` implementations and the `Clone` implementation using `retained()` correctly enables cross-thread usage of `cv::ImageBuf`.
34-40: LGTM! The `FrameData` struct consolidates frame data fields cleanly, supporting both Arc-backed data and an optional image buffer for zero-copy paths.
415-435: LGTM on decoder selection logic. The `select_best_decoder` method correctly bounds the decoder index using `saturating_sub` and updates pool positions after reset, aligning with the coding guidelines.
484-516: LGTM on request batching. The batching of pending requests with `try_recv()` and sorting by frame number is an efficient approach for handling burst requests during scrubbing.
683-714: LGTM on fallback frame handling. The cascading fallback strategy (cache → last sent → first ever → drop) provides good resilience during edge cases while logging appropriately at debug level.
```rust
let keyframe_index = primary_decoder.take_keyframe_index();
let keyframe_index_arc: Option<Arc<cap_video_decode::avassetreader::KeyframeIndex>> = None;
```
keyframe_index_arc is always None: the extracted keyframe index is never used.
The keyframe_index is extracted from primary_decoder on line 348, but keyframe_index_arc is hardcoded to None on line 349. This appears to be incomplete code where the extracted index should be wrapped in an Arc and passed to the config.
🔎 Proposed fix
```diff
 let keyframe_index = primary_decoder.take_keyframe_index();
-let keyframe_index_arc: Option<Arc<cap_video_decode::avassetreader::KeyframeIndex>> = None;
+let keyframe_index_arc = keyframe_index.map(Arc::new);
-let fps = keyframe_index
+let fps = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.fps() as u32)
     .unwrap_or(30);
-let duration_secs = keyframe_index
+let duration_secs = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.duration_secs())
     .unwrap_or(0.0);
```
🤖 Prompt for AI Agents
In crates/rendering/src/decoder/avassetreader.rs around lines 348 to 349,
keyframe_index is taken from primary_decoder but keyframe_index_arc is hardcoded
to None; replace that with constructing an Option<Arc<...>> from the extracted
keyframe_index (e.g., if let Some(kf) = keyframe_index { Some(Arc::new(kf)) }
else { None }) so the value is actually wrapped in Arc and assigned to
keyframe_index_arc, and then pass this keyframe_index_arc into the decoder
config where needed; ensure correct ownership/clone semantics and imports for
Arc are used.
```rust
pub fn get_rebalance_positions(&self) -> Vec<f32> {
    if self.access_history.is_empty() {
        return self.positions.iter().map(|p| p.position_secs).collect();
    }

    let mut hotspots: Vec<(u32, u64)> = self
        .access_history
        .iter()
        .map(|(&frame, &count)| (frame, count))
        .collect();
    hotspots.sort_by(|a, b| b.1.cmp(&a.1));

    let top_hotspots: Vec<f32> = hotspots
        .into_iter()
        .take(MAX_DECODER_POOL_SIZE)
        .map(|(frame, _)| frame as f32 / self.config.fps as f32)
        .collect();

    if top_hotspots.len() < MAX_DECODER_POOL_SIZE {
        let mut result = top_hotspots;
        let remaining = MAX_DECODER_POOL_SIZE - result.len();
        let duration = self.config.duration_secs as f32;
        for i in 0..remaining {
            let frac = (i + 1) as f32 / (remaining + 1) as f32;
            result.push(duration * frac);
        }
        result
    } else {
        top_hotspots
    }
}
```
Potential division by zero if fps is 0.
In get_rebalance_positions at line 149, dividing by self.config.fps yields infinity (or NaN) when fps is 0, producing meaningless positions downstream. While unlikely in practice, defensive handling would be safer.
🔎 Proposed fix
```diff
 let top_hotspots: Vec<f32> = hotspots
     .into_iter()
     .take(MAX_DECODER_POOL_SIZE)
-    .map(|(frame, _)| frame as f32 / self.config.fps as f32)
+    .map(|(frame, _)| {
+        if self.config.fps == 0 {
+            0.0
+        } else {
+            frame as f32 / self.config.fps as f32
+        }
+    })
     .collect();
```
🤖 Prompt for AI Agents
In crates/rendering/src/decoder/multi_position.rs around lines 134 to 164, the
conversion frame as f32 / self.config.fps as f32 can panic if fps == 0; add a
defensive check at the start of get_rebalance_positions: if self.config.fps ==
0, return a sensible fallback (for example, return self.positions.iter().map(|p|
p.position_secs).collect()) or otherwise handle/log the invalid config and avoid
performing the division; ensure any subsequent code uses the validated nonzero
fps value when converting frames to seconds.
Actionable comments posted: 1
♻️ Duplicate comments (3)
crates/rendering/src/decoder/avassetreader.rs (2)
228-293: Remove underscore prefix from `_processor` parameter. The parameter `_processor` at line 229 has an underscore prefix suggesting it's unused, but it's actually used on line 249. This is a known issue from a previous review.
343-413: Critical: Extracted keyframe index is never used. Lines 348-349 extract `keyframe_index` from the primary decoder but then hardcode `keyframe_index_arc` to `None`. This means:
- The keyframe index is built but discarded
- Strategic positioning in `DecoderPoolManager` will fall back to evenly-spaced positions
- Additional decoders created at line 381 receive `None` for their keyframe index
This appears to be incomplete implementation that defeats the purpose of the keyframe indexing system introduced in the first file.
🔎 Proposed fix
```diff
 let keyframe_index = primary_decoder.take_keyframe_index();
-let keyframe_index_arc: Option<Arc<cap_video_decode::avassetreader::KeyframeIndex>> = None;
+let keyframe_index_arc = keyframe_index.as_ref().map(Arc::new);
-let fps = keyframe_index
+let fps = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.fps() as u32)
     .unwrap_or(30);
-let duration_secs = keyframe_index
+let duration_secs = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.duration_secs())
     .unwrap_or(0.0);
```
Then pass the `keyframe_index` (not the Arc) to `DecoderInstance::new` at line 381, where it expects `Option<KeyframeIndex>`.
crates/rendering/src/decoder/multi_position.rs (1)
134-164: Potential division by zero if fps is 0. Line 149 divides by `self.config.fps as f32` without checking whether fps is zero. While unlikely in practice, this yields infinity rather than a usable position.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- crates/export/src/mp4.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/decoder/multi_position.rs
- crates/rendering/src/layers/display.rs
- crates/rendering/src/lib.rs
- crates/video-decode/src/avassetreader.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/rendering/src/lib.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
crates/rendering/src/layers/display.rscrates/export/src/mp4.rscrates/rendering/src/decoder/avassetreader.rscrates/video-decode/src/avassetreader.rscrates/rendering/src/decoder/multi_position.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/rendering/src/layers/display.rscrates/export/src/mp4.rscrates/rendering/src/decoder/avassetreader.rscrates/video-decode/src/avassetreader.rscrates/rendering/src/decoder/multi_position.rs
🧠 Learnings (3)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/export/src/mp4.rscrates/rendering/src/decoder/avassetreader.rscrates/video-decode/src/avassetreader.rs
📚 Learning: 2025-10-17T05:58:22.586Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Applied to files:
crates/export/src/mp4.rs
🧬 Code graph analysis (2)
crates/rendering/src/decoder/avassetreader.rs (4)
crates/rendering/src/decoder/mod.rs (10)
pts_to_frame(547-550)data(456-458)y_stride(534-536)uv_stride(538-540)new(89-91)new(113-120)new(197-210)width(460-462)height(464-466)format(468-470)crates/rendering/src/decoder/ffmpeg.rs (1)
to_decoded_frame(34-52)crates/video-decode/src/avassetreader.rs (9)
new(183-185)width(393-395)height(397-399)pixel_format(315-317)pixel_format_to_pixel(432-439)path(311-313)keyframe_index(323-325)new_with_keyframe_index(207-267)fps(153-155)crates/rendering/src/decoder/multi_position.rs (2)
config(170-172)is_scrubbing(225-227)
crates/rendering/src/decoder/multi_position.rs (1)
crates/rendering/src/decoder/avassetreader.rs (7)
new(100-102)new(229-293)new(307-323)new(344-413)None(472-472)None(473-473)None(474-474)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (20)
crates/rendering/src/layers/display.rs (1)
83-85: LGTM – Frame deduplication threshold improved. The change from `f32::EPSILON` to `0.001` seconds makes the deduplication more practical by skipping frames with timestamps within 1 millisecond of each other. This aligns the display layer with the camera layer's deduplication behavior and supports the audio-video synchronization improvements in this PR.
crates/video-decode/src/avassetreader.rs (4)
13-168: LGTM: KeyframeIndex implementation is solid. The KeyframeIndex implementation provides efficient keyframe lookup with binary search, sensible fallbacks (default fps of 30.0, pts fallback to 0), and strategic position calculation with proper step size handling (`.max(1)` prevents zero-step panics); a sketch of the position spreading follows below.
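A small sketch of that position spreading with the clamped step (illustrative names, not the crate's implementation):

```rust
// Spread `count` starting positions across a keyframe list, stepping by
// keyframes.len() / count and clamping the step to >= 1 so short videos
// with few keyframes still yield valid positions.
fn strategic_positions(keyframe_secs: &[f64], count: usize) -> Vec<f64> {
    if keyframe_secs.is_empty() || count == 0 {
        return vec![0.0];
    }
    let step = (keyframe_secs.len() / count).max(1);
    keyframe_secs
        .iter()
        .step_by(step)
        .take(count)
        .copied()
        .collect()
}
```

When there are fewer keyframes than requested positions, the clamped step simply yields one position per available keyframe.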
183-205: LGTM: Constructor delegation pattern is clean. The `new()` → `new_at_position()` → `new_with_keyframe_index()` delegation chain is well-structured. The warning log when keyframe index building fails provides good observability while allowing graceful degradation.
235-254: LGTM: Keyframe-aware seeking logic is correct. The seek time calculation properly uses the nearest prior keyframe when available, ensuring efficient decoder positioning. Fallback to the requested start_time when no keyframe index exists maintains backward compatibility.
269-301: LGTM: Reset with keyframe-based seeking. The reset method correctly computes seek targets using the keyframe index and updates the current position. The logic mirrors the initialization path and properly handles cases where no keyframe index is available.
crates/rendering/src/decoder/multi_position.rs (4)
10-31: LGTM: DecoderPosition tracking is straightforward. The `DecoderPosition` struct provides clear position tracking with access metrics and timestamp management via the `touch()` method.
66-83: LGTM: Safe initial position calculation. The `calculate_initial_positions` method properly handles edge cases: it uses strategic keyframe positions when available, falls back to evenly-spaced positions, and handles zero or negative duration with a single position at 0.0.
175-232: LGTM: ScrubDetector with proper edge case handling. The scrub detection logic uses exponential smoothing for the request rate, prevents division by zero with `.max(0.001)`, and implements a cooldown period. The thresholds and rate calculation appear reasonable.
85-122: The concern on line 88 is not applicable. The code performs floating-point multiplication (`requested_time * self.config.fps as f32`), not division. Rust panics when an integer is divided by zero, but floating-point multiplication by zero safely returns 0.0. No zero-check is needed. Likely an incorrect or invalid review comment.
crates/rendering/src/decoder/avassetreader.rs (9)
24-32: LGTM: SendableImageBuf wrapper enables cross-thread image buffer sharing. The unsafe `Send` and `Sync` implementations are necessary for Arc-based sharing of cidre ImageBuf references across threads. The `retained()` call in `clone()` properly manages reference counting.
34-40: LGTM: FrameData consolidates per-frame metadata. The FrameData struct cleanly encapsulates frame buffer data with stride information and an optional zero-copy image buffer reference, supporting both copied and zero-copy paths.
51-92: LGTM: ProcessedFrame refactored for Arc-based sharing. The refactored `to_decoded_frame()` method properly handles RGBA, NV12 (with zero-copy and Arc paths), and YUV420P formats. The use of `Arc::clone` avoids unnecessary copies while maintaining proper ownership.
300-334: LGTM: DecoderInstance wrapper provides per-decoder state. The DecoderInstance struct cleanly wraps the inner decoder with state flags (`is_done`, `frames_iter_valid`) and provides a reset method that properly reinitializes the state.
415-429: LGTM: Decoder selection with safe bounds checking. The `select_best_decoder` method properly bounds the decoder index with `saturating_sub`, resets the decoder when needed, and updates the pool manager's position tracking. The min operation at line 419 ensures the index is always valid.
483-529: LGTM: Batch processing with efficient cache lookups. The request batching logic efficiently drains messages from the channel, sorts by frame number, detects scrubbing, and fulfills requests from cache before falling through to decoding. The early exit when all requests are cached is efficient.
536-548: LGTM: Dynamic cache bounds with scrub awareness. The cache bounds adjust based on scrubbing state; a smaller lookahead during scrubbing reduces wasted decoding. The cache is properly pruned on reset to maintain the desired window around the requested frames.
556-644: LGTM: Complex frame processing loop with proper state management. The main decoding loop correctly:
- Iterates frames from the selected decoder
- Updates decoder position tracking
- Manages cache eviction with directional awareness (forward vs backward playback)
- Fulfills pending requests as matching frames arrive
- Implements scrubbing throttling (breaks after 3 frames)
- Updates pool manager position after decoding
The logic for determining which cached frame to remove (lines 589-599) intelligently considers playback direction; a sketch of that eviction choice follows below.
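A rough sketch of direction-aware eviction, using a plain `BTreeMap` as a stand-in for the frame cache (illustrative only, not the crate's code):

```rust
use std::collections::BTreeMap;

// When the cache is full, evict the entry farthest behind the playhead
// while playing forward, or farthest ahead while playing backward, so the
// retained window stays biased toward the direction of playback.
fn frame_to_evict<V>(
    cache: &BTreeMap<u32, V>,
    current_frame: u32,
    playing_forward: bool,
) -> Option<u32> {
    if playing_forward {
        cache
            .keys()
            .copied()
            .filter(|&f| f < current_frame)
            .min()
            .or_else(|| cache.keys().copied().max())
    } else {
        cache
            .keys()
            .copied()
            .filter(|&f| f > current_frame)
            .max()
            .or_else(|| cache.keys().copied().min())
    }
}
```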
650-665: LGTM: Fallback chain for unfulfilled requests. The final fallback logic properly tries: cached frame → last sent frame → first ever frame → log and drop. This ensures a reasonable experience even when exact frames aren't available.
crates/export/src/mp4.rs (2)
128-169: LGTM! Well-designed cursor-based audio generation. The refactored audio frame generation correctly:
- Calculates sample boundaries per frame using `(n * sample_rate) / fps`
- Prevents duplicate audio rendering via the early-return check at line 159
- Sets PTS from the cursor position for proper audio timeline alignment
- Handles fractional samples per frame gracefully
This approach improves audio-video synchronization and aligns with the PR's optimization goals; a worked sketch of the boundary math follows below.
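A worked sketch of the boundary math (helper names are hypothetical): deriving each frame's sample count from the cumulative formula telescopes, so one second of frames always sums to exactly the sample rate, with no rounding drift.

```rust
// Per-frame sample counts from cumulative boundaries (n * sample_rate) / fps.
fn samples_for_frame(n: u64, sample_rate: u64, fps: u64) -> u64 {
    let start = (n * sample_rate) / fps;
    let end = ((n + 1) * sample_rate) / fps;
    end - start
}

fn main() {
    let sample_rate = 48_000u64;
    for fps in [24u64, 30, 60, 90, 120, 144] {
        let total: u64 = (0..fps)
            .map(|n| samples_for_frame(n, sample_rate, fps))
            .sum();
        // One second of frames always yields exactly one second of audio.
        assert_eq!(total, sample_rate);
    }
}
```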
261-285: LGTM! Excellent test coverage for audio sample calculation. The test validates that the cursor-based audio calculation doesn't lose or duplicate samples due to integer division rounding across frame boundaries. Testing multiple FPS values (24, 30, 60, 90, 120, 144) ensures the algorithm is robust across common frame rates.
```diff
-            frame.video,
-            Duration::from_secs_f32(encoded_frames as f32 / fps as f32),
-        )
+        .queue_video_frame(frame.video, Duration::MAX)
```
🧩 Analysis chain
🏁 Script executed:
find . -name "mp4.rs" -path "*/export/*" | head -5Repository: CapSoftware/Cap
Length of output: 85
🏁 Script executed:
cat -n crates/export/src/mp4.rs | sed -n '85,130p'Repository: CapSoftware/Cap
Length of output: 2062
🏁 Script executed:
cat -n crates/export/src/mp4.rs | sed -n '125,175p'Repository: CapSoftware/Cap
Length of output: 2620
🏁 Script executed:
cat -n crates/export/src/mp4.rs | sed -n '175,220p'Repository: CapSoftware/Cap
Length of output: 2294
🏁 Script executed:
find . -name "*.rs" -path "*mp4*" | grep -E "(mp4|encoder)" | head -10Repository: CapSoftware/Cap
Length of output: 157
🏁 Script executed:
cat -n crates/enc-avfoundation/src/mp4.rs | sed -n '1,50p'Repository: CapSoftware/Cap
Length of output: 2201
🏁 Script executed:
grep -n "queue_video_frame" crates/enc-avfoundation/src/mp4.rs | head -5Repository: CapSoftware/Cap
Length of output: 92
🏁 Script executed:
cat -n crates/enc-avfoundation/src/mp4.rs | sed -n '242,290p'Repository: CapSoftware/Cap
Length of output: 1990
🏁 Script executed:
cat -n crates/enc-avfoundation/src/mp4.rs | sed -n '290,330p'Repository: CapSoftware/Cap
Length of output: 1591
🏁 Script executed:
grep -n "Duration::MAX" crates/enc-avfoundation/src/mp4.rsRepository: CapSoftware/Cap
Length of output: 41
🏁 Script executed:
grep -n "queue_video_frame\|Duration::MAX" crates/enc-ffmpeg/src/mux/mp4.rsRepository: CapSoftware/Cap
Length of output: 91
🏁 Script executed:
cat -n crates/enc-ffmpeg/src/mux/mp4.rs | sed -n '87,140p'Repository: CapSoftware/Cap
Length of output: 1956
🏁 Script executed:
grep -rn "Duration::MAX" crates/export/src/mp4.rs crates/enc-avfoundation/src/mp4.rs crates/enc-ffmpeg/src/mux/mp4.rsRepository: CapSoftware/Cap
Length of output: 154
Verify Duration::MAX handling—this parameter is actively used by the encoder and should match frame timing.
The timestamp parameter passed to queue_video_frame is not ignored; it's actively used for session timing initialization, pause/resume offset calculations, and PTS adjustments in lines 258, 263-268, and 270-293 of the MP4Encoder. Passing Duration::MAX while the frame contains correct timing information (frame_number) creates a mismatch that will break synchronization. Either pass the correct frame timestamp or confirm the encoder has been refactored to handle this sentinel value specially.
🤖 Prompt for AI Agents
In crates/export/src/mp4.rs at line ~100, calling
.queue_video_frame(frame.video, Duration::MAX) uses a sentinel that the
MP4Encoder expects to be a real frame timestamp (it's used for session timing
init, pause/resume offsets and PTS adjustments around lines 258, 263–268,
270–293). Replace Duration::MAX with the correct frame timestamp derived from
the frame (e.g., frame.timestamp or compute from frame.frame_number and the
known frame rate) so encoder timing stays consistent, or if the encoder was
intentionally changed to accept a sentinel, update the encoder code and tests to
explicitly handle Duration::MAX throughout the referenced logic paths.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/video-decode/src/avassetreader.rs (1)
211-218: Replace `unwrap()` calls with proper error propagation. Multiple `unwrap()` calls here could cause panics if the file can't be opened or has no video stream. Since this function returns `Result<Self, String>`, these errors should be propagated.
🔎 Proposed fix
```diff
 let (pixel_format, width, height) = {
-    let input = ffmpeg::format::input(&path).unwrap();
+    let input = ffmpeg::format::input(&path)
+        .map_err(|e| format!("Failed to open video: {e}"))?;
     let input_stream = input
         .streams()
         .best(ffmpeg::media::Type::Video)
-        .ok_or("Could not find a video stream")
-        .unwrap();
+        .ok_or("Could not find a video stream")?;
```
♻️ Duplicate comments (7)
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (4)
245-249: Silent fallback to NV12 for unknown pixel formats risks corrupted output.When
video_config.pixel_formatdoesn't match known variants, the code silently defaults toNV12. This could cause frame data misinterpretation and video corruption.🔎 Proposed fix
let pixel_format = match video_config.pixel_format { cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12, cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA, cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422, - _ => ffmpeg::format::Pixel::NV12, + other => { + error!("Unsupported pixel format {:?}", other); + return Err(anyhow!("Unsupported pixel format: {:?}", other)); + } };
287-291: Silently ignoring poisoned mutex may hide encoder panics.The
if let Ok(mut encoder) = encoder_clone.lock()pattern silently drops frames when the mutex is poisoned. If the encoder panics, this becomes a silent failure.Based on learnings, sequential frame processing requires holding the lock, but error visibility should be preserved.
🔎 Proposed fix
- if let Ok(mut encoder) = encoder_clone.lock() { - if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { - warn!("Failed to encode frame: {e}"); - } + match encoder_clone.lock() { + Ok(mut encoder) => { + if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { + warn!("Failed to encode frame: {e}"); + } + } + Err(poisoned) => { + error!("Encoder mutex poisoned: {}", poisoned); + return Err(anyhow!("Encoder mutex poisoned")); + } }
718-722: Silent fallback to NV12 for unknown pixel formats risks corrupted output.Same issue as the main muxer: when
video_config.pixel_formatdoesn't match known variants, the code silently defaults toNV12, risking frame corruption.🔎 Proposed fix
let pixel_format = match video_config.pixel_format { cap_media_info::Pixel::NV12 => ffmpeg::format::Pixel::NV12, cap_media_info::Pixel::BGRA => ffmpeg::format::Pixel::BGRA, cap_media_info::Pixel::UYVY422 => ffmpeg::format::Pixel::UYVY422, - _ => ffmpeg::format::Pixel::NV12, + other => { + error!("Unsupported camera pixel format {:?}", other); + return Err(anyhow!("Unsupported camera pixel format: {:?}", other)); + } };
762-766: Silently ignoring poisoned mutex may hide encoder panics.Same issue as the main muxer: the
if let Ok(mut encoder)pattern silently drops frames when the mutex is poisoned.🔎 Proposed fix
- if let Ok(mut encoder) = encoder_clone.lock() { - if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { - warn!("Failed to encode camera frame: {e}"); - } + match encoder_clone.lock() { + Ok(mut encoder) => { + if let Err(e) = encoder.queue_frame(owned_frame, timestamp) { + warn!("Failed to encode camera frame: {e}"); + } + } + Err(poisoned) => { + error!("Camera encoder mutex poisoned: {}", poisoned); + return Err(anyhow!("Camera encoder mutex poisoned")); + } }crates/editor/examples/decode-benchmark.rs (1)
54-55: Unused fields lead to misleading output. The `cache_hits` and `cache_misses` fields are never populated anywhere in the code, remaining at their default value of 0. The cache statistics section in `print_report()` (lines 156-169) is guarded by `if total > 0`, so it won't print, but having these unused fields in the struct is still misleading. Either populate these fields with actual cache metrics from the decoder or remove them from the struct and report.
crates/rendering/src/decoder/avassetreader.rs (1)
360-361: `keyframe_index_arc` is always `None`; the extracted keyframe index is never used in config. The `keyframe_index` is extracted from `primary_decoder` on line 360, but `keyframe_index_arc` is hardcoded to `None` on line 361. This means the `MultiPositionDecoderConfig` never receives the keyframe index, which defeats the purpose of keyframe-aware positioning for additional decoders.
🔎 Proposed fix
```diff
 let keyframe_index = primary_decoder.take_keyframe_index();
-let keyframe_index_arc: Option<Arc<cap_video_decode::avassetreader::KeyframeIndex>> = None;
+let keyframe_index_arc = keyframe_index.as_ref().map(|kf| Arc::new(kf.clone()));
-let fps = keyframe_index
+let fps = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.fps() as u32)
     .unwrap_or(30);
-let duration_secs = keyframe_index
+let duration_secs = keyframe_index_arc
     .as_ref()
     .map(|kf| kf.duration_secs())
     .unwrap_or(0.0);
```
Note: This requires `KeyframeIndex` to implement `Clone`. If it doesn't, consider restructuring to share the Arc or build the index separately.
KeyframeIndexto implementClone. If it doesn't, consider restructuring to share the Arc or build the index separately.crates/rendering/src/decoder/multi_position.rs (1)
146-150: Potential division by zero if `fps` is 0. If `self.config.fps` is 0 (e.g., from corrupted metadata), the division `frame as f32 / self.config.fps as f32` produces infinity, which could cause issues downstream.
🔎 Proposed fix
```diff
 let top_hotspots: Vec<f32> = hotspots
     .into_iter()
     .take(MAX_DECODER_POOL_SIZE)
-    .map(|(frame, _)| frame as f32 / self.config.fps as f32)
+    .map(|(frame, _)| {
+        if self.config.fps == 0 {
+            0.0
+        } else {
+            frame as f32 / self.config.fps as f32
+        }
+    })
     .collect();
```
🧹 Nitpick comments (9)
crates/rendering/src/yuv_converter.rs (1)
178-201: Consider usingget_or_insert_withfor cleaner bind group creation.The current pattern works correctly, but
get_or_insert_withwould be more idiomatic and avoid the separateis_none()check andunwrap().🔎 Suggested refactor
- if self.nv12_bind_groups[output_index].is_none() { - self.nv12_bind_groups[output_index] = - Some(device.create_bind_group(&wgpu::BindGroupDescriptor { - label: Some("NV12 Converter Bind Group (Cached)"), - layout, - entries: &[ - wgpu::BindGroupEntry { - binding: 0, - resource: wgpu::BindingResource::TextureView(y_view), - }, - wgpu::BindGroupEntry { - binding: 1, - resource: wgpu::BindingResource::TextureView(uv_view), - }, - wgpu::BindGroupEntry { - binding: 2, - resource: wgpu::BindingResource::TextureView(output_view), - }, - ], - })); - } - - self.nv12_bind_groups[output_index].as_ref().unwrap() + self.nv12_bind_groups[output_index].get_or_insert_with(|| { + device.create_bind_group(&wgpu::BindGroupDescriptor { + label: Some("NV12 Converter Bind Group (Cached)"), + layout, + entries: &[ + wgpu::BindGroupEntry { + binding: 0, + resource: wgpu::BindingResource::TextureView(y_view), + }, + wgpu::BindGroupEntry { + binding: 1, + resource: wgpu::BindingResource::TextureView(uv_view), + }, + wgpu::BindGroupEntry { + binding: 2, + resource: wgpu::BindingResource::TextureView(output_view), + }, + ], + }) + })Same pattern applies to
get_or_create_yuv420pat lines 222-249.crates/recording/src/output_pipeline/macos_fragmented_m4s.rs (2)
179-203: Consider using a condvar or channel for encoder completion signaling.The manual polling loop with
sleep(Duration::from_millis(50))works but is less efficient than signaling-based approaches. If the encoder thread finishes immediately, you still wait 50ms before checking. Consider having the encoder thread signal completion via a channel or usingJoinHandle::join()with a timeout wrapper.
242-333: Significant code duplication between main and camera muxer encoder threads.The encoder thread spawning logic in
start_encoderis nearly identical betweenMacOSFragmentedM4SMuxer(lines 242-333) andMacOSFragmentedM4SCameraMuxer(lines 715-808), with only minor differences in logging messages. This duplication makes maintenance harder and increases the risk of inconsistent bug fixes.Consider extracting the encoder thread logic into a shared function that accepts a name prefix and returns the handle:
fn spawn_encoder_thread( video_config: VideoInfo, video_rx: Receiver<Option<(cidre::arc::R<cidre::cm::SampleBuf>, Duration)>>, ready_tx: SyncSender<anyhow::Result<()>>, encoder_clone: Arc<Mutex<SegmentedVideoEncoder>>, thread_name: &str, ) -> anyhow::Result<JoinHandle<anyhow::Result<()>>> { // shared implementation }Then both muxers can call this shared function.
Also applies to: 715-808
crates/enc-ffmpeg/src/video/h264.rs (2)
133-138: Redundant "WARNING:" prefix in log message.The
warn!macro already indicates warning level; the embedded "WARNING:" prefix is redundant and creates noisy output likeWARN ... WARNING: Using SOFTWARE....Suggested fix
} else { warn!( encoder = %codec_name, input_width = input_config.width, input_height = input_config.height, - "WARNING: Using SOFTWARE H264 encoder (high CPU usage expected)" + "Using SOFTWARE H264 encoder (high CPU usage expected)" ); }
231-239: Avoid unnecessary allocations in structured logs.Using
format!()inside the log macro creates intermediateStringallocations. Use separate fields or the%formatting specifier directly.Suggested fix
Ok(context) => { debug!( encoder = %codec.name(), src_format = ?input_config.pixel_format, - src_size = %format!("{}x{}", input_config.width, input_config.height), + src_width = input_config.width, + src_height = input_config.height, dst_format = ?output_format, - dst_size = %format!("{}x{}", output_width, output_height), + dst_width = output_width, + dst_height = output_height, needs_scaling = needs_scaling, "Created SOFTWARE scaler for pixel format conversion (CPU-intensive)" ); Some(context) }crates/rendering/src/decoder/avassetreader.rs (2)
500-505: Single-variant match could be simplified.The match statement has only one variant (
VideoDecoderMessage::GetFrame). This could be simplified to direct destructuring or alet ... = ...pattern ifGetFrameis the only message type.🔎 Proposed simplification
- match r { - VideoDecoderMessage::GetFrame(requested_time, sender) => { - let frame = (requested_time * fps as f32).floor() as u32; - if !sender.is_closed() { - pending_requests.push(PendingRequest { frame, sender }); - } - } - } + let VideoDecoderMessage::GetFrame(requested_time, sender) = r; + let frame = (requested_time * fps as f32).floor() as u32; + if !sender.is_closed() { + pending_requests.push(PendingRequest { frame, sender }); + }This applies to both the initial
match rand thewhile let Ok(msg)loop.
675-679: Emptyifblocks for send errors can be simplified.The pattern
if req.sender.send(...).is_err() {}with empty blocks is unusual. Since the result is intentionally ignored, this is clearer:🔎 Proposed simplification
- } else if let Some(last) = last_sent_frame.borrow().clone() { - if req.sender.send(last.to_decoded_frame()).is_err() {} - } else if let Some(first) = first_ever_frame.borrow().clone() { - if req.sender.send(first.to_decoded_frame()).is_err() {} + } else if let Some(last) = last_sent_frame.borrow().clone() { + let _ = req.sender.send(last.to_decoded_frame()); + } else if let Some(first) = first_ever_frame.borrow().clone() { + let _ = req.sender.send(first.to_decoded_frame());crates/video-decode/src/avassetreader.rs (1)
51-54: Consider reusing the input context instead of reopening the file.The file is opened twice: once for metadata extraction (line 23) and again for packet iteration (line 53-54). This adds I/O overhead. Consider reusing the first input context if the ffmpeg API allows iteration after metadata access.
🔎 Proposed optimization
- let mut input = - avformat::input(path).map_err(|e| format!("Failed to reopen video for scan: {e}"))?; + let mut input = input; // Reuse the existing input contextIf this doesn't work due to ownership/borrowing constraints, consider documenting why the double-open is necessary.
crates/rendering/src/decoder/multi_position.rs (1)
207-207: Unused variable `_was_scrubbing`. The variable `_was_scrubbing` is assigned but never used. While the underscore prefix suppresses the warning, it should either be removed or used for its intended purpose (e.g., logging state transitions).
🔎 Proposed fix
```diff
-    let _was_scrubbing = self.is_scrubbing;
-
     if self.request_rate > Self::SCRUB_THRESHOLD_RATE && frame_delta > 1 {
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- .claude/settings.local.json
- apps/desktop/src-tauri/src/lib.rs
- apps/desktop/src/routes/editor/index.tsx
- crates/editor/examples/decode-benchmark.rs
- crates/enc-ffmpeg/src/remux.rs
- crates/enc-ffmpeg/src/video/h264.rs
- crates/recording/src/output_pipeline/macos_fragmented_m4s.rs
- crates/rendering/src/decoder/avassetreader.rs
- crates/rendering/src/decoder/multi_position.rs
- crates/rendering/src/yuv_converter.rs
- crates/video-decode/src/avassetreader.rs
🧰 Additional context used
📓 Path-based instructions (5)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
crates/rendering/src/yuv_converter.rscrates/editor/examples/decode-benchmark.rscrates/enc-ffmpeg/src/video/h264.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rsapps/desktop/src-tauri/src/lib.rscrates/enc-ffmpeg/src/remux.rscrates/rendering/src/decoder/multi_position.rscrates/video-decode/src/avassetreader.rscrates/rendering/src/decoder/avassetreader.rs
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
crates/rendering/src/yuv_converter.rscrates/editor/examples/decode-benchmark.rsapps/desktop/src/routes/editor/index.tsxcrates/enc-ffmpeg/src/video/h264.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rsapps/desktop/src-tauri/src/lib.rscrates/enc-ffmpeg/src/remux.rscrates/rendering/src/decoder/multi_position.rscrates/video-decode/src/avassetreader.rscrates/rendering/src/decoder/avassetreader.rs
**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Use React Query hooks with Server Actions for mutations and perform precise cache updates using setQueryData/setQueriesData instead of broad invalidations
Files:
apps/desktop/src/routes/editor/index.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Follow camelCase naming for variables and functions; PascalCase for components; hooks must start with 'use' prefix
Use Biome for linting and formatting; match existing formatting conventions in the codebase
Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Use static skeletons for loading states that mirror content; avoid bouncing animations
Memoize expensive work, code-split naturally, and use Next/Image for remote assets
**/*.{ts,tsx,js,jsx}: Use 2-space indent for TypeScript files; format with Biome usingpnpm format
Use Biome for code formatting and linting; runpnpm formatregularly
Use kebab-case for file names (e.g.,user-menu.tsx); use PascalCase for components
Files:
apps/desktop/src/routes/editor/index.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Never add any form of comments to code (single-line //, multi-line /* /, JSDoc /* */, or any other comment syntax); code must be self-explanatory through naming, types, and structure
Directory naming must use lowercase-dashed convention
Files:
apps/desktop/src/routes/editor/index.tsx
🧠 Learnings (5)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
crates/editor/examples/decode-benchmark.rscrates/enc-ffmpeg/src/video/h264.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rsapps/desktop/src-tauri/src/lib.rscrates/enc-ffmpeg/src/remux.rscrates/rendering/src/decoder/avassetreader.rs
📚 Learning: 2025-10-17T05:58:22.586Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
Applied to files:
crates/editor/examples/decode-benchmark.rscrates/recording/src/output_pipeline/macos_fragmented_m4s.rs
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use static skeletons for loading states that mirror content; avoid bouncing animations
Applied to files:
apps/desktop/src/routes/editor/index.tsx
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/*.rs : Never write `let _ = async_fn()` which silently drops futures; await or explicitly handle them (Clippy: `let_underscore_future` = deny)
Applied to files:
crates/recording/src/output_pipeline/macos_fragmented_m4s.rs
🧬 Code graph analysis (6)
crates/rendering/src/yuv_converter.rs (1)
crates/rendering/src/layers/camera.rs (1)
new(22-56)
crates/editor/examples/decode-benchmark.rs (1)
crates/rendering/src/decoder/mod.rs (1)
spawn_decoder(612-774)
crates/enc-ffmpeg/src/video/h264.rs (3)
crates/recording/src/sources/screen_capture/macos.rs (1)
pixel_format(47-49)crates/recording/src/sources/screen_capture/windows.rs (1)
pixel_format(39-41)crates/frame-converter/src/lib.rs (1)
needs_scaling(118-120)
apps/desktop/src-tauri/src/lib.rs (2)
crates/project/src/meta.rs (5)
path(127-129)path(312-314)path(363-365)load_for_project(131-137)studio_meta(177-182)apps/desktop/src-tauri/src/recording.rs (5)
app(1171-1171)app(1387-1387)app(1443-1443)needs_fragment_remux(1995-2008)remux_fragmented_recording(2010-2030)
crates/rendering/src/decoder/multi_position.rs (1)
crates/video-decode/src/avassetreader.rs (2)
path(305-307)new(181-183)
crates/video-decode/src/avassetreader.rs (1)
crates/video-decode/src/ffmpeg.rs (2)
start_time(281-283)reset(259-266)
🪛 GitHub Check: Typecheck
apps/desktop/src/routes/editor/index.tsx
[failure] 8-8:
Cannot find module './EditorSkeleton' or its corresponding type declarations.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Clippy (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (27)
crates/rendering/src/yuv_converter.rs (4)
136-158: Cache design is appropriate for the double-buffering pattern. The `BindGroupCache` correctly tracks bind groups per output slot and invalidates when dimensions change, which aligns well with the double-buffered output texture approach used in this converter; a sketch of the idea follows below.
625-629: Cache invalidation is correctly placed after texture reallocation. The call to `self.bind_group_cache.invalidate()` after reassigning all texture views ensures that cached bind groups referencing the old views will be recreated on the next conversion call.
696-718: Bind group caching implementation is correct. The cache correctly uses allocated dimensions as the key, and the flow ensures views are always valid: `ensure_texture_size()` → potential invalidation → `get_or_create_nv12()` with current views.
855-878: YUV420P caching follows the same correct pattern. The implementation correctly mirrors the NV12 caching approach with the additional U and V plane bindings.
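A minimal sketch of the caching idea, using a generic `SlotCache<T>` in place of the real wgpu bind-group types (all names here are illustrative, not the crate's API):

```rust
// A per-output-slot cache keyed by texture dimensions. `T` stands in for a
// GPU bind group; invalidate() clears every slot when the underlying
// texture views are reallocated.
struct SlotCache<T> {
    dimensions: (u32, u32),
    slots: [Option<T>; 2],
}

impl<T> SlotCache<T> {
    fn new() -> Self {
        Self { dimensions: (0, 0), slots: [None, None] }
    }

    fn invalidate(&mut self) {
        self.slots = [None, None];
    }

    fn get_or_create(
        &mut self,
        slot: usize,
        dimensions: (u32, u32),
        create: impl FnOnce() -> T,
    ) -> &T {
        if self.dimensions != dimensions {
            self.dimensions = dimensions;
            self.invalidate();
        }
        self.slots[slot].get_or_insert_with(create)
    }
}
```

`get_or_insert_with` keeps creation lazy, so a slot is only rebuilt after an explicit `invalidate()` or a dimension change.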
crates/enc-ffmpeg/src/video/h264.rs (1)
389-417: LGTM! The `queue_frame_reusable` method correctly implements frame reuse optimization by lazily initializing the converted frame buffer via `get_or_insert_with` and reusing it across calls. This reduces allocations in segment-based encoding workflows.
crates/enc-ffmpeg/src/remux.rs (5)
106-175: LGTM! The `remux_streams` function correctly handles stream mapping, DTS monotonicity adjustments for concatenated segments, and proper header/trailer writing. The unsafe blocks are necessary for FFmpeg's low-level API. A sketch of the DTS adjustment idea follows below.
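A rough illustration of the DTS-monotonicity idea under a simplified model (a hypothetical `Packet` type rather than FFmpeg's API): each segment's timestamps are shifted by a running offset so the concatenated stream keeps strictly increasing DTS.

```rust
#[derive(Clone, Copy)]
struct Packet {
    dts: i64,
    pts: i64,
}

fn concatenate(segments: &[Vec<Packet>]) -> Vec<Packet> {
    let mut out = Vec::new();
    let mut offset = 0i64;
    for segment in segments {
        let mut last_dts = offset;
        for p in segment {
            // Shift both timestamps by the offset accumulated so far.
            let shifted = Packet { dts: p.dts + offset, pts: p.pts + offset };
            last_dts = last_dts.max(shifted.dts);
            out.push(shifted);
        }
        // The next segment must start after the last DTS seen.
        offset = last_dts + 1;
    }
    out
}
```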
177-188: LGTM! Clean refactor that delegates to the shared `remux_streams` helper, reducing code duplication.
409-453: LGTM with same temp file note. The implementation correctly validates inputs, concatenates init with segments, remuxes to a regular MP4, and cleans up. The same temp file collision consideration from `probe_m4s_can_decode_with_init` applies here for the combined file path.
455-459: LGTM! Simple and focused helper that delegates to `remux_streams`.
375-407: No action needed. The temp file naming is safe and unique across all segments in the directory, as each temp file is derived from its corresponding segment's unique path. No concurrent calls occur due to sequential iteration over entries.
.claude/settings.local.json (1)
54-55: LGTM! The new permissions for running the `decode-benchmark` example binary align with the PR's testing and benchmark infrastructure improvements. The format is consistent with existing entries.
apps/desktop/src-tauri/src/lib.rs (2)
93-93: LGTM! The `watch` import addition supports the new finalization coordination mechanism.
2631-2631: LGTM! Proper registration of `FinalizingRecordings` state during app initialization.
2177-2177: LGTM! The integration of `wait_for_recording_ready` ensures recordings are fully finalized before editor creation, preventing potential corruption or UI blocking issues. This aligns with the PR's goal of asynchronous finalization.
2207-2238: LGTM! The `wait_for_recording_ready` implementation correctly handles the coordination flow:
- Waits for ongoing finalization if detected
- Falls back to crash-recovery remux if needed
- Uses `spawn_blocking` appropriately for potentially blocking I/O operations
The double `??` operator at line 2232 correctly handles both task panics (JoinError) and remux errors, with the "task panicked" message only applying to actual panics.
109-138: The implementation correctly handles concurrent access to finalizing recordings. When `start_finalizing` is called for the same path multiple times, the watch channel's inherent behavior handles this appropriately: old receiver clones detect the channel closure when the previous sender is dropped, while new callers receive a fresh receiver from the new sender. This is safe and expected behavior for watch channels; no additional guards are needed.
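A minimal sketch of that tracker shape, assuming `tokio`'s `watch` channel, a `std::sync::Mutex`-guarded map, and the `tracing` crate for logging; the names and signatures are illustrative rather than the actual `FinalizingRecordings` API.

use std::{
    collections::HashMap,
    path::{Path, PathBuf},
    sync::Mutex,
};
use tokio::sync::watch;

#[derive(Default)]
struct FinalizingTracker {
    inner: Mutex<HashMap<PathBuf, watch::Sender<bool>>>,
}

impl FinalizingTracker {
    fn start_finalizing(&self, path: PathBuf) -> watch::Receiver<bool> {
        let (tx, rx) = watch::channel(false);
        self.inner.lock().unwrap().insert(path, tx);
        rx
    }

    fn is_finalizing(&self, path: &Path) -> Option<watch::Receiver<bool>> {
        self.inner
            .lock()
            .unwrap()
            .get(path)
            .map(watch::Sender::subscribe)
    }

    fn finish_finalizing(&self, path: &Path) {
        if let Some(tx) = self.inner.lock().unwrap().remove(path) {
            if tx.send(true).is_err() {
                // A send error only means every waiter already dropped its receiver.
                tracing::debug!("no receivers were waiting for this recording");
            }
        }
    }
}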
crates/rendering/src/decoder/avassetreader.rs (4)
34-92: LGTM! The `FrameData` struct consolidation and `ProcessedFrame::to_decoded_frame` implementation cleanly handle multiple pixel formats with appropriate Arc-based sharing for zero-copy paths.
228-293: LGTM! The `CachedFrame::new` implementation correctly handles different pixel formats with zero-copy optimization for NV12 and proper data extraction for other formats.
300-346: LGTM! The `DecoderInstance` wrapper properly manages per-decoder state with robust error handling in `reset()` that logs failures and marks the decoder as invalid rather than panicking.
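A small sketch of that log-and-mark-invalid approach, using a hypothetical `Resettable` trait and `std::io::Error` in place of the crate's real decoder and error types.

trait Resettable {
    fn reset(&mut self, requested_time: f64) -> Result<(), std::io::Error>;
}

struct DecoderSlot<D> {
    decoder: D,
    valid: bool,
}

impl<D: Resettable> DecoderSlot<D> {
    fn reset_to(&mut self, requested_time: f64) {
        match self.decoder.reset(requested_time) {
            Ok(()) => self.valid = true,
            Err(err) => {
                // Keep the slot around for bookkeeping, but exclude it from
                // decoding until it is recreated, instead of panicking.
                tracing::warn!(%err, "decoder reset failed; marking instance invalid");
                self.valid = false;
            }
        }
    }
}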
24-32: Add a safety comment documenting why `SendableImageBuf` is thread-safe. The `unsafe impl Send` and `unsafe impl Sync` for `SendableImageBuf` wrapping `R<cv::ImageBuf>` requires justification. Apple's Core Video `ImageBuf` uses atomic reference counting and is thread-safe, making this safe. Add a comment explaining this to future reviewers:
// Safety: cv::ImageBuf from Apple Core Video uses atomic ref counting and is thread-safe across threads; retained() maintains correct reference counts.
⛔ Skipped due to learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1219
File: crates/enc-avfoundation/src/mp4.rs:350-373
Timestamp: 2025-10-17T05:58:22.586Z
Learning: In crates/enc-avfoundation/src/mp4.rs, the `finish()` method intentionally skips video extension when `is_paused` is true. This is correct behavior because if recording is paused, the video should not be extended beyond the pause point—the pause is user-initiated, unlike the case where ScreenCaptureKit stops providing frames during static content.
crates/video-decode/src/avassetreader.rs (3)
87-130: LGTM! The `nearest_keyframe_before` and `nearest_keyframe_after` methods correctly use binary search with proper edge case handling for exact matches and boundary conditions.
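For reference, a sketch of keyframe lookups over a sorted list of timestamps using `partition_point`; the plain `f64` slice is an assumed stand-in for the actual keyframe index.

fn nearest_keyframe_before(keyframes: &[f64], time: f64) -> Option<f64> {
    // partition_point returns the index of the first keyframe strictly after `time`,
    // so the element just before it is the nearest keyframe at or before `time`.
    let idx = keyframes.partition_point(|&k| k <= time);
    if idx == 0 {
        None
    } else {
        Some(keyframes[idx - 1])
    }
}

fn nearest_keyframe_after(keyframes: &[f64], time: f64) -> Option<f64> {
    // Here the partition keeps keyframes strictly before `time`, so the element at
    // the returned index is the nearest keyframe at or after `time`.
    let idx = keyframes.partition_point(|&k| k < time);
    keyframes.get(idx).copied()
}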
267-295: LGTM! The `reset()` method correctly uses keyframe-aware seeking to find the nearest keyframe before the target time, ensuring proper decoder positioning for efficient playback.
321-379: LGTM! The `get_reader_track_output` method correctly sets up the AVAssetReader with the specified time range and configures the track output for efficient zero-copy sample data access.
crates/rendering/src/decoder/multi_position.rs (4)
10-31: LGTM! The `DecoderPosition` struct cleanly tracks per-decoder state with `touch()` providing efficient access tracking.
48-83: LGTM! The initialization logic correctly prioritizes keyframe-based strategic positions when available, with a sensible fallback to evenly distributed positions across the video duration.
85-122: LGTM! The two-pass decoder selection strategy correctly prioritizes decoders that can decode forward to the target time (avoiding reset) before falling back to the closest decoder.
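A compact sketch of a two-pass selection of that kind, with a hypothetical `PositionInfo` and forward-decode window; it is illustrative only and not the crate's actual selection code.

struct PositionInfo {
    current_time: f64,
    max_forward_window: f64,
}

fn pick_decoder(positions: &[PositionInfo], target: f64) -> Option<usize> {
    // First pass: prefer a decoder already behind the target that can simply
    // decode forward, choosing the one with the least distance to cover.
    let forward = positions
        .iter()
        .enumerate()
        .filter(|(_, p)| {
            target >= p.current_time && target - p.current_time <= p.max_forward_window
        })
        .min_by(|(_, a), (_, b)| {
            (target - a.current_time).total_cmp(&(target - b.current_time))
        })
        .map(|(i, _)| i);

    // Second pass: fall back to the closest decoder overall, which will need a reset.
    forward.or_else(|| {
        positions
            .iter()
            .enumerate()
            .min_by(|(_, a), (_, b)| {
                (target - a.current_time)
                    .abs()
                    .total_cmp(&(target - b.current_time).abs())
            })
            .map(|(i, _)| i)
    })
}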
225-238: LGTM! The accessor methods and `Default` implementation are clean and idiomatic.
Actionable comments posted: 1
🧹 Nitpick comments (11)
apps/desktop/src-tauri/src/export.rs (2)
189-192: Simplify the helper function. The temporary variable assignment is unnecessary. Per coding guidelines, prefer direct returns.
🔎 Suggested simplification
 fn bpp_to_jpeg_quality(bpp: f32) -> u8 {
-    let quality = ((bpp - 0.04) / (0.3 - 0.04) * (95.0 - 40.0) + 40.0).clamp(40.0, 95.0) as u8;
-    quality
+    ((bpp - 0.04) / (0.3 - 0.04) * (95.0 - 40.0) + 40.0).clamp(40.0, 95.0) as u8
 }
337-433: Consider extracting shared logic to reduce duplication. Both `generate_export_preview` and `generate_export_preview_fast` share significant identical code:
- RGBA→RGB conversion (lines 293-301 vs 396-404)
- JPEG encoding (lines 303-310 vs 406-413)
- Bitrate/size estimation (lines 314-327 vs 417-425)
While the two functions have different data sources, extracting helpers for these common operations would improve maintainability.
🔎 Example helper extraction
fn encode_frame_to_jpeg(frame: &RenderedFrame, quality: u8) -> Result<Vec<u8>, String> {
    let rgb_data: Vec<u8> = frame
        .data
        .chunks(frame.padded_bytes_per_row as usize)
        .flat_map(|row| {
            row[0..(frame.width * 4) as usize]
                .chunks(4)
                .flat_map(|chunk| [chunk[0], chunk[1], chunk[2]])
        })
        .collect();

    let mut jpeg_buffer = Vec::new();
    let mut encoder = JpegEncoder::new_with_quality(&mut jpeg_buffer, quality);
    encoder
        .encode(&rgb_data, frame.width, frame.height, image::ExtendedColorType::Rgb8)
        .map_err(|e| format!("Failed to encode JPEG: {e}"))?;
    Ok(jpeg_buffer)
}

fn estimate_size_mb(resolution: XY<u32>, fps: u32, bpp: f32, duration: f64) -> f64 {
    let total_pixels = (resolution.x * resolution.y) as f64;
    let video_bitrate = total_pixels * bpp as f64 * fps as f64;
    let total_bitrate = video_bitrate + 192_000.0;
    (total_bitrate * duration) / (8.0 * 1024.0 * 1024.0)
}
apps/desktop/src/routes/editor/EditorSkeleton.tsx (1)
4-6: Verify skeleton constants match Editor.tsx values. The constants
DEFAULT_TIMELINE_HEIGHT,MIN_PLAYER_HEIGHT, andRESIZE_HANDLE_HEIGHTshould match those inEditor.tsxfor consistent visual appearance during loading. The values appear correct based on Editor.tsx (lines 49-53), but this duplication could lead to drift.Consider extracting these to a shared constants file to ensure consistency.
apps/desktop/src/routes/editor/Editor.tsx (1)
431-444: Canvas frame capture for crop preview. The approach of capturing the current frame from the player canvas for the crop preview is clean. The silent `catch` block with fallback to `null` (which then falls back to the screenshot file on line 610-613) provides graceful degradation. Consider using a constant for the canvas ID `"canvas"` to avoid magic strings and ensure consistency with wherever the canvas element is defined.
apps/desktop/src/routes/editor/ExportPage.tsx (7)
61-66: Floating-point keys may cause lookup failures. Using floating-point numbers as object keys is error-prone. When accessed via `BPP_TO_COMPRESSION[v]` where `v` is a computed float from the slider (e.g., `0.15000000000000002`), the lookup will fail due to precision issues. This is currently mitigated by the closest-match logic on lines 823-830, so the code works, but the `BPP_TO_COMPRESSION` object is never actually used for direct lookups. Consider removing this unused mapping or using string keys if direct lookup is needed in the future.
175-177: Remove comment per coding guidelines. As per coding guidelines, comments should not be added to code. The code should be self-explanatory through naming and structure.
🔎 Proposed fix
-    // Ensure GIF is not selected when exportTo is "link"
-    else if (_settings.format === "Gif" && _settings.exportTo === "link")
+    else if (_settings.format === "Gif" && _settings.exportTo === "link")
180-187: Unconventional use of Object.defineProperty in reactive context. Using `Object.defineProperty` to define a getter inside `mergeProps` is unusual and may cause confusion. A cleaner approach would be to compute `organizationId` directly in the returned object.
🔎 Proposed fix
-  Object.defineProperty(ret, "organizationId", {
-    get() {
-      if (!_settings.organizationId && organisations().length > 0)
-        return organisations()[0].id;
-
-      return _settings.organizationId;
-    },
-  });
-
-  return ret;
+  ret.organizationId =
+    !_settings.organizationId && organisations().length > 0
+      ? organisations()[0].id
+      : _settings.organizationId;
+  return ret;
466-468: Remove comments and clarify createSignInMutation usage. Line 466 has a comment that should be removed per coding guidelines. Additionally, `createSignInMutation()` is called without using its return value. If this is intentional (triggering the sign-in flow as a side effect), consider renaming or restructuring for clarity.
🔎 Proposed fix
-  // Check authentication first
   const existingAuth = await authStore.get();
   if (!existingAuth) createSignInMutation();
485-487: Remove comment per coding guidelines. The comment on line 485 should be removed. The delay pattern before throwing is also unusual; consider whether the error should be thrown immediately, with the window show happening asynchronously.
🔎 Proposed fix
   if (!canShare.allowed) {
     if (canShare.reason === "upgrade_required") {
       await commands.showWindow("Upgrade");
-      // The window takes a little to show and this prevents the user seeing it glitch
       await new Promise((resolve) => setTimeout(resolve, 1000));
       throw new SilentError();
     }
491-513: Remove debug console.log and comments. Lines 492 and 511 contain `console.log` statements that appear to be debug artifacts. Lines 511 and 513 also contain comments that should be removed per coding guidelines.
🔎 Proposed fix
   const uploadChannel = new Channel<UploadProgress>((progress) => {
-    console.log("Upload progress:", progress);
     setExportState(
       produce((state) => {
         if (state.type !== "uploading") return;
         state.progress = Math.round(progress.progress * 100);
       }),
     );
   });

   await exportWithSettings((progress) => {
     if (isCancelled()) throw new SilentError("Cancelled");
     setExportState({ type: "rendering", progress });
   });

   if (isCancelled()) throw new SilentError("Cancelled");

   setExportState({ type: "uploading", progress: 0 });

-  console.log({ organizationId: settings.organizationId });
-
-  // Now proceed with upload
   const result = meta().sharing
795-796: Stub history object appears intentional but could be clearer. The `history` prop `{ pause: () => () => {} }` is a no-op stub. If the Slider component requires this prop but history tracking isn't needed here, this is acceptable. Consider adding proper types or using a shared stub constant if this pattern is repeated.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
apps/desktop/src-tauri/src/export.rs
apps/desktop/src-tauri/src/lib.rs
apps/desktop/src/routes/editor/Editor.tsx
apps/desktop/src/routes/editor/EditorSkeleton.tsx
apps/desktop/src/routes/editor/ExportDialog.tsx
apps/desktop/src/routes/editor/ExportPage.tsx
apps/desktop/src/utils/tauri.ts
packages/ui-solid/src/auto-imports.d.ts
💤 Files with no reviewable changes (1)
- apps/desktop/src/routes/editor/ExportDialog.tsx
🧰 Additional context used
📓 Path-based instructions (7)
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use TanStack Query v5 for all client-side server state and data fetching in TypeScript files
Files:
packages/ui-solid/src/auto-imports.d.ts
apps/desktop/src/utils/tauri.ts
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx,js,jsx}: Use strict TypeScript; avoid any type; leverage shared types from @cap/* packages
Follow camelCase naming for variables and functions; PascalCase for components; hooks must start with 'use' prefix
Use Biome for linting and formatting; match existing formatting conventions in the codebase
Use Tailwind CSS for styling in web components; stay consistent with spacing and tokens
Use static skeletons for loading states that mirror content; avoid bouncing animations
Memoize expensive work, code-split naturally, and use Next/Image for remote assets
**/*.{ts,tsx,js,jsx}: Use 2-space indent for TypeScript files; format with Biome usingpnpm format
Use Biome for code formatting and linting; runpnpm formatregularly
Use kebab-case for file names (e.g.,user-menu.tsx); use PascalCase for components
Files:
packages/ui-solid/src/auto-imports.d.ts
apps/desktop/src/routes/editor/EditorSkeleton.tsx
apps/desktop/src/utils/tauri.ts
apps/desktop/src/routes/editor/Editor.tsx
apps/desktop/src/routes/editor/ExportPage.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Never add any form of comments to code (single-line //, multi-line /* /, JSDoc /* */, or any other comment syntax); code must be self-explanatory through naming, types, and structure
Directory naming must use lowercase-dashed convention
Files:
packages/ui-solid/src/auto-imports.d.ts
apps/desktop/src/routes/editor/EditorSkeleton.tsx
apps/desktop/src/utils/tauri.ts
apps/desktop/src/routes/editor/Editor.tsx
apps/desktop/src/routes/editor/ExportPage.tsx
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (AGENTS.md)
Never add comments to code (
//,/* */,///,//!,#, etc.); code must be self-explanatory through naming, types, and structure
Files:
packages/ui-solid/src/auto-imports.d.ts
apps/desktop/src/routes/editor/EditorSkeleton.tsx
apps/desktop/src-tauri/src/lib.rs
apps/desktop/src/utils/tauri.ts
apps/desktop/src-tauri/src/export.rs
apps/desktop/src/routes/editor/Editor.tsx
apps/desktop/src/routes/editor/ExportPage.tsx
**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Use React Query hooks with Server Actions for mutations and perform precise cache updates using setQueryData/setQueriesData instead of broad invalidations
Files:
apps/desktop/src/routes/editor/EditorSkeleton.tsx
apps/desktop/src/routes/editor/Editor.tsx
apps/desktop/src/routes/editor/ExportPage.tsx
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Never use dbg!() macro; use proper logging (tracing::debug!, etc.) instead
Never write let _ = async_fn() which silently drops futures; await or explicitly handle them
Use duration.saturating_sub(other) instead of duration - other to avoid panics on underflow
Merge nested if statements: write 'if a && b { }' instead of 'if a { if b { } }'
Don't call .clone() on Copy types (integers, bools, etc.); copy them directly
Use function references directly: iter.map(foo) instead of iter.map(|x| foo(x))
Accept &[T] or &str instead of &Vec or &String in function parameters for flexibility
Use .is_empty() instead of .len() == 0 or .len() > 0 / .len() != 0
Don't assign () to a variable: write foo(); instead of let _ = foo(); or let x = foo(); when return is unit
Use .unwrap_or(val) instead of .unwrap_or_else(|| val) when the default is a simple/cheap value
Use 'for item in &collection' or 'for (i, item) in collection.iter().enumerate()' instead of 'for i in 0..collection.len()'
Use value.clamp(min, max) instead of manual if chains or .min(max).max(min) patterns
Always handle Result/Option or types marked #[must_use]; never ignore them
**/*.rs: Userustfmtand workspace clippy lints for Rust code formatting and linting
Use snake_case for Rust module names and kebab-case for crate names
Never usedbg!()macro in Rust code; use proper logging instead (Clippy:dbg_macro= deny)
Always handleResult/Optionor types marked#[must_use]; never ignore them (Rust compiler lint:unused_must_use= deny)
Never writelet _ = async_fn()which silently drops futures; await or explicitly handle them (Clippy:let_underscore_future= deny)
Usesaturating_subinstead of-forDurationto avoid panics (Clippy:unchecked_duration_subtraction= deny)
Merge nestedifstatements: useif a && b { }instead ofif a { if b { } }(Clippy:collapsible_if= deny)
Don't call.clone()onCopytypes; just copy them directly (Clippy:clone_on_copy= deny)
U...
Files:
apps/desktop/src-tauri/src/lib.rs
apps/desktop/src-tauri/src/export.rs
apps/desktop/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/desktop/**/*.ts: Use @tanstack/solid-query for server state management in SolidJS components
Use generated commands and events from tauri_specta for IPC; never manually construct IPC calls
Listen directly to generated events from tauri_specta and use typed event interfaces
Files:
apps/desktop/src/utils/tauri.ts
🧠 Learnings (7)
📓 Common learnings
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use strict TypeScript; avoid any type; leverage shared types from cap/* packages
Applied to files:
packages/ui-solid/src/auto-imports.d.ts
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use static skeletons for loading states that mirror content; avoid bouncing animations
Applied to files:
apps/desktop/src/routes/editor/EditorSkeleton.tsx
📚 Learning: 2025-10-28T08:39:42.230Z
Learnt from: Brendonovich
Repo: CapSoftware/Cap PR: 1305
File: crates/recording/src/output_pipeline/macos.rs:80-90
Timestamp: 2025-10-28T08:39:42.230Z
Learning: In `crates/recording/src/output_pipeline/macos.rs`, the `AVFoundationMp4Muxer` intentionally holds the `Mutex<MP4Encoder>` lock during retry attempts in `send_video_frame()` and `send_audio_frame()`. This blocking behavior is correct because frame processing must happen sequentially to prevent audio and video frames from being interleaved incorrectly in the encoder.
Applied to files:
apps/desktop/src-tauri/src/lib.rs
📚 Learning: 2025-12-07T14:29:19.180Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-07T14:29:19.180Z
Learning: Applies to apps/desktop/**/*.ts : Use generated commands and events from tauri_specta for IPC; never manually construct IPC calls
Applied to files:
apps/desktop/src/utils/tauri.ts
📚 Learning: 2025-12-07T14:29:40.743Z
Learnt from: CR
Repo: CapSoftware/Cap PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-07T14:29:40.743Z
Learning: Applies to **/tauri.ts|**/queries.ts|apps/desktop/src-tauri/gen/** : Never edit auto-generated files: `**/tauri.ts`, `**/queries.ts`, `apps/desktop/src-tauri/gen/**`
Applied to files:
apps/desktop/src/utils/tauri.ts
📚 Learning: 2025-11-19T18:25:41.457Z
Learnt from: ItsEeleeya
Repo: CapSoftware/Cap PR: 1396
File: apps/desktop/src/routes/(window-chrome)/settings/general.tsx:183-192
Timestamp: 2025-11-19T18:25:41.457Z
Learning: In SolidJS, when using the Show component with the `keyed` attribute and a dynamic reactive value (like a memo result), the component will recreate its children when the condition value changes by reference. This pattern is useful for triggering CSS animations (like fade-in) when the rendered content needs to update. For example, in apps/desktop/src/routes/(window-chrome)/settings/general.tsx, `<Show when={previews()[theme.id]} keyed>` ensures the image re-animates when switching between different preview sources.
Applied to files:
apps/desktop/src/routes/editor/ExportPage.tsx
🧬 Code graph analysis (2)
apps/desktop/src/routes/editor/Editor.tsx (2)
apps/desktop/src/routes/editor/ExportPage.tsx (1)
ExportPage (125-1357)
apps/desktop/src/routes/editor/Header.tsx (1)
Header(44-186)
apps/desktop/src/routes/editor/ExportPage.tsx (4)
apps/desktop/src/utils/tauri.ts (4)
ExportCompression (415-415), ExportSettings (419-419), commands (7-310), FramesRendered (422-422)
apps/desktop/src/utils/export.ts (1)
createExportTask (4-27)
apps/desktop/src/routes/editor/Header.tsx (1)
RESOLUTION_OPTIONS (32-36)
apps/desktop/src/routes/editor/context.ts (1)
RenderState(90-92)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (rust)
🔇 Additional comments (16)
packages/ui-solid/src/auto-imports.d.ts (1)
9-9: LGTM! Auto-generated icon type declarations. The three new global icon type declarations are correctly generated and follow the established pattern in the file. These additions appear to be the result of regenerating the auto-imports configuration after new icons were added to the project.
Also applies to: 71-71, 90-90
apps/desktop/src-tauri/src/lib.rs (3)
109-138: Well-structured thread-safe finalization tracker. The `FinalizingRecordings` implementation correctly uses `std::sync::Mutex` for short-lived synchronous access and `tokio::sync::watch` channels for async notification. The API is clean and follows the expected patterns. One minor observation: `finish_finalizing` silently handles the case where the path doesn't exist in the map (the `if let Some(...)` pattern). This is likely intentional for idempotency, but consider whether a warning log would be helpful for debugging unexpected states.
3209-3240: Solid async wait-for-ready implementation with proper error handling. The function correctly (a sketch of this flow follows the list):
- Checks for active finalization and waits using `watch::Receiver::wait_for`
- Falls back to checking if crash recovery remux is needed
- Uses `spawn_blocking` appropriately for the CPU-bound remux operation
- Properly propagates both task panics and remux errors with the double `?` operator
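A condensed sketch of that flow, assuming the `watch` receiver handed out by the finalization tracker and stubbing the remux helpers mentioned in the review; names and signatures are illustrative, not the actual implementation.

use std::path::{Path, PathBuf};
use tokio::sync::watch;

async fn wait_for_recording_ready(
    finalizing: Option<watch::Receiver<bool>>,
    path: &Path,
) -> Result<(), String> {
    if let Some(mut rx) = finalizing {
        if rx.wait_for(|done| *done).await.is_err() {
            // The finalizer dropped its sender without signalling; treat the recording as ready.
            tracing::debug!("finalizer dropped before signalling");
        }
        return Ok(());
    }

    if needs_fragment_remux(path) {
        let owned: PathBuf = path.to_path_buf();
        // The double `?` surfaces both a panicked blocking task and a remux failure.
        tokio::task::spawn_blocking(move || remux_fragmented_recording(&owned))
            .await
            .map_err(|e| format!("remux task panicked: {e}"))??;
    }

    Ok(())
}

// Placeholder stubs standing in for the crash-recovery helpers discussed above.
fn needs_fragment_remux(_path: &Path) -> bool {
    false
}

fn remux_fragmented_recording(_path: &Path) -> Result<(), String> {
    Ok(())
}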
3172-3207: Correct integration of wait-for-ready in editor creation flow. The `wait_for_recording_ready` call is properly placed before `EditorInstance::new`, ensuring the recording is finalized and ready before the editor attempts to load it. This prevents race conditions between background finalization and editor initialization.
apps/desktop/src-tauri/src/export.rs (2)
293-310: RGBA to RGB conversion and JPEG encoding look correct. The conversion from RGBA (4 bytes per pixel) to RGB (3 bytes per pixel) properly handles the padded row stride and extracts only the RGB channels. The JPEG encoding uses the quality derived from the compression settings.
417-425: Duration source differs between preview functions. `generate_export_preview_fast` uses `editor.recordings.duration()` directly (line 420), while `generate_export_preview` fetches metadata and checks for timeline configuration. This is appropriate since the fast path leverages in-memory data, but note that the size estimates may differ slightly if timeline segments modify the effective duration.
apps/desktop/src/utils/tauri.ts (1)
1-3: Auto-generated file - no manual changes needed. This file is generated by `tauri-specta` as indicated in the header comment. The new `generateExportPreview`, `generateExportPreviewFast` commands and associated types (`ExportPreviewResult`, `ExportPreviewSettings`) are correctly generated from the Rust backend definitions. Based on learnings, this file should not be manually edited.
apps/desktop/src/routes/editor/EditorSkeleton.tsx (2)
205-232: Well-structured skeleton matching editor layout. The `EditorSkeleton` component properly mirrors the editor's layout structure with `HeaderSkeleton`, `PlayerSkeleton`, `SidebarSkeleton`, and `TimelineSkeleton`. The use of static skeletons follows the coding guidelines for loading states.
72-84: IconCapLogo requires an explicit import. The `VideoPreviewSkeleton` component uses `IconCapLogo` at line 78 but does not import it. While other files in the codebase also use this icon without explicit imports, this suggests a missing import statement. Add the import:
import IconCapLogo from "~icons/cap/logo";
apps/desktop/src/routes/editor/Editor.tsx (3)
202-210: Clean export mode gating implementation. The `isExportMode()` function correctly checks the dialog state to determine if the export flow should be rendered. Using a `Show` component with the `ExportPage` as fallback is an elegant way to swap between editor and export views without mounting both simultaneously.
606-615: Proper fallback chain for crop preview image. The image source correctly prioritizes the dynamically captured `frameDataUrl()` and falls back to the static screenshot file using `convertFileSrc`. This ensures the crop dialog always has a valid image to display.
285-289: Dialog filter correctly excludes export type. The condition `d.type !== "export"` ensures the export flow is handled by the `ExportPage` component (via the `Show` fallback) rather than through the `Dialog` component. This aligns with the architectural change from modal export to full-page export.
apps/desktop/src/routes/editor/ExportPage.tsx (4)
1-48: LGTM! Imports are well-organized and the `SilentError` pattern for handling user cancellations is appropriate.
361-396: LGTM! The copy mutation properly handles cancellation, distinguishes between user-initiated cancellations and actual errors, and provides appropriate user feedback.
398-458: LGTM! The save mutation follows consistent patterns with proper dialog handling and file operations.
1359-1392: LGTM! The helper components `RenderProgress` and `ProgressView` are clean, focused, and correctly implement the progress visualization.
<a
  href={meta().sharing?.link}
  target="_blank"
  rel="noreferrer"
  class="block"
>
  <Button
    onClick={() => {
      setCopyPressed(true);
      setTimeout(() => {
        setCopyPressed(false);
      }, 2000);
      navigator.clipboard.writeText(
        meta().sharing?.link!,
      );
    }}
    variant="dark"
    class="flex gap-2 justify-center items-center"
  >
    {!copyPressed() ? (
      <IconCapCopy class="transition-colors duration-200 text-gray-1 size-4 group-hover:text-gray-12" />
    ) : (
      <IconLucideCheck class="transition-colors duration-200 text-gray-1 size-4 svgpathanimation group-hover:text-gray-12" />
    )}
    <p>Open Link</p>
  </Button>
</a>
Nested interactive elements cause conflicting behavior.
The <Button> is nested inside an <a> tag. When clicked, both the button's onClick (copying to clipboard) and the anchor's navigation (opening link) will trigger simultaneously. This may cause unexpected behavior where the link opens while the clipboard is also modified.
Consider separating these into distinct buttons or preventing event propagation.
🔎 Proposed fix - separate the actions
- <div class="relative">
- <a
- href={meta().sharing?.link}
- target="_blank"
- rel="noreferrer"
- class="block"
- >
- <Button
- onClick={() => {
- setCopyPressed(true);
- setTimeout(() => {
- setCopyPressed(false);
- }, 2000);
- navigator.clipboard.writeText(
- meta().sharing?.link!,
- );
- }}
- variant="dark"
- class="flex gap-2 justify-center items-center"
- >
- {!copyPressed() ? (
- <IconCapCopy class="transition-colors duration-200 text-gray-1 size-4 group-hover:text-gray-12" />
- ) : (
- <IconLucideCheck class="transition-colors duration-200 text-gray-1 size-4 svgpathanimation group-hover:text-gray-12" />
- )}
- <p>Open Link</p>
- </Button>
- </a>
- </div>
+ <div class="flex gap-2">
+ <Button
+ onClick={() => {
+ setCopyPressed(true);
+ setTimeout(() => {
+ setCopyPressed(false);
+ }, 2000);
+ navigator.clipboard.writeText(
+ meta().sharing?.link!,
+ );
+ }}
+ variant="dark"
+ class="flex gap-2 justify-center items-center"
+ >
+ {!copyPressed() ? (
+ <IconCapCopy class="transition-colors duration-200 text-gray-1 size-4 group-hover:text-gray-12" />
+ ) : (
+ <IconLucideCheck class="transition-colors duration-200 text-gray-1 size-4 svgpathanimation group-hover:text-gray-12" />
+ )}
+ <p>Copy Link</p>
+ </Button>
+ <Button
+ as="a"
+ href={meta().sharing?.link}
+ target="_blank"
+ rel="noreferrer"
+ variant="dark"
+ class="flex gap-2 justify-center items-center"
+ >
+ <IconLucideExternalLink class="size-4" />
+ <p>Open Link</p>
+ </Button>
+ </div>
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In apps/desktop/src/routes/editor/ExportPage.tsx around lines 1271-1297, the
Button is nested inside an anchor causing both the button's onClick and the
anchor navigation to fire; split the actions into two separate interactive
elements: render one anchor-wrapped Button (or plain anchor styled like a
button) that only opens the link (no onClick), and render a second standalone
Button for copying that handles setCopyPressed and
navigator.clipboard.writeText. For the copy Button, ensure the click handler
calls event.stopPropagation() and event.preventDefault() if necessary, and add
appropriate aria-labels/roles so each control is distinct and accessible.
Summary by CodeRabbit
New Features
Performance
Bug Fixes