deps: bump flake inputs and cargo deps (#588)
Conversation
Walkthrough: Bumped the workspace `subprocess` and `roff` versions and migrated process handling to the subprocess 1.0 Job API.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
        participant Local
    end
    rect rgba(200,255,200,0.5)
        participant SSH
    end
    rect rgba(255,200,200,0.5)
        participant RemoteJob as Job
    end
    Local->>SSH: start() (returns Job)
    SSH->>RemoteJob: spawn remote command
    Local->>RemoteJob: write stdin bytes (password)
    RemoteJob-->>Local: stdout/stderr streams available
    Local->>RemoteJob: wait() / wait_timeout()
    alt timeout
        Local->>RemoteJob: kill()
        RemoteJob-->>Local: killed, collect exit status
    else success/fail
        RemoteJob-->>Local: exit status
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/nh-remote/src/remote.rs`:
- Around lines 1660-1665: The polling loop that starts the SSH command via `ssh_cmd.start()` and repeatedly calls `job.wait_timeout(...)` is not draining the stdout/stderr pipes and can deadlock when the remote process writes enough output. Update the wait loop in the `exit_status` logic to either: (1) replace the manual `wait_timeout` loop with `Job::join_timeout(...)`, which drains stdout/stderr automatically, or (2) while polling with `job.wait_timeout(...)`, spawn non-blocking readers (or use asynchronous readers) to continuously drain `job.stdout` and `job.stderr` into buffers/streams until the job completes, ensuring the pipes are read on each iteration. Modify the code around `ssh_cmd.start()`, the loop using `job.wait_timeout`, and the exit/completion handling so output is drained before and during waits (alternatively, use `capture()` if interrupt polling is not required).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bcbb1a41-3877-4a22-9802-ed742c33a68b
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `flake.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- `Cargo.toml`
- `crates/nh-core/src/command.rs`
- `crates/nh-remote/src/remote.rs`
Updated from 9e824a3 to 0e42e31
♻️ Duplicate comments (1)
crates/nh-remote/src/remote.rs (1)
1654-1704: ⚠️ Potential issue | 🔴 Critical — Piped output can still deadlock this polling loop.

This path starts SSH with both stdout and stderr piped, then repeatedly calls `job.wait_timeout(...)` without draining either pipe until after the child exits. In `subprocess` 1.0.0, `Job::wait()`/`wait_timeout()` do not drain piped output, while `join()`, `capture()`, and the communication helpers are the drain-safe APIs. A verbose remote build can therefore fill the pipe buffer, block the SSH child, and keep this loop from ever observing completion. (docs.rs)

Please either drain `job.stdout`/`job.stderr` concurrently while polling, or switch this path to a drain-aware API and rework the interrupt handling around that.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/nh-remote/src/remote.rs`, around lines 1654-1704: the loop using `Job::wait_timeout` on the spawned SSH `Job` while both stdout and stderr are piped can deadlock because `wait_timeout` does not drain pipes. Either (A) spawn threads/tasks to continuously read from `job.stdout` and `job.stderr` into buffers while the loop polls `get_interrupt_flag()`/`job.wait_timeout`, or (B) refactor to use a drain-aware API such as `Job::capture()`/`Job::join()`/`communicate()` to collect stdout/stderr atomically and then implement interrupt handling around that call (ensure `attempt_remote_cleanup` and `job.kill()` are invoked if interrupted while using the capture/join path). Make sure readers safely share the captured buffers (or return results) so the later `exit_status`/error check and the output `String` use the drained data instead of accessing `job.stdout`/`job.stderr` after join/capture.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3db1a262-3c3f-46a1-9b82-bd4f3f4fe85a
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (3)
- `Cargo.toml`
- `crates/nh-core/src/command.rs`
- `crates/nh-remote/src/remote.rs`
🚧 Files skipped from review as they are similar to previous changes (1)
- Cargo.toml
Sanity Checking
- I ran `nix fmt` to format my Nix code
- I ran `cargo fmt` to format my Rust code
- I ran `cargo clippy` and fixed any new linter warnings.
- logic
- description.

- x86_64-linux
- aarch64-linux
- x86_64-darwin
- aarch64-darwin

Add a 👍 reaction to pull requests you find important.
Summary by CodeRabbit
- Chores
- Tests
- Refactor