build(deps): upgrade crates to latest version #635
Conversation
…nstead of String
- Use `unwrap()` in `Config::user()` method to handle `Result`
- Update assertion in test to match the fixed behavior
Pull request overview
This pull request upgrades the Rust toolchain and numerous cryptographic dependencies to their latest versions, requiring extensive API adaptations throughout the codebase. The changes include migration from rust-toolchain 1.88.0 to stable, updating from generic-array to hybrid-array, changing RNG APIs from OsRng to rng(), and adapting to new cryptographic library interfaces across the RustCrypto ecosystem.
Changes:
- Upgraded cryptographic dependencies (aes, digest, hmac, ed25519-dalek, elliptic-curve, p256/p384/p521, pkcs8, etc.) to RC/pre-release versions
- Migrated from `generic-array 0.14` to `hybrid-array 0.4.7` for type-level array handling
- Updated RNG usage from `OsRng` to `rand::rng()` throughout the codebase
- Changed MAC and cipher APIs to return `Result` instead of direct values for better error handling
- Updated PKCS8 Ed25519 encoding to comply with RFC 8410, with backward compatibility
- Modernized clap CLI argument syntax (`multiple = true` → `num_args = 1..`)
Reviewed changes
Copilot reviewed 66 out of 66 changed files in this pull request and generated 6 comments.
Summary per file:
| File | Description |
|---|---|
| rust-toolchain.toml | Changed from specific version 1.88.0 to "stable" channel |
| Cargo.toml | Major dependency version updates, many to RC/pre-release versions |
| russh/Cargo.toml | Updated dependency references and feature flags (rsa-sha1 → sha1) |
| russh/src/mac/*.rs | Updated MAC algorithm APIs to return Result for error handling |
| russh/src/cipher/*.rs | Updated cipher APIs to return Result, migrated to hybrid-array |
| russh/src/keys/format/*.rs | Updated PKCS8 encoding with RFC8410 compliance, fixed Ed25519 handling |
| russh/src/kex/*.rs | Updated key exchange implementations for new crypto APIs |
| russh/src/server/encrypted.rs | Added certificate validity time unwrapping (potential panic) |
| russh/src/kex/dh/groups.rs | Replaced gen_biguint_range with manual rejection sampling for DH keys |
| russh-config/src/lib.rs | Updated whoami API usage (now returns Result) |
| russh/tests/*.rs | Migrated from OsRng to rng() in all test files |
| russh/examples/*.rs | Updated examples with new RNG APIs and clap argument syntax |
| .github/workflows/*.yml | Updated checkout action from v2 to v6, reformatted indentation |
```rust
let valid_after_time = cert.valid_after_time().unwrap();
let valid_before_time = cert.valid_before_time().unwrap();
```
Using unwrap() on valid_after_time() and valid_before_time() can cause panics if the certificate API now returns Option<SystemTime> instead of SystemTime. The API change suggests these methods might now return Option to handle cases where time values are not present or invalid. This should be handled gracefully with proper error handling or a fallback strategy instead of unwrapping.
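A stdlib-only sketch of the graceful handling the reviewer is asking for, assuming the new API returns `Option<SystemTime>`. `check_validity` and the error strings are hypothetical names for illustration, not russh's actual code:

```rust
use std::time::{Duration, SystemTime};

// Sketch of graceful handling, assuming the certificate API now returns
// Option<SystemTime>. A missing timestamp becomes an error, not a panic.
fn check_validity(
    valid_after: Option<SystemTime>,
    valid_before: Option<SystemTime>,
    now: SystemTime,
) -> Result<(), &'static str> {
    let after = valid_after.ok_or("certificate has no valid_after time")?;
    let before = valid_before.ok_or("certificate has no valid_before time")?;
    if now < after || now >= before {
        return Err("certificate is outside its validity window");
    }
    Ok(())
}

fn main() {
    let now = SystemTime::now();
    // A certificate with no validity times is rejected instead of panicking.
    assert!(check_validity(None, None, now).is_err());
    // A window spanning `now` is accepted.
    let ok = check_validity(
        Some(now - Duration::from_secs(60)),
        Some(now + Duration::from_secs(60)),
        now,
    );
    assert!(ok.is_ok());
    println!("validity checks passed");
}
```

The caller can then map the error into the existing auth-rejection path instead of aborting the server task.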
```rust
// Decrypt in-place, block by block.
for chunk in dec.chunks_exact_mut(16) {
    // Make the block size explicit to avoid `Array<u8, _>` inference failures.
    let block: &mut Block = chunk.try_into().expect("chunk is 16 bytes");
```
The code uses the Block type on line 40 without explicitly importing it. While it might come from use aes::*; on line 1, this is fragile and unclear. The Block type should be explicitly imported or fully qualified (e.g., aes::Block, or imported via a specific path) to keep the code clear and to prevent compilation errors if the wildcard import changes.
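A stdlib-only illustration of the point, with a local `Block` alias standing in for the cipher crate's 16-byte block type (the real fix would be an explicit `use aes::Block;` or a fully qualified path):

```rust
// Local alias standing in for the cipher crate's block type; the actual
// fix in the PR would import it explicitly rather than via a wildcard.
type Block = [u8; 16];

fn xor_blocks_in_place(data: &mut [u8], key: &Block) {
    for chunk in data.chunks_exact_mut(16) {
        // The explicit `&mut Block` annotation makes the conversion target
        // unambiguous, so type inference cannot pick the wrong array size.
        let block: &mut Block = chunk.try_into().expect("chunk is 16 bytes");
        for (byte, k) in block.iter_mut().zip(key.iter()) {
            *byte ^= k;
        }
    }
}

fn main() {
    let mut data = vec![0u8; 32];
    xor_blocks_in_place(&mut data, &[0x5a; 16]);
    assert!(data.iter().all(|&b| b == 0x5a));
    println!("processed {} blocks", data.len() / 16);
}
```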
```diff
     .or(self.host_config.user.as_deref())
     .map(ToString::to_string)
-    .unwrap_or_else(whoami::username)
+    .unwrap_or_else(|| whoami::username().unwrap())
```
The whoami::username() now returns a Result instead of a String, and calling unwrap() on it can panic if username retrieval fails (e.g., in restricted environments, containers, or certain OS configurations). This should be handled more gracefully, either by providing a fallback value like "unknown" or by propagating the error.
Suggested change:
```diff
-    .unwrap_or_else(|| whoami::username().unwrap())
+    .unwrap_or_else(|| whoami::username().unwrap_or_else(|_| "unknown".to_string()))
```
```diff
@@ -1,2 +1,2 @@
 [toolchain]
-channel = "1.88.0"
+channel = "stable"
```
Changing the Rust toolchain from a specific version (1.88.0) to "stable" means the project will use whatever stable Rust version is currently installed. This can lead to unpredictable builds across different environments and time periods, as "stable" constantly changes. Consider using a specific version or at least a minimum version constraint for reproducible builds and to avoid potential breakage from future Rust updates.
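One middle-ground sketch, pinning a known-good release and bumping it deliberately rather than floating on whatever `stable` resolves to at build time (the version below simply reuses the previously pinned one for illustration; the project would pick its own MSRV):

```toml
# rust-toolchain.toml — pin an explicit release for reproducible builds.
[toolchain]
channel = "1.88.0"
```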
```toml
aes = "0.9.0-rc.4"
aws-lc-rs = "1.15.4"
block-padding = "0.4.2"
cbc = "0.2.0-rc.3"
ctr = "0.10.0-rc.3"
curve25519-dalek = "5.0.0-pre.6"
der = "0.8.0-rc.10"
des = "0.9.0-rc.3"
digest = "0.11.0-rc.11"
ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core", "pkcs8"] }
elliptic-curve = { version = "0.14.0-rc.28", features = ["ecdh"] }
hybrid-array = "0.4.7"
hmac = "0.13.0-rc.5"
libcrux-ml-kem = "0.0.6"
md5 = "0.8"
num-bigint = { version = "0.4.6", features = ["rand"] }
p256 = { version = "0.14.0-rc.7", features = ["ecdh"] }
p384 = { version = "0.14.0-rc.7", features = ["ecdh"] }
p521 = { version = "0.14.0-rc.7", features = ["ecdh"] }
pbkdf2 = "0.13.0-rc.9"
pkcs1 = "0.8.0-rc.4"
pkcs5 = "0.8.0-rc.13"
pkcs8 = { version = "0.11.0-rc.10", features = ["pkcs5", "encryption", "std"] }
ring = "0.17.14"
rsa = "0.10.0-rc.15"
sec1 = { version = "0.8.0-rc.13", features = ["der"] }
sha1 = { version = "0.11.0-rc.5", features = ["oid"] }
sha2 = { version = "0.11.0-rc.5", features = ["oid"] }
signature = "3.0.0-rc.10"
spki = "0.8.0-rc.4"
ssh-encoding = { version = "0.3.0-rc.7", features = ["bytes"] }
ssh-key = { version = "0.7.0-rc.8", features = ["ed25519", "p256", "p384", "p521", "encryption", "ppk", "sha1"] }
```
Many critical cryptography dependencies are using release candidate (RC) versions rather than stable releases. This includes aes (0.9.0-rc.4), cbc (0.2.0-rc.3), digest (0.11.0-rc.11), ed25519-dalek (3.0.0-pre.6), elliptic-curve (0.14.0-rc.28), hmac (0.13.0-rc.5), p256/p384/p521 (0.14.0-rc.7), pkcs1/pkcs5/pkcs8 (RC versions), rsa (0.10.0-rc.15), and others. RC and pre-release versions are not API-stable and may contain bugs or security issues that haven't been fully vetted. For a production SSH library handling cryptographic operations, using stable releases is strongly recommended for security and stability.
Suggested change:
```diff
-aes = "0.9.0-rc.4"
-aws-lc-rs = "1.15.4"
-block-padding = "0.4.2"
-cbc = "0.2.0-rc.3"
-ctr = "0.10.0-rc.3"
-curve25519-dalek = "5.0.0-pre.6"
-der = "0.8.0-rc.10"
-des = "0.9.0-rc.3"
-digest = "0.11.0-rc.11"
-ed25519-dalek = { version = "3.0.0-pre.6", features = ["rand_core", "pkcs8"] }
-elliptic-curve = { version = "0.14.0-rc.28", features = ["ecdh"] }
-hybrid-array = "0.4.7"
-hmac = "0.13.0-rc.5"
-libcrux-ml-kem = "0.0.6"
-md5 = "0.8"
-num-bigint = { version = "0.4.6", features = ["rand"] }
-p256 = { version = "0.14.0-rc.7", features = ["ecdh"] }
-p384 = { version = "0.14.0-rc.7", features = ["ecdh"] }
-p521 = { version = "0.14.0-rc.7", features = ["ecdh"] }
-pbkdf2 = "0.13.0-rc.9"
-pkcs1 = "0.8.0-rc.4"
-pkcs5 = "0.8.0-rc.13"
-pkcs8 = { version = "0.11.0-rc.10", features = ["pkcs5", "encryption", "std"] }
-ring = "0.17.14"
-rsa = "0.10.0-rc.15"
-sec1 = { version = "0.8.0-rc.13", features = ["der"] }
-sha1 = { version = "0.11.0-rc.5", features = ["oid"] }
-sha2 = { version = "0.11.0-rc.5", features = ["oid"] }
-signature = "3.0.0-rc.10"
-spki = "0.8.0-rc.4"
-ssh-encoding = { version = "0.3.0-rc.7", features = ["bytes"] }
-ssh-key = { version = "0.7.0-rc.8", features = ["ed25519", "p256", "p384", "p521", "encryption", "ppk", "sha1"] }
+aes = "0.9.0"
+aws-lc-rs = "1.15.4"
+block-padding = "0.4.2"
+cbc = "0.2.0"
+ctr = "0.10.0"
+curve25519-dalek = "5.0.0"
+der = "0.8.0"
+des = "0.9.0"
+digest = "0.11.0"
+ed25519-dalek = { version = "3.0.0", features = ["rand_core", "pkcs8"] }
+elliptic-curve = { version = "0.14.0", features = ["ecdh"] }
+hybrid-array = "0.4.7"
+hmac = "0.13.0"
+libcrux-ml-kem = "0.0.6"
+md5 = "0.8"
+num-bigint = { version = "0.4.6", features = ["rand"] }
+p256 = { version = "0.14.0", features = ["ecdh"] }
+p384 = { version = "0.14.0", features = ["ecdh"] }
+p521 = { version = "0.14.0", features = ["ecdh"] }
+pbkdf2 = "0.13.0"
+pkcs1 = "0.8.0"
+pkcs5 = "0.8.0"
+pkcs8 = { version = "0.11.0", features = ["pkcs5", "encryption", "std"] }
+ring = "0.17.14"
+rsa = "0.10.0"
+sec1 = { version = "0.8.0", features = ["der"] }
+sha1 = { version = "0.11.0", features = ["oid"] }
+sha2 = { version = "0.11.0", features = ["oid"] }
+signature = "3.0.0"
+spki = "0.8.0"
+ssh-encoding = { version = "0.3.0", features = ["bytes"] }
+ssh-key = { version = "0.7.0", features = ["ed25519", "p256", "p384", "p521", "encryption", "ppk", "sha1"] }
```
```rust
loop {
    let mut bytes = vec![0u8; byte_len];
    rng.fill_bytes(&mut bytes);
    let num = BigUint::from_bytes_be(&bytes) % &q;
    if num >= low {
        self.private_key = num.clone();
        break;
    }
}
```
The manual random number generation for DH private keys (lines 304-312) has two weaknesses. First, reducing the candidate with `% &q` introduces modulo bias: whenever 2^(8 * byte_len) is not a multiple of q, smaller residues are slightly more likely than larger ones. Second, the variable number of loop iterations before a candidate satisfies `num >= low` can leak timing information about the secret. num-bigint's gen_biguint_range handled the distribution correctly; a manual replacement should reject out-of-range candidates outright instead of reducing mod q, and keep per-iteration work independent of the secret value.
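A self-contained sketch of bias-free rejection sampling, using `u64` in place of `BigUint` and a toy xorshift generator so the example runs standalone. A real implementation would keep `BigUint` and a CSPRNG; all names here are illustrative, not russh's:

```rust
// Unbiased sampling of a value in [low, q) by rejection: draw only
// bit_len(q) random bits and reject out-of-range candidates, instead of
// reducing mod q (which skews toward smaller values whenever the
// sampling space is not a multiple of q).
fn sample_in_range(low: u64, q: u64, next_random: &mut impl FnMut() -> u64) -> u64 {
    let bits = 64 - q.leading_zeros(); // bit length of q
    loop {
        // Keep only `bits` bits so the accept rate is at least 50%.
        let candidate = next_random() >> (64 - bits);
        if candidate >= low && candidate < q {
            return candidate;
        }
    }
}

fn main() {
    // Deterministic xorshift stand-in RNG, for demonstration only
    // (real DH key generation must use a CSPRNG).
    let mut state: u64 = 88172645463325252;
    let mut rng = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state
    };
    let q: u64 = (1 << 40) + 12345; // not a power of two, so bias matters
    let v = sample_in_range(2, q, &mut rng);
    assert!(v >= 2 && v < q);
    println!("sampled value in range");
}
```

The rejection loop still runs a variable number of iterations, but each iteration's work no longer depends on the value of the accepted secret, and the accepted distribution is exactly uniform over [low, q).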