refactor(attestation): move TCB measurement extraction to build time #1646
Conversation
gilcu3 left a comment
Just commenting since some cleanups are still coming, but overall it looks good; I did not check whether it works as expected, though.
JFYI: Not sure why Copilot’s review didn’t kick in automatically (AFAIK it should trigger for every PR now), so I just ran it manually.
Pull request overview
This PR refactors TCB measurement extraction by moving it from runtime to build time, eliminating all JSON parsing overhead in the smart contract. TCB measurements (MRTD, RTMR0-2, and key-provider digest) are now extracted from JSON files during compilation and embedded as static Rust structures.
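For orientation, the file generated into OUT_DIR has roughly the following shape; this is a sketch based on the build script quoted below, with zero arrays standing in for the real 48-byte values:

```rust
// measurements_generated.rs (illustrative shape only; real entries hold the decoded hex digests)
use attestation::measurements::*;

pub const EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] = &[
    ExpectedMeasurements {
        rtmrs: Measurements {
            mrtd: [0u8; 48],
            rtmr0: [0u8; 48],
            rtmr1: [0u8; 48],
            rtmr2: [0u8; 48],
        },
        key_provider_event_digest: [0u8; 48],
    },
];
```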
Key Changes:
- Adds build.rs script that discovers and parses tcb_info*.json files from assets/ directory at build time
- Generates static Rust structures containing pre-decoded measurement values
- Removes runtime JSON parsing and hex decoding from attestation verification flow
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| crates/mpc-attestation/build.rs | New build script that parses JSON measurement files and generates static Rust constants |
| crates/mpc-attestation/Cargo.toml | Adds build script reference and build-dependencies for JSON parsing |
| crates/mpc-attestation/src/attestation.rs | Replaces runtime JSON parsing with statically compiled measurements via all_expected_measurements() |
| crates/attestation/src/measurements.rs | Removes from_embedded_tcb_info() method and related runtime parsing logic |
| crates/mpc-attestation/measurements_build.md | Comprehensive documentation for the build-time measurement generation process |
crates/mpc-attestation/build.rs
Outdated
```rust
for entry in fs::read_dir(&assets_dir).unwrap() {
    let entry = entry.unwrap();
    let path = entry.path();

    if path.extension().and_then(|x| x.to_str()) == Some("json")
        && path
            .file_name()
            .unwrap()
            .to_str()
            .unwrap()
            .starts_with("tcb_info")
    {
        measurement_files.push(path);
    }
}
```
Copilot AI · Dec 11, 2025
The build script panics with unwrap() if the assets directory doesn't exist or if there are permission issues reading it. This will cause cryptic build failures. Consider adding a proper error message or using expect() with a descriptive message to help developers understand what went wrong. For example, if the assets directory is missing, the error should clearly indicate that the directory is expected to exist and contain tcb_info*.json files.
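A minimal sketch of the kind of message Copilot is asking for, reusing the names from the snippet above (the exact wording is illustrative):

```rust
// Fail the build with an actionable message instead of a bare unwrap() panic.
let entries = std::fs::read_dir(&assets_dir).unwrap_or_else(|e| {
    panic!(
        "Failed to read assets directory '{}': {}. \
         The directory must exist and contain tcb_info*.json files.",
        assets_dir.display(),
        e
    )
});
```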
crates/mpc-attestation/build.rs
Outdated
```rust
let json_str = fs::read_to_string(&path).unwrap();
let tcb: serde_json::Value = serde_json::from_str(&json_str).unwrap();

// extract 4 RTMRs + MRTD
let mrtd = decode_hex(tcb["mrtd"].as_str().unwrap());
let rtmr0 = decode_hex(tcb["rtmr0"].as_str().unwrap());
let rtmr1 = decode_hex(tcb["rtmr1"].as_str().unwrap());
let rtmr2 = decode_hex(tcb["rtmr2"].as_str().unwrap());
```
Copilot AI · Dec 11, 2025
The build script panics with unwrap() at multiple points when parsing JSON. If a JSON file is malformed or missing required fields, the build will fail with an unhelpful panic. Consider using expect() with descriptive messages that include the filename and what field is missing or invalid. For example, "Failed to parse {filename}: missing 'mrtd' field" or "Failed to decode hex in {filename} for field 'rtmr0'".
pbeza left a comment
Nits only.
crates/mpc-attestation/build.rs
Outdated
```rust
// Find all tcb_info*.json files (prod, dev, future ones)
let mut measurement_files = Vec::new();
for entry in fs::read_dir(&assets_dir).unwrap() {
```
@gilcu3 is it safe to have unwrap()s in build.rs? 🤔 Since it runs at compile time, I assume it’s fine, but just double-checking.
I assumed it is fine as well, for that reason
pbeza left a comment
Copilot flagged a few good concerns — PTAL when you get a moment.
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 7 comments.
> You can add more files in the future (e.g., staging, additional images).
> Any file matching the prefix:
>
> ```
> tcb_info*
> ```
>
> will be automatically included at build time.
Copilot AI · Dec 11, 2025
The documentation mentions that "Any file matching the prefix: tcb_info*" will be automatically included, but the build script only checks if the filename starts with "tcb_info". Consider clarifying in the documentation whether other file extensions (e.g., .txt, .yaml) would be included if they match the prefix, or specify that only .json files are processed.
crates/mpc-attestation/build.rs
Outdated
```rust
fn main() {
    // Find assets directory
    let manifest_dir =
        env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR must be set by Cargo");

    let assets_dir = PathBuf::from(&manifest_dir).join(ASSETS_DIR_NAME);

    println!("cargo:rerun-if-changed={}", ASSETS_DIR_NAME);

    // First pass: ensure directory exists + register rerun triggers
    let entries = fs::read_dir(&assets_dir).unwrap_or_else(|e| {
        panic!(
            "Failed to read assets directory '{}': {}.\n\
             This directory must exist and contain tcb_info*.json files.",
            assets_dir.display(),
            e
        )
    });

    let mut measurement_files = Vec::new();

    for entry in entries {
        let entry = entry.unwrap_or_else(|e| {
            panic!(
                "Failed to read an entry inside '{}': {}",
                assets_dir.display(),
                e
            )
        });

        let path = entry.path();

        if path.extension().and_then(|x| x.to_str()) == Some("json") {
            println!("cargo:rerun-if-changed={}", path.display());

            // Only include tcb_info*.json
            let filename = path
                .file_name()
                .and_then(|x| x.to_str())
                .unwrap_or_else(|| {
                    panic!(
                        "Found a JSON file with invalid UTF-8 filename inside '{}': {:?}",
                        assets_dir.display(),
                        path.file_name()
                    )
                });

            if filename.starts_with("tcb_info") {
                measurement_files.push(path);
            }
        }
    }

    if measurement_files.is_empty() {
        panic!(
            "No tcb_info*.json files found in directory '{}'. \
             Add files such as tcb_info.json or tcb_info_dev.json.",
            assets_dir.display()
        );
    }

    // Write generated Rust file
    let out_dir = PathBuf::from(env::var("OUT_DIR").expect("OUT_DIR must be provided by Cargo"));
    let out_file = out_dir.join("measurements_generated.rs");

    let mut f = File::create(&out_file).unwrap_or_else(|e| {
        panic!(
            "Failed to create output file '{}': {}",
            out_file.display(),
            e
        )
    });

    writeln!(
        f,
        "// AUTO-GENERATED FILE. DO NOT EDIT.\n\
         use attestation::measurements::*;\n\
         pub const EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] = &[\n"
    )
    .expect("failed to write prelude to generated file");

    // Process each file
    for path in measurement_files {
        let json_str = fs::read_to_string(&path)
            .unwrap_or_else(|e| panic!("Failed to read JSON file '{}': {}", path.display(), e));

        let tcb: serde_json::Value = serde_json::from_str(&json_str)
            .unwrap_or_else(|e| panic!("Failed to parse JSON file '{}': {}", path.display(), e));

        // Extract RTMRs + MRTD
        let mrtd = decode_measurement(&tcb, "mrtd", &path);
        let rtmr0 = decode_measurement(&tcb, "rtmr0", &path);
        let rtmr1 = decode_measurement(&tcb, "rtmr1", &path);
        let rtmr2 = decode_measurement(&tcb, "rtmr2", &path);

        // Extract key-provider digest
        let key_provider_digest = extract_key_provider_digest(&tcb, &path);

        // Emit Rust struct
        writeln!(
            f,
            "    ExpectedMeasurements {{ \
             rtmrs: Measurements {{ \
             mrtd: {:?}, \
             rtmr0: {:?}, \
             rtmr1: {:?}, \
             rtmr2: {:?} \
             }}, \
             key_provider_event_digest: {:?}, \
             }},",
            mrtd, rtmr0, rtmr1, rtmr2, key_provider_digest
        )
        .expect("failed writing measurement struct");
    }

    writeln!(f, "];").expect("failed writing closing bracket");
}

/// Extract a measurement field and decode hex, with good error messages.
fn decode_measurement(tcb: &serde_json::Value, field: &str, path: &PathBuf) -> [u8; 48] {
    let hex = tcb[field].as_str().unwrap_or_else(|| {
        panic!(
            "Field '{}' missing or not a string in '{}'",
            field,
            path.display()
        )
    });

    decode_hex(hex, field, path)
}

/// Extract the key-provider hash with clear diagnostics
fn extract_key_provider_digest(tcb: &serde_json::Value, path: &PathBuf) -> [u8; 48] {
    let events = tcb["event_log"]
        .as_array()
        .unwrap_or_else(|| panic!("event_log missing or not an array in '{}'", path.display()));

    for event in events {
        let event_name = event["event"].as_str().unwrap_or("");

        if event_name == "key-provider" {
            let digest_hex = event["digest"].as_str().unwrap_or_else(|| {
                panic!(
                    "key-provider event in '{}' does not contain a valid digest",
                    path.display()
                )
            });

            return decode_hex(digest_hex, "key-provider digest", path);
        }
    }

    panic!("No key-provider event found in '{}'", path.display());
}

/// Decode a hex string into a 48-byte array with validation
fn decode_hex(hex: &str, field: &str, path: &PathBuf) -> [u8; 48] {
    let bytes = hex::decode(hex).unwrap_or_else(|e| {
        panic!(
            "Invalid hex in field '{}' in '{}': {}",
            field,
            path.display(),
            e
        )
    });

    if bytes.len() != 48 {
        panic!(
            "Expected 48-byte measurement for field '{}' in '{}', got {} bytes",
            field,
            path.display(),
            bytes.len()
        );
    }

    let mut arr = [0u8; 48];
    arr.copy_from_slice(&bytes);
    arr
}
```
Copilot AI · Dec 11, 2025
The build script lacks test coverage. Consider adding tests to verify that the build script correctly parses valid JSON files, handles invalid JSON gracefully, and fails with clear error messages when required fields are missing. This would help catch issues early during development.
```rust
pub fn all_expected_measurements() -> &'static [ExpectedMeasurements] {
    EXPECTED_MEASUREMENTS
}
```
Copilot AI · Dec 11, 2025
The new all_expected_measurements function and the generated measurements lack test coverage. Consider adding a test that verifies the function returns the expected number of measurement sets (e.g., at least 2 for prod and dev), validates the structure of the returned data, and ensures the measurements are not all zeros.
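A sketch of the kind of test being suggested; the assumption that at least two measurement sets (prod and dev) exist is taken from Copilot's example, and the field names follow the generated struct literal shown above:

```rust
#[cfg(test)]
mod generated_measurements_tests {
    use super::*;

    #[test]
    fn generated_measurements_are_present_and_nonzero() {
        let all = all_expected_measurements();
        // Assumption: the assets directory ships at least prod + dev tcb_info files.
        assert!(all.len() >= 2);
        for m in all {
            assert_ne!(m.rtmrs.mrtd, [0u8; 48]);
            assert_ne!(m.key_provider_event_digest, [0u8; 48]);
        }
    }
}
```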
```
crates/mpc-attestation/assets/
```
Copilot AI · Dec 11, 2025
The documentation states "Human-readable TCB measurement JSON files live in:" but doesn't mention that these files should not be edited directly in production. Consider adding a note about the security implications of modifying these files and the need for proper validation/approval processes when updating measurements.
```suggestion
> **⚠️ Security Warning:**
> **Do not edit TCB measurement JSON files directly in production environments.**
> These files are critical for attestation and system security.
> Any updates must go through proper validation and approval processes to prevent accidental or malicious changes.
```
crates/mpc-attestation/build.rs
Outdated
```rust
fn decode_measurement(tcb: &serde_json::Value, field: &str, path: &PathBuf) -> [u8; 48] {
    let hex = tcb[field].as_str().unwrap_or_else(|| {
        panic!(
            "Field '{}' missing or not a string in '{}'",
            field,
            path.display()
        )
    });

    decode_hex(hex, field, path)
}

/// Extract the key-provider hash with clear diagnostics
fn extract_key_provider_digest(tcb: &serde_json::Value, path: &PathBuf) -> [u8; 48] {
    let events = tcb["event_log"]
        .as_array()
        .unwrap_or_else(|| panic!("event_log missing or not an array in '{}'", path.display()));

    for event in events {
        let event_name = event["event"].as_str().unwrap_or("");

        if event_name == "key-provider" {
            let digest_hex = event["digest"].as_str().unwrap_or_else(|| {
                panic!(
                    "key-provider event in '{}' does not contain a valid digest",
                    path.display()
                )
            });

            return decode_hex(digest_hex, "key-provider digest", path);
        }
    }

    panic!("No key-provider event found in '{}'", path.display());
}

/// Decode a hex string into a 48-byte array with validation
fn decode_hex(hex: &str, field: &str, path: &PathBuf) -> [u8; 48] {
    let bytes = hex::decode(hex).unwrap_or_else(|e| {
        panic!(
            "Invalid hex in field '{}' in '{}': {}",
            field,
            path.display(),
            e
        )
    });

    if bytes.len() != 48 {
        panic!(
            "Expected 48-byte measurement for field '{}' in '{}', got {} bytes",
            field,
            path.display(),
            bytes.len()
        );
    }

    let mut arr = [0u8; 48];
    arr.copy_from_slice(&bytes);
    arr
}
```
Copilot AI · Dec 11, 2025
The decode_hex function has a hardcoded array size of 48 bytes. This magic number appears multiple times throughout the build script. Consider defining a constant at the top of the file like 'const MEASUREMENT_SIZE: usize = 48;' to improve maintainability and make it easier to understand what this value represents.
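A sketch of the suggested constant, folded into the decode_hex shown above (the `try_into` form mirrors a refactor that appears later in this thread):

```rust
use std::path::Path;

/// TDX MRTD/RTMR values are SHA-384 digests, i.e. 48 bytes.
const MEASUREMENT_SIZE: usize = 48;

fn decode_hex(hex: &str, field: &str, path: &Path) -> [u8; MEASUREMENT_SIZE] {
    let bytes = hex::decode(hex).unwrap_or_else(|e| {
        panic!("Invalid hex in field '{}' in '{}': {}", field, path.display(), e)
    });
    bytes.try_into().unwrap_or_else(|v: Vec<u8>| {
        panic!(
            "Expected {}-byte measurement for field '{}' in '{}', got {} bytes",
            MEASUREMENT_SIZE,
            field,
            path.display(),
            v.len()
        )
    })
}
```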
gilcu3 left a comment
Looks good overall, but I found a bug in the cargo annotation and noticed we are passing the path parameter to several functions that don't need it. I added a comment with a suggestion on how to achieve that.
```sh
cargo clean -p mpc-attestation
cargo build -p mpc-attestation
```
Is the clean still required now, after the last parameter that was added? I would guess not, but please check just in case.
```rust
let assets_dir = PathBuf::from(&manifest_dir).join(ASSETS_DIR_NAME);

println!("cargo:rerun-if-changed={}", ASSETS_DIR_NAME);
```
| println!("cargo:rerun-if-changed={}", ASSETS_DIR_NAME); | |
| println!("cargo::rerun-if-changed={}", ASSETS_DIR_NAME); |
https://doc.rust-lang.org/cargo/reference/build-scripts.html#rerun-if-changed
ChatGPT does not agree with you, and from claude.ai:

- `cargo:rerun-if-changed=...` ✅ (works everywhere)
- `cargo::rerun-if-changed=...` ✅ (works in Rust 1.77+)

Stick with the single colon unless you have a specific reason to require Rust 1.77+.
Interesting! You are probably right: https://github.com/search?q=cargo%3Arerun-if-changed&type=code
We are using Rust 1.86, so it's better to use the form from the current docs, as the other one might be deprecated (it is not mentioned there).
crates/mpc-attestation/build.rs
Outdated
```rust
/// Extract a measurement field and decode hex, with good error messages.
fn decode_measurement(tcb: &serde_json::Value, field: &str, path: &Path) -> [u8; 48] {
    let hex = tcb[field].as_str().unwrap_or_else(|| {
        panic!(
            "Field '{}' missing or not a string in '{}'",
            field,
            path.display()
        )
    });

    decode_hex(hex, field, path)
}
```
These functions take a path parameter just to print it out in case of errors. I don't think that's the correct approach. The function should return a Result, and then the caller should attach the corresponding path information if required.
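A sketch of the split being described: the helper returns a plain Result and only the caller, which knows which file it is processing, attaches the path (this mirrors the shape the code takes later in the thread):

```rust
// The helper knows nothing about files; it only reports the offending field.
fn decode_measurement(tcb: &serde_json::Value, field: &str) -> Result<[u8; 48], String> {
    let hex = tcb[field]
        .as_str()
        .ok_or_else(|| format!("field '{}' missing or not a string", field))?;
    hex::decode(hex)
        .map_err(|e| format!("invalid hex in '{}': {}", field, e))?
        .try_into()
        .map_err(|v: Vec<u8>| format!("'{}': expected 48 bytes, got {}", field, v.len()))
}

// The caller attaches the path context:
// let mrtd = decode_measurement(&tcb, "mrtd")
//     .map_err(|e| format!("{}: {}", path.display(), e))?;
```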
I changed it.
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 5 comments.
crates/mpc-attestation/build.rs
Outdated
```rust
panic!(
    "Failed to read assets directory '{}': {}.\n\
     This directory must exist and contain tcb_info*.json files.",
    assets_dir.display(),
    e
)
```
Copilot AI · Dec 11, 2025
Consider using Path::display() to format the path in the error message instead of calling it twice. This is more idiomatic and avoids potential inconsistency if the path display changes.
crates/mpc-attestation/build.rs
Outdated
```rust
writeln!(
    f,
    "    ExpectedMeasurements {{ \
     rtmrs: Measurements {{ \
     mrtd: {:?}, \
     rtmr0: {:?}, \
     rtmr1: {:?}, \
     rtmr2: {:?} \
     }}, \
     key_provider_event_digest: {:?}, \
     }},",
    mrtd, rtmr0, rtmr1, rtmr2, key_provider_digest
)
```
Copilot AI · Dec 11, 2025
The generated Rust code uses the Debug formatting for byte arrays which will produce a verbose output like [1, 2, 3, ...]. Consider formatting these arrays as hex literals instead (e.g., *b"hex_string_here" or using a custom formatting function) to make the generated code more compact and easier to read.
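One possible shape for Copilot's suggestion; the helper name is illustrative and not part of the PR:

```rust
/// Render a byte slice as a Rust array literal of hex bytes,
/// e.g. "[0xf0, 0x6d, 0xfd, ...]" instead of Debug's "[240, 109, 253, ...]".
fn hex_array_literal(bytes: &[u8]) -> String {
    let items: Vec<String> = bytes.iter().map(|b| format!("0x{:02x}", b)).collect();
    format!("[{}]", items.join(", "))
}
```

The `writeln!` above would then interpolate `hex_array_literal(&mrtd)` with `{}` placeholders instead of `{:?}`.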
crates/mpc-attestation/build.rs
Outdated
```rust
writeln!(
    f,
    "// AUTO-GENERATED FILE. DO NOT EDIT.\n\
```
Copilot AI · Dec 11, 2025
The module path in the generated code assumes the structure attestation::measurements::*. Consider adding a comment or documentation explaining that this generated code must be included in a module that has access to these imports, or make the imports more explicit with full paths.
| "// AUTO-GENERATED FILE. DO NOT EDIT.\n\ | |
| "// AUTO-GENERATED FILE. DO NOT EDIT.\n\ | |
| // NOTE: This generated code assumes the module path `attestation::measurements::*` is available.\n\ | |
| // Ensure this file is included in a module that has access to these imports.\n\ |
crates/mpc-attestation/build.rs
Outdated
```rust
        .ok_or_else(|| "event_log missing or not an array".to_string())?;

    for event in events {
        if event["event"].as_str().unwrap_or("") == "key-provider" {
```
Copilot AI · Dec 11, 2025
The code uses unwrap_or("") when extracting event names, which could mask errors if an event structure is malformed. If a "key-provider" event is found but doesn't have a valid event name string (which should be impossible based on the data structure), this could silently skip it. Consider using a more explicit error handling approach or at least documenting why empty string is a safe default.
| if event["event"].as_str().unwrap_or("") == "key-provider" { | |
| let event_name = event["event"] | |
| .as_str() | |
| .ok_or_else(|| "event missing or not a string".to_string())?; | |
| if event_name == "key-provider" { |
```rust
pub fn all_expected_measurements() -> &'static [ExpectedMeasurements] {
    EXPECTED_MEASUREMENTS
}
```
Copilot AI · Dec 11, 2025
Consider adding a test that validates the build-time measurement generation. For example, a test that calls all_expected_measurements() and verifies that it returns a non-empty slice with valid measurements. This would help catch build script issues early and ensure the generated code is correct.
gilcu3 left a comment
Thank you!
```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;
```
Optional nit:
```diff
- use std::fs::{self, File};
- use std::io::Write;
- use std::path::Path;
+ use std::{
+     fs::{self, File},
+     io::Write,
+     path::Path,
+ };
```
```rust
/// Main logic for generating the Rust measurements file.
/// This is fully testable and contains no side effects except writing out_file.
```
I believe this is somewhat redundant.
```diff
- /// Main logic for generating the Rust measurements file.
- /// This is fully testable and contains no side effects except writing out_file.
```
```rust
pub fn generate_measurements(in_dir: &Path, out_file: &Path) -> Result<(), String> {
    // Discover measurement files
    let mut measurement_files = Vec::new();

    let entries = fs::read_dir(in_dir)
        .map_err(|e| format!("Failed to read directory '{}': {}", in_dir.display(), e))?;

    for entry in entries {
        let entry = entry.map_err(|e| format!("Failed to read entry: {}", e))?;
        let path = entry.path();

        if path.extension().and_then(|x| x.to_str()) == Some("json") {
            if let Some(fname) = path.file_name().and_then(|x| x.to_str()) {
                if fname.starts_with("tcb_info") {
                    measurement_files.push(path);
                }
            }
        }
    }

    if measurement_files.is_empty() {
        return Err(format!(
            "No tcb_info*.json files found in '{}'",
            in_dir.display()
        ));
    }

    // Create output file
    let mut f = File::create(out_file).map_err(|e| {
        format!(
            "Failed to create output file '{}': {}",
            out_file.display(),
            e
        )
    })?;

    writeln!(
        f,
        "// AUTO-GENERATED FILE. DO NOT EDIT.\n\
         use attestation::measurements::*;\n\
         pub const EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] = &[\n"
    )
    .map_err(|e| e.to_string())?;

    // Process each file
    for path in measurement_files {
        let json_str = fs::read_to_string(&path)
            .map_err(|e| format!("Failed to read '{}': {}", path.display(), e))?;

        let tcb: Value = serde_json::from_str(&json_str)
            .map_err(|e| format!("Invalid JSON '{}': {}", path.display(), e))?;

        let mrtd = decode_measurement(&tcb, "mrtd")?;
        let rtmr0 = decode_measurement(&tcb, "rtmr0")?;
        let rtmr1 = decode_measurement(&tcb, "rtmr1")?;
        let rtmr2 = decode_measurement(&tcb, "rtmr2")?;

        let key_provider_digest = extract_key_provider_digest(&tcb)?;

        writeln!(
            f,
            "    ExpectedMeasurements {{ \
             rtmrs: Measurements {{ \
             mrtd: {:?}, \
             rtmr0: {:?}, \
             rtmr1: {:?}, \
             rtmr2: {:?} \
             }}, \
             key_provider_event_digest: {:?}, \
             }},",
            mrtd, rtmr0, rtmr1, rtmr2, key_provider_digest
        )
        .map_err(|e| e.to_string())?;
    }

    writeln!(f, "];").map_err(|e| e.to_string())?;

    Ok(())
}
```
I’d generally suggest asking Copilot for help with shortening or simplifying the code you’re writing. It can help you make it more idiomatic and concise.
```suggestion
pub fn generate_measurements(in_dir: &Path, out_file: &Path) -> Result<(), String> {
    let measurement_files: Vec<_> = fs::read_dir(in_dir)
        .map_err(|e| format!("Failed to read directory '{}': {}", in_dir.display(), e))?
        .filter_map(|entry| {
            let path = entry.ok()?.path();
            (path.extension()? == "json" && path.file_name()?.to_str()?.starts_with("tcb_info"))
                .then_some(path)
        })
        .collect();
    if measurement_files.is_empty() {
        return Err(format!(
            "No tcb_info*.json files found in '{}'",
            in_dir.display()
        ));
    }
    let mut f = File::create(out_file).map_err(|e| {
        format!(
            "Failed to create output file '{}': {}",
            out_file.display(),
            e
        )
    })?;
    writeln!(
        f,
        "// AUTO-GENERATED FILE. DO NOT EDIT.\n\
         use attestation::measurements::*;\n\
         pub const EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] = &[\n"
    )
    .map_err(|e| e.to_string())?;
    for path in measurement_files {
        let tcb: Value = serde_json::from_str(
            &fs::read_to_string(&path)
                .map_err(|e| format!("Failed to read '{}': {}", path.display(), e))?,
        )
        .map_err(|e| format!("Invalid JSON '{}': {}", path.display(), e))?;
        writeln!(
            f,
            "    ExpectedMeasurements {{ \
             rtmrs: Measurements {{ \
             mrtd: {:?}, \
             rtmr0: {:?}, \
             rtmr1: {:?}, \
             rtmr2: {:?} \
             }}, \
             key_provider_event_digest: {:?}, \
             }},",
            decode_measurement(&tcb, "mrtd")?,
            decode_measurement(&tcb, "rtmr0")?,
            decode_measurement(&tcb, "rtmr1")?,
            decode_measurement(&tcb, "rtmr2")?,
            extract_key_provider_digest(&tcb)?
        )
        .map_err(|e| e.to_string())?;
    }
    writeln!(f, "];").map_err(|e| e.to_string())?;
    Ok(())
}
```
```rust
}

/// Extract one 48-byte hex measurement field (mrtd, rtmrX)
fn decode_measurement(tcb: &Value, field: &str) -> Result<[u8; 48], String> {
```
Optional nit: it would probably be more natural to use anyhow::Result for error handling (this applies to the entire PR).
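A sketch of the anyhow variant being suggested; it assumes `anyhow` is added to the build-dependencies, which is not part of this PR:

```rust
use anyhow::{bail, Context, Result};

fn decode_measurement(tcb: &serde_json::Value, field: &str) -> Result<[u8; 48]> {
    let hex = tcb[field]
        .as_str()
        .with_context(|| format!("field '{}' missing or not a string", field))?;
    let bytes = hex::decode(hex).with_context(|| format!("invalid hex in '{}'", field))?;
    match bytes.try_into() {
        Ok(arr) => Ok(arr),
        Err(v) => bail!("'{}': expected 48 bytes, got {} bytes", field, v.len()),
    }
}
```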
```rust
    decode_hex(hex).map_err(|e| format!("{}: {}", field, e))
}

/// Extract key-provider digest
```
Let’s avoid repeating what’s already clear from the function name.
```diff
- /// Extract key-provider digest
```
| Err("No key-provider event found".to_string()) | ||
| } | ||
|
|
||
| /// Decode hex string to 48-byte array |
```diff
- /// Decode hex string to 48-byte array
```
```rust
fn decode_hex(hex: &str) -> Result<[u8; 48], String> {
    let bytes = hex::decode(hex).map_err(|e| format!("invalid hex: {}", e))?;

    if bytes.len() != 48 {
        return Err(format!("expected 48 bytes, got {} bytes", bytes.len()));
    }

    let mut arr = [0u8; 48];
    arr.copy_from_slice(&bytes);
    Ok(arr)
}
```
```diff
- fn decode_hex(hex: &str) -> Result<[u8; 48], String> {
-     let bytes = hex::decode(hex).map_err(|e| format!("invalid hex: {}", e))?;
-     if bytes.len() != 48 {
-         return Err(format!("expected 48 bytes, got {} bytes", bytes.len()));
-     }
-     let mut arr = [0u8; 48];
-     arr.copy_from_slice(&bytes);
-     Ok(arr)
- }
+ fn decode_hex(hex: &str) -> Result<[u8; 48], String> {
+     hex::decode(hex)
+         .map_err(|e| format!("invalid hex: {}", e))?
+         .try_into()
+         .map_err(|v: Vec<u8>| format!("expected 48 bytes, got {} bytes", v.len()))
+ }
```
```rust
const MPC_IMAGE_HASH_EVENT: &str = "mpc-image-digest";

include!(concat!(env!("OUT_DIR"), "/measurements_generated.rs"));
```
As mentioned elsewhere, let’s extract this filename into a constant.
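One possible shape for this; the `rustc-env` route is just one way to avoid repeating the literal on the library side, and the names are illustrative:

```rust
// build.rs (sketch): name the generated file once and hand the full path to rustc.
const GENERATED_FILE_NAME: &str = "measurements_generated.rs";

fn emit_generated_path(out_dir: &std::path::Path) {
    let path = out_dir.join(GENERATED_FILE_NAME);
    println!("cargo::rustc-env=MEASUREMENTS_GENERATED_PATH={}", path.display());
}
```

The library side could then use `include!(env!("MEASUREMENTS_GENERATED_PATH"));` instead of hard-coding the filename at the call site.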
```rust
#[test]
fn test_generate_measurements_with_two_files() {
    use tempfile::tempdir;
    use std::fs;

    // Create temporary input + output directories
    let assets = tempdir().expect("tmp assets");
    let out = tempdir().expect("tmp out dir");

    // -------- JSON FILE #1 (same as in first test) --------
    let json1 = r#"{
        "mrtd": "f06dfda6dce1cf904d4e2bab1dc370634cf95cefa2ceb2de2eee127c9382698090d7a4a13e14c536ec6c9c3c8fa87077",
        "rtmr0": "e673be2f70beefb70b48a6109eed4715d7270d4683b3bf356fa25fafbf1aa76e39e9127e6e688ccda98bdab1d4d47f46",
        "rtmr1": "a7b523278d4f914ee8df0ec80cd1c3d498cbf1152b0c5eaf65bad9425072874a3fcf891e8b01713d3d9937e3e0d26c15",
        "rtmr2": "24847f5c5a2360d030bc4f7b8577ce32e87c4d051452c937e91220cab69542daef83433947c492b9c201182fc9769bbe",
        "event_log": [
            {
                "event": "key-provider",
                "digest": "74ca939b8c3c74aab3c30966a788f7743951d54a936a711dd01422f003ff9df6666f3cc54975d2e4f35c829865583f0f"
            }
        ]
    }"#;

    // -------- JSON FILE #2 (your new values) --------
    let json2 = r#"{
        "mrtd": "a06dfda6dce1cf904d4e2bab1dc370634cf95cefa2ceb2de2eee127c9382698090d7a4a13e14c536ec6c9c3c8fa87077",
        "rtmr0": "a673be2f70beefb70b48a6109eed4715d7270d4683b3bf356fa25fafbf1aa76e39e9127e6e688ccda98bdab1d4d47f46",
        "rtmr1": "d7b523278d4f914ee8df0ec80cd1c3d498cbf1152b0c5eaf65bad9425072874a3fcf891e8b01713d3d9937e3e0d26c15",
        "rtmr2": "abf4924c07f5066f3dc6859844184344306aa3263817153dcaee85af97d23e0c0b96efe0731d8865a8747e51b9e351ac",
        "event_log": [
            {
                "event": "key-provider",
                "digest": "64ca939b8c3c74aab3c30966a788f7743951d54a936a711dd01422f003ff9df6666f3cc54975d2e4f35c829865583f0f"
            }
        ]
    }"#;

    // Write both input JSON files
    fs::write(assets.path().join("tcb_info_test1.json"), json1).unwrap();
    fs::write(assets.path().join("tcb_info_test2.json"), json2).unwrap();

    // Output Rust file path
    let out_file = out.path().join("measurements_generated.rs");

    // Run generator
    generate_measurements(assets.path(), &out_file).expect("generation failed");

    // Read generated file
    let generated = fs::read_to_string(&out_file).expect("read output");

    // -------- Expected byte arrays for JSON #1 --------
    let mrtd1 = "[240, 109, 253, 166, 220, 225, 207, 144, 77, 78, 43, 171, 29, 195, 112, 99, 76, 249, 92, 239, 162, 206, 178, 222, 46, 238, 18, 124, 147, 130, 105, 128, 144, 215, 164, 161, 62, 20, 197, 54, 236, 108, 156, 60, 143, 168, 112, 119]";
    let rtmr01 = "[230, 115, 190, 47, 112, 190, 239, 183, 11, 72, 166, 16, 158, 237, 71, 21, 215, 39, 13, 70, 131, 179, 191, 53, 111, 162, 95, 175, 191, 26, 167, 110, 57, 233, 18, 126, 110, 104, 140, 205, 169, 139, 218, 177, 212, 212, 127, 70]";
    let rtmr11 = "[167, 181, 35, 39, 141, 79, 145, 78, 232, 223, 14, 200, 12, 209, 195, 212, 152, 203, 241, 21, 43, 12, 94, 175, 101, 186, 217, 66, 80, 114, 135, 74, 63, 207, 137, 30, 139, 1, 113, 61, 61, 153, 55, 227, 224, 210, 108, 21]";
    let rtmr21 = "[36, 132, 127, 92, 90, 35, 96, 208, 48, 188, 79, 123, 133, 119, 206, 50, 232, 124, 77, 5, 20, 82, 201, 55, 233, 18, 32, 202, 182, 149, 66, 218, 239, 131, 67, 57, 71, 196, 146, 185, 194, 1, 24, 47, 201, 118, 155, 190]";
    let digest1 = "[116, 202, 147, 155, 140, 60, 116, 170, 179, 195, 9, 102, 167, 136, 247, 116, 57, 81, 213, 74, 147, 106, 113, 29, 208, 20, 34, 240, 3, 255, 157, 246, 102, 111, 60, 197, 73, 117, 210, 228, 243, 92, 130, 152, 101, 88, 63, 15]";

    // -------- Expected byte arrays for JSON #2 --------
    let mrtd2 = "[160, 109, 253, 166, 220, 225, 207, 144, 77, 78, 43, 171, 29, 195, 112, 99, 76, 249, 92, 239, 162, 206, 178, 222, 46, 238, 18, 124, 147, 130, 105, 128, 144, 215, 164, 161, 62, 20, 197, 54, 236, 108, 156, 60, 143, 168, 112, 119]";
    let rtmr02 = "[166, 115, 190, 47, 112, 190, 239, 183, 11, 72, 166, 16, 158, 237, 71, 21, 215, 39, 13, 70, 131, 179, 191, 53, 111, 162, 95, 175, 191, 26, 167, 110, 57, 233, 18, 126, 110, 104, 140, 205, 169, 139, 218, 177, 212, 212, 127, 70]";
    let rtmr12 = "[215, 181, 35, 39, 141, 79, 145, 78, 232, 223, 14, 200, 12, 209, 195, 212, 152, 203, 241, 21, 43, 12, 94, 175, 101, 186, 217, 66, 80, 114, 135, 74, 63, 207, 137, 30, 139, 1, 113, 61, 61, 153, 55, 227, 224, 210, 108, 21]";
    let rtmr22 = "[171, 244, 146, 76, 7, 245, 6, 111, 61, 198, 133, 152, 68, 24, 67, 68, 48, 106, 163, 38, 56, 23, 21, 61, 202, 238, 133, 175, 151, 210, 62, 12, 11, 150, 239, 224, 115, 29, 136, 101, 168, 116, 126, 81, 185, 227, 81, 172]";
    let digest2 = "[100, 202, 147, 155, 140, 60, 116, 170, 179, 195, 9, 102, 167, 136, 247, 116, 57, 81, 213, 74, 147, 106, 113, 29, 208, 20, 34, 240, 3, 255, 157, 246, 102, 111, 60, 197, 73, 117, 210, 228, 243, 92, 130, 152, 101, 88, 63, 15]";

    // -------- Assertions for entry #1 --------
    assert!(generated.contains(mrtd1), "JSON1 mrtd mismatch");
    assert!(generated.contains(rtmr01), "JSON1 rtmr0 mismatch");
    assert!(generated.contains(rtmr11), "JSON1 rtmr1 mismatch");
    assert!(generated.contains(rtmr21), "JSON1 rtmr2 mismatch");
    assert!(generated.contains(digest1), "JSON1 digest mismatch");

    // -------- Assertions for entry #2 --------
    assert!(generated.contains(mrtd2), "JSON2 mrtd mismatch");
    assert!(generated.contains(rtmr02), "JSON2 rtmr0 mismatch");
    assert!(generated.contains(rtmr12), "JSON2 rtmr1 mismatch");
    assert!(generated.contains(rtmr22), "JSON2 rtmr2 mismatch");
    assert!(generated.contains(digest2), "JSON2 digest mismatch");
}
```
This test doesn’t add any additional coverage and should be removed, IMHO.
```rust
#[test]
fn test_generate_measurements_with_exact_values() {
    // Create temporary input + output directories
    let assets = tempdir().expect("tmp assets");
    let out = tempdir().expect("tmp out dir");

    // Fake input JSON matching your real measurement values
    let fake_json = r#"{
        "mrtd": "f06dfda6dce1cf904d4e2bab1dc370634cf95cefa2ceb2de2eee127c9382698090d7a4a13e14c536ec6c9c3c8fa87077",
        "rtmr0": "e673be2f70beefb70b48a6109eed4715d7270d4683b3bf356fa25fafbf1aa76e39e9127e6e688ccda98bdab1d4d47f46",
        "rtmr1": "a7b523278d4f914ee8df0ec80cd1c3d498cbf1152b0c5eaf65bad9425072874a3fcf891e8b01713d3d9937e3e0d26c15",
        "rtmr2": "24847f5c5a2360d030bc4f7b8577ce32e87c4d051452c937e91220cab69542daef83433947c492b9c201182fc9769bbe",
        "event_log": [
            {
                "event": "key-provider",
                "digest": "74ca939b8c3c74aab3c30966a788f7743951d54a936a711dd01422f003ff9df6666f3cc54975d2e4f35c829865583f0f"
            }
        ]
    }"#;

    // Write fake JSON input file
    let json_path = assets.path().join("tcb_info_test.json");
    fs::write(&json_path, fake_json).expect("write fake json");

    // Output Rust file path
    let out_file = out.path().join("measurements_generated.rs");

    // Run the generator
    generate_measurements(assets.path(), &out_file)
        .expect("generation failed");

    // Read the generated Rust file
    let generated = fs::read_to_string(&out_file).expect("read output");

    // Expected byte arrays (computed from your hex)
    let expected_mrtd = "[240, 109, 253, 166, 220, 225, 207, 144, 77, 78, 43, 171, 29, 195, 112, 99, 76, 249, 92, 239, 162, 206, 178, 222, 46, 238, 18, 124, 147, 130, 105, 128, 144, 215, 164, 161, 62, 20, 197, 54, 236, 108, 156, 60, 143, 168, 112, 119]";
    let expected_rtmr0 = "[230, 115, 190, 47, 112, 190, 239, 183, 11, 72, 166, 16, 158, 237, 71, 21, 215, 39, 13, 70, 131, 179, 191, 53, 111, 162, 95, 175, 191, 26, 167, 110, 57, 233, 18, 126, 110, 104, 140, 205, 169, 139, 218, 177, 212, 212, 127, 70]";
    let expected_rtmr1 = "[167, 181, 35, 39, 141, 79, 145, 78, 232, 223, 14, 200, 12, 209, 195, 212, 152, 203, 241, 21, 43, 12, 94, 175, 101, 186, 217, 66, 80, 114, 135, 74, 63, 207, 137, 30, 139, 1, 113, 61, 61, 153, 55, 227, 224, 210, 108, 21]";
    let expected_rtmr2 = "[36, 132, 127, 92, 90, 35, 96, 208, 48, 188, 79, 123, 133, 119, 206, 50, 232, 124, 77, 5, 20, 82, 201, 55, 233, 18, 32, 202, 182, 149, 66, 218, 239, 131, 67, 57, 71, 196, 146, 185, 194, 1, 24, 47, 201, 118, 155, 190]";
    let expected_digest = "[116, 202, 147, 155, 140, 60, 116, 170, 179, 195, 9, 102, 167, 136, 247, 116, 57, 81, 213, 74, 147, 106, 113, 29, 208, 20, 34, 240, 3, 255, 157, 246, 102, 111, 60, 197, 73, 117, 210, 228, 243, 92, 130, 152, 101, 88, 63, 15]";

    // Assert exact matches appear in the generated code
    assert!(generated.contains(expected_mrtd), "mrtd mismatch");
    assert!(generated.contains(expected_rtmr0), "rtmr0 mismatch");
    assert!(generated.contains(expected_rtmr1), "rtmr1 mismatch");
    assert!(generated.contains(expected_rtmr2), "rtmr2 mismatch");
    assert!(generated.contains(expected_digest), "digest mismatch");
}
```
You should generally aim to write tests that are discriminative and check as much as possible. In this case, it would be ideal to assert the contents of the generated file. It’s not as hard as it may sound (see my suggestion below—you just need to apply it).
Also, you computed expected_mrtd, expected_rtmr0, expected_rtmr1, expected_rtmr2, and expected_digest separately and then copy-pasted the values into the code. It would be much better to derive them automatically from the JSON string above, which already contains all the necessary data. (I fixed this in the code suggestion below as well.)
```suggestion
#[test]
fn test_generate_measurements_with_exact_values() {
    let assets = tempdir().unwrap();
    let out = tempdir().unwrap();
    let fake_json = r#"{
        "mrtd": "f06dfda6dce1cf904d4e2bab1dc370634cf95cefa2ceb2de2eee127c9382698090d7a4a13e14c536ec6c9c3c8fa87077",
        "rtmr0": "e673be2f70beefb70b48a6109eed4715d7270d4683b3bf356fa25fafbf1aa76e39e9127e6e688ccda98bdab1d4d47f46",
        "rtmr1": "a7b523278d4f914ee8df0ec80cd1c3d498cbf1152b0c5eaf65bad9425072874a3fcf891e8b01713d3d9937e3e0d26c15",
        "rtmr2": "24847f5c5a2360d030bc4f7b8577ce32e87c4d051452c937e91220cab69542daef83433947c492b9c201182fc9769bbe",
        "event_log": [
            {
                "event": "key-provider",
                "digest": "74ca939b8c3c74aab3c30966a788f7743951d54a936a711dd01422f003ff9df6666f3cc54975d2e4f35c829865583f0f"
            }
        ]
    }"#;
    let json_path = assets.path().join("tcb_info_test.json");
    fs::write(&json_path, fake_json).unwrap();
    let out_file = out.path().join("measurements_generated.rs");
    generate_measurements(assets.path(), &out_file).unwrap();
    let generated = fs::read_to_string(&out_file).unwrap();
    let tcb: Value = serde_json::from_str(fake_json).unwrap();
    let expected = format!(
        concat!(
            "// AUTO-GENERATED FILE. DO NOT EDIT.\n",
            "use attestation::measurements::*;\n",
            "pub const EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] = &[\n",
            "\n",
            "    ExpectedMeasurements {{ rtmrs: Measurements {{ mrtd: {:?}, rtmr0: {:?}, ",
            "rtmr1: {:?}, rtmr2: {:?} }}, key_provider_event_digest: {:?}, }},\n",
            "];\n"
        ),
        decode_measurement(&tcb, "mrtd").unwrap(),
        decode_measurement(&tcb, "rtmr0").unwrap(),
        decode_measurement(&tcb, "rtmr1").unwrap(),
        decode_measurement(&tcb, "rtmr2").unwrap(),
        extract_key_provider_digest(&tcb).unwrap()
    );
    assert_eq!(generated, expected);
}
```
Also, one more remark: please make sure to remove AI-generated comments, as they’re often trivial or written in the second person.
netrome left a comment
This is a nice proof-of-concept, but I have a feeling it can be greatly simplified. I'll try to see if I can find an alternative, simpler approach or convince myself that the current approach is the appropriate one.
One concrete change I'd like to see is that the generated code should be possible to include using normal imports (e.g. use statements). I'll circle back after playing some more with code generation/proc-macros.
```rust
include!(concat!(env!("OUT_DIR"), "/measurements_generated.rs"));
```
I think we should generate a crate that can be included using normal imports instead of using include!(...).
Before addressing this, please see my alternative proposal in: #1659
As promised, here's an alternative approach based on proc-macros: #1659. I think this is more idiomatic, and would be easier to use and maintain. Curious to hear what you think.
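For readers following along, a purely hypothetical sketch of what a proc-macro call site could look like; neither the crate nor the macro name below is taken from this PR or from #1659:

```rust
// Hypothetical: a function-like proc-macro reads assets/tcb_info*.json at expansion
// time and expands to a const slice, so no OUT_DIR include! is needed.
use attestation::measurements::ExpectedMeasurements;

static EXPECTED_MEASUREMENTS: &[ExpectedMeasurements] =
    mpc_attestation_macros::embed_tcb_measurements!("assets");
```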
fixes #1634
This PR removes all runtime JSON parsing from the contract.
Instead, TCB measurements (MRTD, RTMR0–2, key-provider digest) are extracted at build time from readable JSON files and compiled into static Rust structures.
This makes attestation verification: