
Conversation

@SimonRastikian (Contributor) commented Dec 1, 2025

Allows modifying the benchmark parameters Iterations and Latency.

Runs the schemes with a configurable latency, expressed in milliseconds.
Contributes to the latency sections of #140 and #134, and naturally to #136.
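
The statics in the diff below (`*LATENCY`, `*SAMPLE_SIZE`) suggest env-var-driven configuration, since criterion offers no custom CLI flags. A minimal sketch of that pattern; the variable names and defaults here are assumptions for illustration, not confirmed by this PR:

```rust
use std::sync::LazyLock;

// Hypothetical lazily-initialized benchmark parameters, read once from the
// environment (criterion provides no mechanism for custom CLI flags).
static LATENCY: LazyLock<u64> = LazyLock::new(|| {
    std::env::var("LATENCY")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(0) // assumed default: no simulated latency
});

static SAMPLE_SIZE: LazyLock<usize> = LazyLock::new(|| {
    std::env::var("SAMPLE_SIZE")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(10) // criterion rejects sample sizes below 10
});
```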

codecov-commenter commented Dec 1, 2025

Codecov Report

❌ Patch coverage is 0% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 87.86%. Comparing base (cb3ec42) to head (224cb01).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/test_utils/protocol.rs | 0.00% | 4 Missing ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main     #240      +/-   ##
==========================================
- Coverage   87.90%   87.86%   -0.04%
==========================================
  Files          51       51
  Lines        9779     9783       +4
  Branches     9779     9783       +4
==========================================
  Hits         8596     8596
- Misses        733      737       +4
  Partials      450      450
```


Base automatically changed from simon/simulation to main December 5, 2025 13:04
@gilcu3 (Contributor) commented Dec 5, 2025

@SimonRastikian Whenever you can, please rebase this on main and ping me to review it.

@netrome (Collaborator) left a comment

I don't understand how the current sleep call helps us simulate network latency. This looks like the wrong place to sleep in my view. I'd like to get more clarification on that when you have the time.

```rust
            out = Some(output);
        }
    }
    std::thread::sleep(duration);
```
Collaborator commented:

Why are we sleeping after running the protocol? I was under the impression that we wanted to inject latency in the networking layer, i.e. add a delay between sending and receiving messages.

In what sense does this help us simulate network latency?
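
A minimal sketch of the network-layer alternative being described here, i.e. delaying each message between send and receive. `Protocol` and `Participant` are the repo's types; the wrapper itself and the `Vec<u8>` payload type are assumptions for illustration:

```rust
// Hypothetical wrapper (not in this PR): inject latency at the point a
// message is delivered, rather than sleeping once after the protocol runs.
struct DelayedProtocol<T> {
    inner: Box<dyn Protocol<Output = T>>,
    one_way_latency: std::time::Duration,
}

impl<T> DelayedProtocol<T> {
    fn message(&mut self, from: Participant, data: Vec<u8>) {
        // Simulate the time the message spends in flight before the inner
        // protocol sees it.
        std::thread::sleep(self.one_way_latency);
        self.inner.message(from, data);
    }
}
```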

@SimonRastikian (Author) commented:

In a prior implementation, I tried injecting latency in the networking layer, but that produced very strange numbers in triple generation.
Chelsea's idea was the approach used here, which allows some simulation of the latency.
I am still not 100% convinced either, so I was thinking of investigating further: see #247.

@SimonRastikian (Author) commented:

I suggest we merge this version of the latency implementation as a simulation. Once I have a clearer idea, and hopefully a better solution than what was implemented earlier, I can open a new PR.

Note that the previous implementation gave good numbers for presign and sign, but not for triple generation. So I still prefer something right but simulated over something not fully right but "real".

@gilcu3 (Contributor) left a comment

I have trouble understanding what we achieve with this PR.

  • It is not clear why we need a sample-size parameter, instead of using criterion's default or a fixed amount of time as before.

  • It also looks like the benchmark latency measurement is not a measurement but an estimation, in which case I think it belongs in the docs, not in the criterion benchmarks.

Comment on lines 2 to 4

```
source: src/ecdsa/ot_based_ecdsa/triples/generation.rs
assertion_line: 1197
expression: result
```
Contributor commented:

please remove this, same as in the last PR

Comment on lines 2 to 5

```
source: src/ecdsa/ot_based_ecdsa/presign.rs
assertion_line: 268
expression: result
---
```
Contributor commented:

same

Comment on lines 2 to 4

```
source: src/ecdsa/robust_ecdsa/presign.rs
assertion_line: 408
expression: result
```
Contributor commented:
same

Comment on lines 172 to 203

```rust
///
/// Accepts a network latency in milliseconds, e.g. 100 ms.
pub fn run_simulated_protocol<T>(
    real_participant: Participant,
    mut real_prot: Box<dyn Protocol<Output = T>>,
    simulator: Simulator,
    network_latency: u64,
    round_number: u64,
) -> Result<T, ProtocolError> {
    // Total simulated delay: one latency period per protocol round.
    let duration = std::time::Duration::from_millis(network_latency * round_number);
    if simulator.real_participant() != real_participant {
        return Err(ProtocolError::AssertionFailed(
            "The given real participant does not match the simulator's internal real participant"
                .to_string(),
        ));
    }

    // Fill the real participant's buffer with the recorded messages.
    for (from, data) in simulator.get_recorded_messages() {
        real_prot.message(from, data);
    }

    // Poke the protocol until it returns its output.
    let mut out = None;
    while out.is_none() {
        let action = real_prot.poke()?;
        if let Action::Return(output) = action {
            out = Some(output);
        }
    }
    // Sleep once for the entire estimated network delay.
    std::thread::sleep(duration);
    out.ok_or_else(|| ProtocolError::Other("out is None".to_string()))
}
```
Contributor commented:

I believe adding this just makes the benchmark artificially slower while giving us no extra information. It could also be misleading to users, who might believe we are actually measuring latency rather than simply estimating it based on the number of rounds. I do think these latency values are useful and have a place in the docs, where we explain the results of the benchmarks. But we don't need to put them in the code, as the computation is trivial to do by hand and does not depend on the implementation.
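
To make that concrete (with hypothetical numbers): the sleep is computed up front as `network_latency * round_number`, so a 6-round protocol at 100 ms per round adds exactly 6 × 100 ms = 600 ms to every sample, independent of anything criterion measures.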

Comment on lines +38 to +48

```diff
     let latency = *LATENCY;
     let max_malicious = *MAX_MALICIOUS;
     let mut group = c.benchmark_group("sign");
     group.measurement_time(std::time::Duration::from_secs(300));
     group.sample_size(*SAMPLE_SIZE);

     let preps = robust_ecdsa_prepare_presign(num, &mut rng);
     let result = run_protocol(preps.protocols).expect("Prepare sign should not");
     let pk = preps.key_packages[0].1.public_key;

     group.bench_function(
-        format!("robust_ecdsa_sign_naive_MAX_MALICIOUS_{max_malicious}_PARTICIPANTS_{num}"),
+        format!("robust_ecdsa_sign_naive_MAX_MALICIOUS_{max_malicious}_PARTICIPANTS_{num}_LATENCY_{latency}"),
```
Contributor commented:

Before accepting or refusing this change, I need to know the idea behind including it. Before, we had a constant time duration for the benchmark, and now we are letting users set the sample size. As the parameter input method criterion forces on us (environment variables) is not great, I would prefer to remain as before (constant total time), move to a constant number of samples, or use criterion's default, instead of making the implementation more verbose.

If we have a good reason to add sample size, let me know.
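
For reference, a sketch of how those three alternatives map onto criterion's `BenchmarkGroup` configuration (the same methods already used in this PR; the sample count 50 is an arbitrary placeholder):

```rust
use criterion::Criterion;

fn configure_group(c: &mut Criterion) {
    let mut group = c.benchmark_group("sign");
    // (a) constant total measurement time, as the benchmarks had before:
    group.measurement_time(std::time::Duration::from_secs(300));
    // (b) or a fixed number of samples:
    group.sample_size(50);
    // (c) or set neither and keep criterion's defaults
    //     (100 samples within a ~5 s measurement window).
    group.finish();
}
```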



Development

Successfully merging this pull request may close these issues.

[Benchmark] [Robust-ECDSA] Generic Robust-Based ECDSA Benchmark Issue
[Benchmark] [OT-ECDSA] Generic OT-Based ECDSA Benchmark Issue
