Adding Sizes #241
base: simon/flexibility-in-bench
Conversation
Codecov Report ❌ Patch coverage is

Additional details and impacted files

```
@@               Coverage Diff                @@
##   simon/flexibility-in-bench    #241  +/- ##
===============================================
- Coverage               87.83%  87.80%  -0.03%
===============================================
  Files                      50      50
  Lines                    9757    9760      +3
  Branches                 9757    9760      +3
===============================================
  Hits                     8570    8570
- Misses                    737     740      +3
  Partials                  450     450
```
gilcu3 left a comment:
I couldn't figure out whether we are measuring the number of bytes sent by the participant being simulated. If we are not, we should certainly add it.
The rest looks good to me, except that we should use a proper crate instead of calculating statistics manually. I added a suggestion.
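
A minimal sketch of what such a measurement could look like, assuming the simulator records its outgoing messages in a map analogous to `view` (the `sent` field and the method name here are hypothetical):

```rust
// Hypothetical mirror of `get_size`: assumes the simulator also stores
// each outgoing message body in a `sent` map, keyed like `view`.
pub fn get_sent_size(&self) -> usize {
    self.sent.iter().map(|(_, s)| s.len()).sum()
}
```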
```rust
#[allow(clippy::cast_precision_loss)]
/// Analyzes the size of the data received by a participant across the entire protocol
/// # Panics
```
As we discussed in a previous PR, we should not need this "Panics" annotation in benchmarking code; we can make it a global exception, since benchmarking code panicking is not a problem (in the same sense as tests).
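
If the `# Panics` sections are only there to satisfy Clippy's `missing_panics_doc` lint, one way to make the exception global (a sketch, assuming that lint is the trigger) is a single crate- or module-level allow in the benchmarking code:

```rust
// At the root of the benchmarking crate/module: silence the lint once
// instead of documenting every function that may panic.
#![allow(clippy::missing_panics_doc)]
```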
```rust
let min = *sizes.iter().min().expect("Minimum should exist");
let max = *sizes.iter().max().expect("Maximum should exist");
let avg = sizes.iter().sum::<usize>() as f64 / sizes.len() as f64;

// median
sizes.sort_unstable();
let mid = sizes.len() / 2;
let median = if sizes.len() % 2 == 0 {
    (sizes[mid - 1] as f64 + sizes[mid] as f64) / 2.0
} else {
    sizes[mid] as f64
};

// variance & standard deviation
let mean = sizes.iter().sum::<usize>() as f64 / sizes.len() as f64;
let squared_sum = sizes
    .iter()
    .map(|v| {
        let diff = *v as f64 - mean;
        diff * diff
    })
    .sum::<f64>();
let variance = squared_sum / sizes.len() as f64;
let std_dev = variance.sqrt();
```
I believe we can use a proper crate for all of this. I recommend using the statrs crate, like:
```rust
use statrs::statistics::{Data, Median};

// The median of an empty data set is NaN.
let x: [f64; 0] = [];
let x = Data::new(x);
assert!(x.median().is_nan());

// The median of an odd-length data set is its middle element.
let y = [0.0, 3.0, -2.0];
let y = Data::new(y);
assert_eq!(y.median(), 0.0);
```
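
Extending that suggestion to all the statistics computed in the diff, a sketch assuming statrs's `Data` type together with its `Min`, `Max`, `Median`, and `Distribution` traits (note that `std_dev` here may use the sample denominator rather than the population formula in the manual code, so the exact variant is worth checking in the docs):

```rust
use statrs::statistics::{Data, Distribution, Max, Median, Min};

// Convert the collected sizes to f64 once, then let statrs do the math.
let data: Vec<f64> = sizes.iter().map(|&s| s as f64).collect();
let data = Data::new(data);

let min = data.min();
let max = data.max();
let mean = data.mean();       // Option<f64>
let std_dev = data.std_dev(); // Option<f64>
let median = data.median();
```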
```rust
pub fn get_size(&self) -> usize {
    self.view.iter().map(|(_, s)| s.len()).sum()
}
```
We need a more descriptive name for this; the "size" of a Simulator is pretty hard to guess.
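
For example, something along these lines (the name is only a suggestion, and the doc comment spells out the assumption that `view` maps senders to received message bodies):

```rust
/// Total number of bytes this participant received across the entire
/// protocol run (sums the lengths of all message bodies in `view`).
pub fn received_bytes(&self) -> usize {
    self.view.iter().map(|(_, s)| s.len()).sum()
}
```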
This PR adds analysis of the size of the data sent over the wire in each of the different scheme protocols.
Closes #140 #134