feat: support for robust ecdsa signature in the mpc-node #1645
base: main
Conversation
CKDTaskId(CKDTaskId),
}

pub fn participants_from_triples(
this function was simply misplaced before
Secp256k1(threshold_signatures::ecdsa::KeygenOutput),
Ed25519(threshold_signatures::eddsa::KeygenOutput),
Bls12381(threshold_signatures::confidential_key_derivation::KeygenOutput),
V2Secp256k1(threshold_signatures::ecdsa::KeygenOutput),
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
this is not exactly optimal, but we had the same situation with Ckd, and this was the best we could come up with then
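For readers skimming the thread, here is a minimal, self-contained sketch of the variant layout being discussed. The stand-in structs replace the real `threshold_signatures::*::KeygenOutput` types from the quoted diff, and the dispatch function is hypothetical, just to show why the new variant carries the same inner type as `Secp256k1`:

```rust
// Minimal sketch; `EcdsaKeygenOutput` etc. stand in for the real
// `threshold_signatures::*::KeygenOutput` types quoted in the diff above.
struct EcdsaKeygenOutput;
struct EddsaKeygenOutput;
struct BlsKeygenOutput;

// `V2Secp256k1` reuses the same inner keygen output as `Secp256k1`; only the
// protocol driven from it differs (Cait-Sith ECDSA vs. robust ECDSA).
enum KeygenOutput {
    Secp256k1(EcdsaKeygenOutput),
    Ed25519(EddsaKeygenOutput),
    Bls12381(BlsKeygenOutput),
    V2Secp256k1(EcdsaKeygenOutput),
}

fn protocol_name(out: &KeygenOutput) -> &'static str {
    match out {
        KeygenOutput::Secp256k1(_) => "cait-sith ecdsa",
        KeygenOutput::V2Secp256k1(_) => "robust ecdsa",
        KeygenOutput::Ed25519(_) => "eddsa",
        KeygenOutput::Bls12381(_) => "confidential key derivation",
    }
}

fn main() {
    println!("{}", protocol_name(&KeygenOutput::V2Secp256k1(EcdsaKeygenOutput)));
}
```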
let participants = match client.select_random_active_participants_including_me(
    threshold,
    &running_participants,
) {
this was probably the only non-trivial part of the PR: which participants should be selected to compute a presignature, given that exactly the same set will later be used for the corresponding signature? I opted to do exactly what we currently do for triples in Cait-Sith
Yep, we do have the robustness bug so right now we use the same set for both signatures and presignatures. Down the line we need to figure out how to leverage the robustness properties.
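To make the point above concrete, a hypothetical sketch (the struct and helper names are illustrative, not the node's actual types): the participant set chosen when generating a presignature is stored with it and reused verbatim when the signature is computed.

```rust
use std::collections::BTreeSet;

type ParticipantId = u32;

// Hypothetical container: the set picked via something like
// `select_random_active_participants_including_me(threshold, &running)` is
// remembered alongside the presignature it produced.
struct Presignature {
    id: u64,
    participants: BTreeSet<ParticipantId>,
}

// Until the robustness properties are actually leveraged, the later signature
// simply runs over the presignature's participant set, unchanged.
fn signers_for(presignature: &Presignature) -> &BTreeSet<ParticipantId> {
    &presignature.participants
}

fn main() {
    let presignature = Presignature {
        id: 1,
        participants: BTreeSet::from([0, 1, 2]),
    };
    println!(
        "presignature {} signers: {:?}",
        presignature.id,
        signers_for(&presignature)
    );
}
```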
e
);
// that should not happen often, so sleeping here is okay
tokio::time::sleep(Duration::from_millis(100)).await;
we should certainly open an issue for removing these sleep ops; I'm not doing it here just in case something actually breaks
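As one possible direction for that issue (not what the node does today), the polling sleep could be replaced by an async notification; a minimal sketch with `tokio::sync::Notify`:

```rust
use std::sync::Arc;
use tokio::sync::Notify;

#[tokio::main]
async fn main() {
    let ready = Arc::new(Notify::new());
    let notify_producer = ready.clone();

    // Producer side: signal once the missing asset (e.g. a presignature or
    // triple) becomes available, instead of letting consumers poll for it.
    tokio::spawn(async move {
        // ... generate the asset ...
        notify_producer.notify_one();
    });

    // Consumer side: wait for the notification rather than looping with
    // `tokio::time::sleep(Duration::from_millis(100))`.
    ready.notified().await;
    println!("asset available, retrying the operation");
}
```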
netrome left a comment
Shallow skimming. Some nits, but so far so good. Happy to go deeper once the PR is ready.
crates/node/src/tests/resharing.rs (outdated)

// Test a simple resharing of one node joining a cluster of 4 nodes.
#[tokio::test]
async fn test_key_resharing_simple() {
it seems I made this test too heavy; it passes locally but fails in CI, so I'll need to improve it somehow
could not improve it; asset generation seems too heavy for GitHub CI, as the test that adds the new domain passes locally but not there
impl PortSeed {
    // The base port number used, hoping the OS is not using ports in this range
    pub const BASE_PORT: u16 = 10000;
    // This constant must be equal to the total number of ports defined below
    pub const TOTAL_DEFINED_PORTS: u16 = 19;
    // Maximum number of nodes that can be handled without port collisions
    pub const MAX_NODES: u16 = 10;
    // Maximum number of cases that can be handled without port collisions
    pub const MAX_CASES: u16 = 4;
    // Each function below corresponds to a port per node. Each defines an offset,
    // and all offsets must be different
    pub const TOTAL_PORTS_PER_NODE: u16 = 3;
changes in this file were needed to be able to use rstest with different domains and still avoid port collisions
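For what it's worth, the collision-avoidance scheme I read into these constants looks roughly like the sketch below; the exact formula in the repository may differ, so treat it as an illustration only:

```rust
// Hypothetical port derivation: every (case, node, port-role) triple maps to a
// distinct port as long as the indices stay below the bounds above.
const BASE_PORT: u16 = 10000;
const MAX_NODES: u16 = 10;
const TOTAL_PORTS_PER_NODE: u16 = 3;

fn port(case_index: u16, node_index: u16, port_offset: u16) -> u16 {
    assert!(node_index < MAX_NODES);
    assert!(port_offset < TOTAL_PORTS_PER_NODE);
    BASE_PORT
        + case_index * MAX_NODES * TOTAL_PORTS_PER_NODE
        + node_index * TOTAL_PORTS_PER_NODE
        + port_offset
}

fn main() {
    // Two rstest cases reusing the same node index never collide.
    assert_ne!(port(0, 3, 1), port(1, 3, 1));
    println!("{}", port(1, 3, 1));
}
```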
- const DEFAULT_MAX_PROTOCOL_WAIT_TIME: std::time::Duration = std::time::Duration::from_secs(30);
- const DEFAULT_MAX_SIGNATURE_WAIT_TIME: std::time::Duration = std::time::Duration::from_secs(30);
+ const DEFAULT_MAX_PROTOCOL_WAIT_TIME: std::time::Duration = std::time::Duration::from_secs(60);
+ const DEFAULT_MAX_SIGNATURE_WAIT_TIME: std::time::Duration = std::time::Duration::from_secs(60);
unfortunately the old timeout is now too small
given the needs of robust ecdsa (more nodes), I made a module with a shared cluster for those tests. Making them work in the existing shared cluster seems impossible at the moment: many tests would simply fail with timeouts, and it would also make them considerably slower.
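Purely as an illustration of the shared-cluster idea (the types and the node count below are hypothetical, not the crate's actual test API), something along these lines with `tokio::sync::OnceCell`:

```rust
use tokio::sync::OnceCell;

struct TestCluster {
    nodes: usize,
}

// Stand-in for whatever actually boots the mpc-node test cluster.
async fn start_cluster(nodes: usize) -> TestCluster {
    TestCluster { nodes }
}

static SHARED_CLUSTER: OnceCell<TestCluster> = OnceCell::const_new();

// Started once per test module and reused by every robust-ECDSA test, so the
// expensive setup (more nodes, keygen, asset generation) is paid only once.
async fn shared_cluster() -> &'static TestCluster {
    // Node count is arbitrary here.
    SHARED_CLUSTER.get_or_init(|| start_cluster(6)).await
}

#[tokio::main]
async fn main() {
    assert_eq!(shared_cluster().await.nodes, 6);
}
```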
  def kill_all(self):
-     for node in self.mpc_nodes:
-         node.kill(False)
+     with ThreadPoolExecutor(max_workers=len(self.mpc_nodes)) as executor:
+         executor.map(lambda node: node.kill(False), self.mpc_nodes)
we were losing many seconds in each test because of this, as the underlying implementation waits for 5 seconds for the process to die
TRIPLES_TO_BUFFER = 20
PRESIGNATURES_TO_BUFFER = 10
these are sane values for the tests in this file
Closes #1640