Dkg, new HashToCurve, removed old circuits #21

Open · wants to merge 36 commits into base: main
Conversation

@Scratch-net (Contributor) commented Mar 11, 2025

Summary by CodeRabbit

  • New Features

    • Launched a distributed key generation workflow for secure key sharing among nodes.
    • Introduced new client and server implementations for the Distributed Key Generation (DKG) protocol.
    • Added a new method for creating local shares in the DKG process, enhancing key management.
    • Implemented HTTP endpoints for the DKG server to facilitate node registration and share management.
  • Refactor

    • Streamlined cryptographic circuit implementations and reorganized package structures for enhanced clarity and performance.
    • Updated circuit references and imports to reflect new naming conventions for AES and ChaCha encryption.
  • Tests

    • Expanded test coverage to validate cryptographic operations and distributed key generation reliably.
    • Introduced new tests for the DKG process and updated existing tests to align with recent changes.
  • Chores

    • Upgraded Go version and refreshed dependencies to boost stability and security.
  • Bug Fixes

    • Improved error handling in secret generation to prevent silent failures.


coderabbitai bot commented Mar 11, 2025

Walkthrough

This pull request introduces comprehensive updates and refactoring across the codebase. Notably, it adds new Distributed Key Generation (DKG) functionality with dedicated client and server implementations, including a new DKG struct and associated methods. The changes also refactor OPRF and cryptographic circuit implementations for AES and ChaCha protocols, including package renaming and type adjustments. Several legacy ChaChaV2/V3 files have been removed, and dependency versions in the Go module have been updated.

Changes

File(s) Change Summary
gnark/utils/toprf.go Added CreateLocalSharesDKG; updated TOPRFGenerateSharedKey to panic on error; removed redundant curve initialization.
gnark/circuits/toprf/testdata.go Modified PrepareTestData to call CreateLocalSharesDKG and updated the Params struct (added Counter and X fields).
gnark/circuits/toprf/toprf.go Changed constant Threshold from 1 to 2, affecting array sizes in Params.
gnark/utils/dkg.go Introduced new DKG struct with methods (NewDKG, GeneratePolynomials, GenerateShares, VerifyShares, ComputeFinalKeys, ReconstructMasterPublicKey) and helper functions.
gnark/utils/utils_test.go Updated expected output in TestOPRF and added new TestTOPRFDKG for multiple evaluations.
gnark/utils/oprf.go Changed OPRFRequest.X type from fr.Element to *big.Int; added CreateSecretElements; updated HashToPointPrecompute return type.
gnark/circuits/toprf/toprf_test.go Removed Y field from HasherCircuit and updated the call to hashToPoint along with test assertions.
gnark/libraries/prover/impl/provers.go, gnark/libraries/verifier/impl/verifiers.go Updated import paths and references for AES and ChaCha circuits (renamed packages and adjusted instantiations).
gnark/circuits/aes/* Renamed packages from aes_v2 to aes across files (e.g. aes.go, aes128_test.go, aes256_test.go, common.go, tables.go).
gnark/circuits/aes_oprf/* Renamed packages from aes_v2_oprf to aes_oprf and updated import paths and array declarations.
gnark/circuits/chacha/* Refactored ChaChaCircuit implementation: updated types (from uints.U32 to frontend.Variable), added helper arithmetic functions, and streamlined the Define method.
gnark/circuits/chachaV2/*, gnark/circuits/chachaV3/* Removed obsolete test and circuit files for legacy ChaCha implementations.
gnark/circuits/chacha_oprf/* Renamed package from chachaV3_oprf to chacha_oprf and updated import paths and constants.
gnark/expandergen/expandergen.go Updated import from chachaV3 to chacha and removed an unused CURVE variable.
gnark/keygen/keygen.go Revised algMappings to initialize circuits using updated package names.
gnark/libraries/core_test.go Modified variable naming and response handling in TestFullChaCha20OPRF.
gnark/dkg/client/client.go New DKG client implementation with HTTP helpers and methods for registration, commitments, shares, and public shares.
gnark/dkg/server/server.go New DKG server implementation using Echo; exposes endpoints for registration, commitments, shares, and public shares with graceful shutdown.
gnark/go.mod Upgraded Go version and toolchain; added new dependencies and updated existing dependency versions.
gnark/dkg/types.go New file defining data structures for DKG requests and responses.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server

    Client->>Server: POST /dkg/register (Submit Public Key)
    Server-->>Client: RegisterResponse

    Client->>Server: POST /dkg/commitments (Send Commitments)
    Server-->>Client: CommitmentsResponse

    Client->>Server: POST /dkg/shares (Submit Shares)
    Server-->>Client: SharesResponse

    Client->>Server: POST /dkg/public_shares (Send Public Share)
    Server-->>Client: PublicSharesResponse



@coderabbitai bot left a comment

Caution

Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.

Actionable comments posted: 12

🔭 Outside diff range comments (1)
gnark/utils/toprf.go (1)

128-133: ⚠️ Potential issue

Duplicate code block detected

There appears to be a duplication of the code block for generating the shared key and public key. Lines 128-133 repeat the same functionality as lines 121-127, which will result in the first generated values being overwritten.

 func TOPRFGenerateSharedKey(nodes, threshold int) *SharedKey {

 	curve := twistededwards.GetEdwardsCurve()
 	sk, err := rand.Int(rand.Reader, TNBCurveOrder)
 	if err != nil {
 		panic(err)
 	}
 	serverPublic := &twistededwards.PointAffine{}
 	serverPublic.ScalarMultiplication(&curve.Base, sk) // G*sk

-	sk, err := rand.Int(rand.Reader, TNBCurveOrder)
-	if err != nil {
-		panic(err)
-	}
-	serverPublic := &twistededwards.PointAffine{}
-	serverPublic.ScalarMultiplication(&curve.Base, sk) // G*sk
🧹 Nitpick comments (37)
gnark/circuits/aes_oprf/aes128_test.go (1)

9-9: Consider renaming the import alias for clarity.
The alias aes_v2 might be confusing now that the package has been renamed to aes. Renaming the alias to something like aesCircuit could help avoid confusion.

- aes_v2 "gnark-symmetric-crypto/circuits/aes"
+ aesCircuit "gnark-symmetric-crypto/circuits/aes"
gnark/utils/oprf.go (2)

72-81: Remove unused local variable or unify usage of x.
Currently, x := new(big.Int) is read from H.X but never used in constructing the request (which relies on origX). Removing that step or reconciling it with the final X field can reduce confusion.

-   x := new(big.Int)
-   H.X.BigInt(x)
    ...
    X:              *origX,
    Y:              H.Y,

162-205: Offer configurability for the 256-iteration limit and consider handling both sqrt signs.
The try-and-increment loop is correct for hashing to a curve, but the fixed 256 attempts could be made configurable. In addition, you might want to explicitly handle or discard the alternative square root (negative sign) in a consistent manner, if required by your protocol.

gnark/circuits/chacha/round.go (2)

10-10: Document the function signature
Consider adding a brief comment explaining that QR (Quarter Round) mutates the state in-place by applying the ChaCha quarter-round operations at positions (i, j, k, l). This would aid maintainers.


41-45: xor32 function name is consistent
The three-argument signature (a, b, c) clarifies the in-place result in c. Consider renaming c to out for extra clarity.

gnark/circuits/aes/common.go (4)

49-77: Circuit Definition in Define():
The Define method of AESBaseCircuit is well-organized. It iterates over the blocks correctly while updating and asserting the counter's value. One point to verify is the bit-ordering when constructing the IV (via the counter); please double-check that its ordering aligns with the encryption tests and standard CTR mode expectations.


87-95: VariableXor Implementation:
The implementation of bitwise XOR by converting the variables to binary, performing per-bit XOR, and reconstructing the value is clear and correct. Depending on performance considerations, you may later evaluate if a more direct bit-level operation can be applied.


97-127: Subword XOR Combination (XorSubWords):
Combining lookup results from the four tables via sequential XOR operations is correctly implemented. Please verify that the lookup outputs are exactly 32 bits and that the conversion back from binary preserves the intended word boundaries.


150-191: AES Encryption Rounds:
The Encrypt method organizes the AES rounds—including key addition, round transformations via XOR with subword lookups, and the final round key addition—in a clear manner. For future maintainability, consider adding inline comments that delineate the stages (i.e. initial AddRoundKey, main rounds, and final round) so that reviewers can quickly verify compliance with AES standards.

gnark/circuits/toprf/testdata.go (1)

74-74: Consider limiting master public key printing in production.

Although a public key is not secret, consider removing or gating this console output outside of testing.

gnark/circuits/toprf/toprf_test.go (1)

34-34: Function call excludes Y; verify necessity.

Removing the Y parameter suggests it's no longer needed, but consider removing the unused Y field from HasherCircuit to keep the struct clean.

 type HasherCircuit struct {
   Data            [2]frontend.Variable
   DomainSeparator frontend.Variable
   Counter         frontend.Variable
   X               frontend.Variable
-  Y               frontend.Variable
 }
gnark/circuits/toprf/toprf.go (2)

198-202: Make cofactor clearing more explicit
Doubling thrice to multiply the point by 8 is correct, but consider defining a constant (e.g., COFACTOR = 8) or a helper function to improve clarity.

- point = curve.Double(point) // p2
- point = curve.Double(point) // p4
- point = curve.Double(point) // p8
+ // Possibly:
+ for i := 0; i < 3; i++ {
+   point = curve.Double(point)
+ }

204-204: Verify both coordinates when cofactor clearing
Only checking point.Y might overlook discrepancies in point.X. Consider asserting both coordinates for greater certainty.

- api.AssertIsEqual(point.Y, yCleared)
+ api.AssertIsEqual(point.X, xOrig)
+ api.AssertIsEqual(point.Y, yCleared)
gnark/utils/utils_test.go (2)

40-42: Reduced node count and new share creation logic
Switching from larger values to nodes := 20 and threshold := 10 along with TOPRFCreateSharesDKG can speed up tests. Validate that these parameters still match production-level requirements.


51-54: Potential nondeterministic test
PickRandomIndexes may produce inconsistent results across runs. Consider seeding or making it deterministic for reliably reproducible tests.

gnark/libraries/prover/impl/provers.go (2)

84-85: Avoid panicking on invalid input length
Using log.Panicf forces a hard stop. Consider returning an error or applying a higher-level error-handling mechanism to allow graceful recovery.


223-224: Consistent error handling for input length
Similar to earlier checks, prefer returning an error over panic to handle invalid inputs gracefully in production.

gnark/circuits/chacha/chacha_test.go (6)

43-43: Clarify quarter-round indices
Calling QR(api, &c.In, 0, 1, 2, 3) is valid but might be more readable if the indices are named or documented for maintainability.


57-58: Zero-initializing arrays
The explicit zero fill ensures a clean baseline, but if overwritten later, consider removing it to simplify test code.


68-71: Struct naming consistency
roundCircuit parallels qrBlock. Ensure consistent naming patterns across circuit tests for easier comprehension.


94-98: Additional test coverage
Including known data for TestRound is beneficial. Consider introducing more edge cases to fully exercise arithmetic boundaries.


130-131: Hardcoded test counter
Using a fixed counter is valid for these tests, but consider parameterizing it for more robust coverage of various counters.


133-152: Prune large commented-out code
If these lines are no longer relevant, remove them for clarity. If you plan to revive them, update references and finalize them as active tests.

gnark/dkg/wip.go (5)

46-73: Consider domain separation for extended security.
While initializing the curve and constants is fine for a demo, incorporating a unique domain separator (e.g., protocol name, version) into any hash-based operations can strengthen security and help avoid replay or cross-protocol collisions.


92-137: Include participant or protocol-specific data in challenge hash.
When computing the DLEQ challenge (lines 115–128), consider adding participant identifiers or protocol metadata to the hash. This further mitigates risks of accidental collisions or reuse under different contexts.


139-156: Consider caching exponent values for performance.
In the inner loop (lines 145–151), you repeatedly compute x^i. If performance is critical or threshold/participant counts grow, caching or precomputing these powers could reduce computational overhead.


158-220: Gather all verification failures for better diagnostics.
Currently, an error is returned on the first failed share or proof (line 178 or 216). Collecting all failures before returning can help participants diagnose multiple issues in a single run.


244-267: Refactor from package main for library usage.
If this is intended for production or integration, consider placing the DKG logic in a dedicated package outside package main. This makes the code more reusable, testable, and modular.

gnark/libraries/verifier/impl/verifiers.go (2)

91-92: Consider removing placeholder key initialization.

The single-element key initialization in AESBaseCircuit is marked “avoid warnings.” If a real key is required, use the proper key size. Otherwise, consider removing or clarifying the placeholder to avoid confusion.


256-257: Placeholder key in AESTOPRFCircuit.

As with the AES circuit, verify if a one-element key is truly necessary. If not, consider removing or clarifying the rationale.

gnark/utils/dkg.go (4)

11-23: DKGParticipant struct details.

Struct fields align with typical DKG usage. Note that secrets and shares remain in memory; for production code, consider secure memory clearing if needed.


31-49: NewDKGState function: indexing participants from 1 to n.

Creating participants with IDs starting at 1 is functional but can cause off-by-one confusion when used with 0-based arrays. Verify usage consistency across the codebase.


50-75: GeneratePolynomials stage: consider non-panic error handling.

The code panics on random generation errors. In production, returning and handling these errors might be advisable to avoid abrupt termination.


111-125: ComputeFinalKeys stage: ephemeral memory.

While combining shares is correct, consider whether secure zeroing of sensitive data (e.g., ephemeral shares) is necessary after use in security-critical contexts.

gnark/dkg/dkg.go (3)

1-2: Consider renaming the package if this is a library.
Declaring package main is fine for an executable, but if this code is meant to be reused as a library, consider using a more descriptive package name (e.g., package dkg).


73-82: Encrypt or authenticate shares for distribution.
In real-world DKG protocols, shares should be encrypted or authenticated before distribution to prevent eavesdropping or tampering. If this is merely a proof-of-concept, just note that further security measures would be critical in production.


84-105: Consider more detailed error reporting and partial verification.
Currently, if one share verification fails, the function returns immediately and aborts the entire process. You might allow for partial verification results or continue verification for all participants before returning a comprehensive error report.

🛑 Comments failed to post (12)
gnark/utils/oprf.go (1)

56-59: 🛠️ Refactor suggestion

Investigate potential duplication in consecutive HashToPointPrecompute calls.
The calls at lines 42-45 and 56-59 appear to perform a similar operation with slight differences in parameters. Confirm if one is redundant or if they need to be consolidated to avoid confusion and repeated curve hashing.

gnark/keygen/keygen.go (1)

7-10: 💡 Verification agent

🧩 Analysis chain

Imports align with package renaming.

These updated import paths for AES and ChaCha circuits reflect the new naming scheme. Please verify that no references to the old package names (e.g., aes_v2, chachaV3) remain in the codebase, to avoid potential compilation or linking issues.


🏁 Script executed:

#!/bin/bash
# Description: Search for any leftover references that might still point to old package names.
rg 'circuits/(aes_v2|chachaV3)'

Length of output: 155


Attention: Update Documentation Reference

The import paths in gnark/keygen/keygen.go have been updated correctly per the new naming convention. However, the verification uncovered a lingering reference to the old package name in the documentation:

  • File: gnark/README.md
    Issue: The mention of circuits/chachaV3 still appears, which may cause confusion or mislead users.

Please update the documentation to reflect the new naming scheme (for example, replacing chachaV3 with the appropriate current name) to avoid any potential discrepancies between the codebase and its documentation.

gnark/circuits/aes_oprf/aes.go (1)

4-4: 💡 Verification agent

🧩 Analysis chain

Import path updated to new AES circuit.

Changing the import to "gnark-symmetric-crypto/circuits/aes" is correct. Confirm that all references to the old aes_v2 package have been removed from this file and elsewhere.


🏁 Script executed:

#!/bin/bash
# Description: Verify no references to the deprecated aes_v2 remain.
rg 'aes_v2'

Length of output: 2162


AES Circuit Import Update: Action Required

The import in gnark/circuits/aes_oprf/aes.go has been correctly updated to "gnark-symmetric-crypto/circuits/aes". However, our repository scan shows that the deprecated alias aes_v2 is still being used in several other files:

  • gnark/libraries/core_test.go
  • gnark/libraries/prover/impl/provers.go
  • gnark/circuits/aes_oprf/aes128_test.go
  • gnark/circuits/aes_oprf/aes256_test.go

Please verify if these references should also be removed or updated to ensure consistency across the codebase.

gnark/utils/utils_test.go (1)

57-59: 🛠️ Refactor suggestion

Commented-out final assertion
The final assertion is disabled, reducing test strictness. If the expected result changed, update the check; otherwise, re-enable it to validate correctness.

gnark/dkg/wip.go (1)

75-90: ⚠️ Potential issue

Use cryptographically random secrets.
Currently, the secret is set deterministically as p.ID * 10, making the DKG insecure. Use a random secret derived from a secure PRNG for production readiness.

- secret := big.NewInt(int64(p.ID * 10)) // Static secret
- secret.Mod(secret, s.P)
- p.Secret = secret
+ randSecret, err := rand.Int(rand.Reader, s.P)
+ if err != nil {
+     return fmt.Errorf("failed to generate random secret: %w", err)
+ }
+ p.Secret = randSecret
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

randSecret, err := rand.Int(rand.Reader, s.P)
if err != nil {
    return fmt.Errorf("failed to generate random secret: %w", err)
}
p.Secret = randSecret
gnark/utils/dkg.go (2)

191-223: 🛠️ Refactor suggestion

DKG orchestrator function: usage of panic and hardcoded subset.

While this example demonstrates a full DKG flow, consider returning errors instead of panicking and making the subset of participant IDs configurable rather than hardcoding in production.


127-152: 💡 Verification agent

🧩 Analysis chain

ReconstructMasterPublicKey: partial subset usage.

Using only the first t participants from the passed IDs is a valid approach. However, confirm that participant IDs exist in the state to prevent out-of-range access.


🏁 Script executed:

#!/bin/bash
# Scanning references to ReconstructMasterPublicKey to confirm call usage
rg -A 3 'ReconstructMasterPublicKey'

Length of output: 363


Attention: Validate Participant IDs in ReconstructMasterPublicKey

Even though using only the first t participants is generally fine, please ensure that every participant ID in the provided slice actually maps to an existing entry in state.Participants (i.e., check that id-1 is within the valid bounds) before indexing. This extra validation will safeguard against potential out-of-range errors when the function is activated or maintained in the future.

gnark/dkg/dkg.go (5)

51-71: ⚠️ Potential issue

Handle cryptographic randomness errors.
All calls to rand.Int(rand.Reader, &state.Curve.Order) ignore potential errors. In cryptographic code, if entropy gathering fails, it’s critical to handle the error gracefully rather than silently ignoring it.

- p.Secret, _ = rand.Int(rand.Reader, &state.Curve.Order)
+ secret, err := rand.Int(rand.Reader, &state.Curve.Order)
+ if err != nil {
+     panic(fmt.Errorf("failed to generate random secret: %w", err))
+ }
+ p.Secret = secret

123-147: 🛠️ Refactor suggestion

Check participant ID validity when reconstructing keys.
Calling p := state.Participants[id-1] can cause an out-of-range panic if id is not in [1..n]. Verify that each participantIDs entry is within valid bounds.

 for _, id := range participantIDs[:state.Participants[0].Threshold] {
+   if id < 1 || id > len(state.Participants) {
+       panic(fmt.Errorf("invalid participant ID %d", id))
+   }
    p := state.Participants[id-1]
    points[p.ID] = &p.PublicKey.A
 }

32-49: 🛠️ Refactor suggestion

Validate threshold and participant count.
The code does not check whether t <= n, which can lead to impossible thresholds when creating the DKG state. Consider adding a validation check here.

 func NewDKGState(n, t int) *DKGState {
+   if t > n {
+       panic("Threshold t must not exceed the number of participants n")
+   }
    curve := twistededwards.GetEdwardsCurve()
    ...
 }

160-185: 🛠️ Refactor suggestion

Handle modular inverse errors without panic.
The code panics if the modular inverse does not exist. While a missing inverse is unlikely in valid configurations, consider returning an error instead of panicking for better robustness.

 if denInv == nil {
-   panic("Modular inverse does not exist")
+   return nil // or return an error to the caller to handle gracefully
 }

Committable suggestion skipped: line range outside the PR's diff.


107-121: ⚠️ Potential issue

Avoid logging secret share material.
Line 113 prints each received share and the running total. This leaks critical data. In production systems, logging sensitive keys or intermediate values can compromise security.

- fmt.Printf("Participant %d: Share from %d = %s, Running SecretShare = %s\n",
-     p.ID, senderID, share.String(), secretShare.String())
+ // Remove or sanitize secret share printing in production.
+ fmt.Printf("Participant %d: Received a share from %d\n", p.ID, senderID)

@coderabbitai bot left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (1)
gnark/circuits/toprf/toprf.go (1)

176-187: ⚠️ Potential issue

Old hashToPoint implementation is incomplete

The old implementation of hashToPoint is incomplete - it sets up the hash but doesn't compute the point coordinates. This should be removed since it's now replaced by the new implementation.

-func hashToPoint(api frontend.API, curve twistededwards.Curve, data [2]frontend.Variable, domainSeparator, counter, xOrig frontend.Variable) (*twistededwards.Point, error) {
-	api.AssertIsLessOrEqual(counter, 255) // hash counter is 0..255
-
-	d := curve.Params().D
-
-	hField, err := mimc.NewMiMC(api)
-	if err != nil {
-		return nil, err
-	}
-	hField.Write(data[0])
-	return nil
-}
🧹 Nitpick comments (1)
gnark/circuits/toprf/toprf_test.go (1)

28-36: Implementation looks correct but consider returning the point

The Define method correctly initializes the curve and calls hashToPoint with the appropriate parameters. However, the method discards the returned point and only returns the error. This is fine for testing but consider capturing the point for more assertions if needed.

-	_, err = hashToPoint(api, curve, h.Data, h.DomainSeparator, h.Counter, h.X, h.Y)
+	point, err := hashToPoint(api, curve, h.Data, h.DomainSeparator, h.Counter, h.X, h.Y)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 970d332 and aea48fd.

📒 Files selected for processing (3)
  • gnark/utils/toprf.go (1 hunks)
  • gnark/circuits/toprf/toprf.go (4 hunks)
  • gnark/circuits/toprf/toprf_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • gnark/utils/toprf.go
  • gnark/circuits/toprf/toprf.go
  • gnark/circuits/toprf/toprf_test.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (5)
gnark/circuits/toprf/toprf_test.go (2)

20-26: Good addition of the HasherCircuit struct

The HasherCircuit struct is well-designed with appropriate fields for testing the hash-to-curve functionality. All necessary parameters are included.


38-53: TestHashToPoint function looks good

The test correctly initializes the circuit with test data, compiles it, and verifies there are no errors. Printing the constraint count is a good practice for ZK circuits to track performance.

gnark/circuits/toprf/toprf.go (3)

28-30: Good addition of fields for hashToCurve

The new fields Counter, X, and Y are well-documented and appropriately added to the Params struct to support the enhanced hashToCurve functionality.


92-92: Updated hashToPoint call with new parameters

The VerifyTOPRF function has been properly updated to pass the new Counter, X, and Y parameters to the hashToPoint function.


189-222:

❓ Verification inconclusive

Strong implementation of HashToPoint with proper validation

The hashToPoint function has been significantly improved with:

  1. Proper domain separation with counter
  2. Curve equation calculation for computing x² from y
  3. Validation that the provided x matches the computed value
  4. Point doubling to derive the final point
  5. Validation that the final point is on the curve

The implementation follows secure practices for hashing to elliptic curve points.

One thing to verify - the triple point doubling operation (lines 214-217) should be documented to explain its purpose. Is this a standardized approach for this curve or a specific requirement?


🏁 Script executed:

#!/bin/bash
# Check if there's documentation or other instances of this triple doubling pattern
rg -A 3 -B 3 "point = curve.Double\(point\)" --type go

Length of output: 609


Action: Clarify the Triple Doubling Operation for Cofactor Clearing

The HashToPoint implementation is robust, with thorough domain separation, curve equation checks, and point validation. The triple doubling of the point in gnark/circuits/toprf/toprf.go (lines 214–217) is used to multiply the point by 8, effectively clearing the cofactor—a standard requirement for curves with a cofactor of 8. However, since this pattern appears only in this file, please add a clarifying comment explaining that the three successive doublings are intentionally performed for cofactor clearance. This will help future maintainers understand why exactly this operation is necessary and whether it reflects a standardized approach or a curve-specific requirement.

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Nitpick comments (17)
gnark/utils/toprf.go (1)

56-106: Consider refactoring the new CreateLocalSharesDKG function for modularity.
This function is quite long and combines multiple phases of the DKG flow (node initialization, distributing shares, verifying, final key computation). Splitting it into smaller, well-defined steps (e.g., distributing shares in a separate function) could improve maintainability and simplify testing.

 func CreateLocalSharesDKG(N, T int) ([]*Share, error) {
   // ...
-  // All phases are handled here
+  // Suggest factoring out distribution, verification, and final key computation 
+  // into separate helper functions for improved clarity.
   return result, nil
 }
gnark/utils/utils_test.go (1)

42-70: Great expanded test coverage with TestTOPRFDKG.
This multi-iteration random test ensures consistent outcomes. As a minor enhancement, test edge cases (e.g., threshold=1, threshold=nodes-1) to verify boundary correctness.

gnark/circuits/toprf/testdata.go (1)

72-73: Remove or clarify commented-out lines.
Leaving commented code (pk := ... and debug print) can cause confusion. If it's no longer needed, consider removing it.

-    // pk := utils.TOPRFThresholdMul(idxs, sharePublicKeysIn)
-    // fmt.Println("master public key X:", pk.X.String())
gnark/utils/dkg.go (4)

11-11: Consider avoiding a global curve variable.

Storing the curve globally may be acceptable for some usage patterns, but ensuring the curve is accessed consistently (and concurrency-safely, if applicable) is crucial. Alternatively, consider passing the curve as a dependency to the DKG struct or as a local configuration to allow better control and testing.


56-74: Enhance error reporting for share verification.

The method relies on successful parsing of node IDs and does not handle the case where SetString might fail if the node ID is invalid. Ensuring robust error handling can help diagnose protocol misuse and strengthen security.


76-84: Ensure threshold-based checks before computing final keys.

Although summing all received shares can be valid, verifying that the number of shares equals the threshold (or meets other protocol constraints) before computing the final key can catch incomplete or invalid states quicker.


108-116: Validate non-empty polynomial coefficients.

Ensure that coeffs is not empty before accessing coeffs[len(coeffs)-1] in case unexpected usage occurs. This check anticipates caller errors or empty polynomials.

gnark/dkg/client/client.go (3)

53-59: Consider adding inline documentation for Client fields.

The Client struct is central to DKG usage. Brief descriptive doc comments for each field (e.g., NodeID, PublicKeys) can help future maintainers understand its responsibility and usage.


260-292: Improve error resilience when decrypting shares.

Should decryption fail for a single share, the code prints a message and continues. Decide whether this partial failure should block the entire DKG process; if partial success is acceptable, document that behavior explicitly.


325-363: Introduce error handling for partial phases.

In Run(), each phase is executed sequentially, and a single error terminates the flow for the client. This is a valid approach. However, consider whether partial successes in some phases (e.g., registering, but failing to submit shares) can or should be retried or recovered from automatically.

gnark/dkg/server/server.go (7)

1-2: Consider placing this logic in a dedicated package rather than main.
While having everything in main can be convenient for a quick demo, it's often more maintainable to keep library code in a named package for reusability, then create a separate main.go for the entry point.


99-130: Validate node ID usage and handle re-registration scenarios.
Registration currently assigns node IDs in sequence, but if a node attempts to re-register or if multiple clients attempt to register concurrently, it may create confusion or unintended states. Consider adding checks (e.g., preventing multiple registrations from the same public key) or supporting node re-registration.


154-158: HTTP status code 425 (“Too Early”) is unusual but acceptable for a “not ready” scenario.
While valid, it’s a rarely used status code. If interoperability is a concern, a more common status such as 503 (Service Unavailable) may be a safer fit for a “not ready yet” state.


160-177: Validate commitments more robustly if required.
The code successfully unmarshals the commitments but does not perform additional checks (e.g., ensuring points are in the correct subgroup or verifying the number of points). If stronger consistency or security checks are needed, consider verifying each point.


197-231: Consider restricting self-sharing and enhancing error clarity.
There is no check preventing FromNodeID from equaling ToNodeID. It might be nonsensical for a node to send a share to itself, so either disallow or clarify that scenario.


277-288: Threshold logic is partially enforced.
The code returns “Not enough public shares” if fewer than Threshold are present. Confirm that this threshold-based gating is desired for all use cases. Some designs might require partial data retrieval even below threshold.


290-352: Ensure persistency or backups if necessary.
The current implementation stores all DKG data in-memory. A server restart leads to data loss, which can disrupt the DKG process. If persistence is needed (e.g., for a production environment), consider integrating a storage layer or a replay mechanism.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aea48fd and 284c45e.

⛔ Files ignored due to path filters (1)
  • gnark/go.sum is excluded by !**/*.sum
📒 Files selected for processing (7)
  • gnark/circuits/toprf/testdata.go (3 hunks)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
  • gnark/go.mod (1 hunks)
  • gnark/utils/dkg.go (1 hunks)
  • gnark/utils/toprf.go (4 hunks)
  • gnark/utils/utils_test.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test-js-lib (22, js)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (14)
gnark/utils/toprf.go (1)

5-5: Import of "fmt" is appropriate.
This import is needed for string formatting and error messages in subsequent lines.

gnark/utils/utils_test.go (2)

9-9: New import for github.com/stretchr/testify/assert is fine.
The assert package is used for improved test readability and easier debugging.


39-39: Confirm updated base64-encoded expected value.
The expected output is changed to "AaYom3iSNY0vhCDPIXt96j+oQAQdRFzdGfQTIcZQlsg=". Verify that it matches the newly computed OPRF final result for the given test inputs.

gnark/circuits/toprf/testdata.go (4)

36-36: Usage of CreateLocalSharesDKG looks correct.
This cleanly replaces the older single-secret logic with a threshold-based approach.


44-46: Arrays for storing response points and public keys are well-structured.
No concerns here; this organization supports threshold operations cleanly.


75-75: TOPRFFinalize usage is consistent.
Finalizing without a dedicated server key aligns with the new threshold approach.


84-85: New fields Counter and X in Params are appropriate.
These two fields enhance the test or circuit logic by capturing request data.

gnark/utils/dkg.go (2)

86-106: Clarify partial acceptance of public shares.

The code stops collecting public keys once used >= d.Threshold. This is correct for most threshold logic, but consider adding clarifying comments to ensure future maintainers understand that shares beyond the threshold are not needed.


118-135: Confirm correct modulo handling for negative values.

The approach of adding the modulus to num and den when they are negative is standard. Ensuring repeated modulo operations in intermediate steps can further guard against edge cases, but this code generally appears correct for threshold cryptography usage.

gnark/go.mod (1)

3-44:

❓ Verification inconclusive

Verify and tidy module dependencies.

Upgrading to Go 1.23.0 and updating dependencies can introduce changes in APIs and security patches. It's best practice to run a go mod tidy and check for conflicts or vulnerabilities. Ensure the new versions are thoroughly tested for compatibility and that they don’t break existing features.


🏁 Script executed:

#!/bin/bash
# Description: Validate and tidy the module.

go mod tidy
go list -m all

Length of output: 238


Below is a revised review comment based on the new verification context. The initial run of the dependency verification commands failed because they were executed from the repository root rather than from the location of the module file. Please re-run these commands from the appropriate directory.

Focus on Running Go Module Commands in the Correct Directory
It appears that running go mod tidy and go list -m all at the repository root resulted in errors because the gnark/go.mod file was not found there. Please ensure you are in the gnark directory when running these commands. For example, execute the following:

cd gnark && go mod tidy && go list -m all

Once executed from the correct directory, verify that:

  • The module dependencies are tidied without issues.
  • The list of modules is complete and in line with the expected dependency versions.

After confirming that the updated dependencies are compatible with Go 1.23.0 and that no existing features break, proceed with the integration.

gnark/dkg/client/client.go (1)

74-103: Gracefully handle all non-200 responses in POST requests.

You already handle resp.StatusCode != http.StatusOK (line 96), returning an error. Confirm that further error details in the response body (if any) are safely logged or surfaced for troubleshooting.

gnark/dkg/server/server.go (3)

59-67: Watch out for nested locking between Server and DKGInstance.
The Server and the embedded DKGInstance each contain their own sync.Mutex. If future changes require acquiring both locks in different orders, it could introduce a risk of deadlock. Currently, these locks appear to be used separately, but be mindful when expanding or modifying concurrency logic.


76-93: Parameter validation and defaults look good.
Your logic correctly ensures that threshold <= numNodes and both are positive. Everything here appears to comply with standard best practices for basic parameter validation.


242-275: Optional: Validate presence on the correct curve.
Public shares are unmarshaled into PointAffine, but we rely on the library’s internal validation. If you need stronger assertions (e.g., checking for the identity or ensuring membership in the subgroup), consider additional checks.

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
gnark/dkg/client/client.go (2)

61-72: Consider error handling strategy in constructor.
Currently, NewClient uses panic on failure to generate an identity. If you expect the client to run in environments where key generation might fail unexpectedly, consider returning an error instead of panicking to allow the caller to handle it gracefully.

 func NewClient() (*Client, error) {
  identity, err := age.GenerateX25519Identity()
  if err != nil {
-    panic(fmt.Sprintf("Failed to generate age key pair: %v", err))
+    return nil, fmt.Errorf("failed to generate age key pair: %w", err)
  }
  ...
 }

105-132: Potential error handling improvement in get method.
Inside the polling calls, the second return variable from c.get(...) is discarded, which could hide transient errors (e.g., network issues). Consider logging or rechecking that error to avoid silently ignoring potential issues.

 ok, err := c.get("/dkg/nodes", &nodesResp)
 if err != nil {
     fmt.Printf("%s: error fetching nodes: %v\n", c.NodeID, err)
 }
 ...
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 284c45e and 77a0a95.

📒 Files selected for processing (2)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: test-js-lib (22, js)
  • GitHub Check: test-circom-circuits (22, circom)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (38)
gnark/dkg/client/client.go (15)

1-2: Well-structured package declaration.
No issues here; it clearly identifies the main package for an executable.


3-16: Imports review.
All imports appear relevant for HTTP requests, JSON handling, and cryptographic utilities. No immediate concerns identified.


18-59: Public data structures are logically defined.
The DKG-related response structs and the Client struct are well-organized, each capturing distinct protocol data. The approach to storing data in typed struct fields ensures clarity and ease of marshaling to/from JSON.


74-103: Robust request logic with clear error messages.
The post method provides thorough error handling for JSON marshaling, request creation, and response decoding, which improves maintainability and debuggability. Overall looks good.


134-147: Polling mechanism is straightforward.
Using a time-based polling loop with a timeout is clear and works for small-scale demos. For production, a more event-driven approach or a backoff strategy might be more robust.


150-195: Register method logic is solid.
It provides a clear flow for sending the public key and retrieving the node ID. The use of poll for waiting on all nodes is reasonable in a test/demonstration environment.


197-209: Commitment submission.
No concerns with the approach of marshaling polynomials to JSON and posting them. This step is well-isolated, which aids extensibility.


211-231: Fetch commitments.
Retrieves and unmarshals all commitments from a map into typed point structures. The simple map-based approach is effective for small-scale DKG setups.


233-264: Secure share submission with encryption.
Encryption using age.Encrypt is a solid step for confidentiality. Ensure that the receiving side always properly validates or handles partial writes. Currently, the code closes the writer as expected. Looks good.


266-298: Fetched shares are promptly decrypted.
Implementation correctly differentiates shares from each sender. Storing them in c.DKG.ReceivedShares is neat and easy to reference for the next verification step.


300-310: Submission of public share is concise.
Marshaling the final public share into a POST body is straightforward, and the server-side validation is likewise easy to track.


312-332: Fetch public shares.
It’s good that you use a threshold check on the server side. Ensure any potential leftover or missing shares don’t break subsequent usage. Current approach looks consistent for threshold-based DKG.


334-372: Run orchestration.
Each DKG phase is sequenced carefully. The code handles errors gracefully. This structure is helpful for debugging, as each call’s success/failure is logged distinctly.


374-388: waitForServer is simple and effective for local dev.
It loops until /health responds successfully. Adequate for your test environment, though production code may need more robust checks (TLS, etc.).


390-404: main entry point.
Spinning up multiple clients in goroutines is a neat demonstration of distributed behavior. Just ensure large-scale concurrency is handled in a more advanced way if needed.

gnark/dkg/server/server.go (23)

1-2: Appropriate package naming for a main server file.
No issues here; it clarifies this is an executable entry point.


3-19: Imports provide standard HTTP, JSON, OS, and cryptographic functionality.
All appear necessary for setting up an Echo-based web server and handling the DKG logic.


21-25: DKGResponse struct is suitably generic.
Providing a status, message, and optional data ensures flexible responses for all endpoints.


27-30: RegisterResponse
Encapsulates node ID and key, matching the client’s retrieval logic. Straightforward approach.


32-34: CommitmentsResponse
Holds a map of node IDs to serialized commitments. This symmetrical structure with the client fosters clarity.


36-38: SharesResponse
Grouping shares by [fromNodeID][toNodeID] in the server is a sensible approach for multi-party DKG.


40-42: PublicSharesResponse
Mirrors the client’s structure for collecting final public shares. Good consistency.


44-46: CommitmentData: no concerns.
Holds raw byte commitments from the client. The server unpacks them properly.


48-50: ShareData: secure by design.
Should remain encrypted until decrypted by the intended recipient. The server simply stores it, which is appropriate for a coordinated DKG.


52-54: PublicShareData: straightforward.
Carries the final serialized public key data.


56-65: DKGInstance: concurrency is handled with a sync.Mutex.
This central store for node registry, commitments, and shares ensures consistent state. The design is suitable for small demos.


67-72: Server struct.
Holds DKGInstance plus additional channels and maps for registration. Clear separation of responsibilities.


74-91: NewServer: parameter validation.
The code rightly rejects invalid numNodes or threshold. The approach to building the DKGInstance is clean.


93-95: health endpoint.
Quick readiness check is a typical approach for local dev and k8s probes. Straightforward.


97-128: register endpoint.
Locks are used properly to protect shared state. Closing RegistrationDone channel signals the system to proceed. This meets the client’s polling expectation.


130-140: getNodes endpoint.
Returns node list and public keys only after all nodes register. Using StatusTooEarly fosters clarity for incomplete states.


142-175: submitCommitment endpoint.
Checks for valid node ID and draws on RegistrationDone. The lock ensures correct insertion into DKG.Commitments. Looks fine.


177-193: getCommitments endpoint.
Serializes commitments into byte slices. Using StatusTooEarly for partial states is consistent.


195-232: submitShare endpoint.
Comprehensive checks: from-node, to-node, and ephemeral encryption data. Uses a lock to safely store into the Shares map. Good pattern.


234-241: getShares endpoint.
Ensures all shares have been submitted before returning success. This aligns with the client’s approach to poll.


243-276: submitPublicShare endpoint.
The approach for final public share registration parallels earlier endpoints. Node validation and concurrency locks are used defensively.


278-289: getPublicShares endpoint.
Blocks until the threshold is met. The server concurrency design remains consistent.


291-343: main initialization.
Reads environment variables, builds the server, and sets up Echo routes. Also gracefully shuts down on signals with a timeout. This is a neat demonstration of production readiness.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
gnark/utils/toprf.go (1)

173-176: ⚠️ Potential issue

Avoid panicking in cryptographic library code.

Panic calls can abruptly terminate the application, which may be problematic in library contexts. Consider returning an error instead:

 func TOPRFGenerateSharedKey(nodes, threshold int) *SharedKey {
 	sk, err := rand.Int(rand.Reader, TNBCurveOrder)
 	if err != nil {
-		panic(err)
+		// Return an error and let the caller handle it.
+		return nil // or modify the signature to return (*SharedKey, error)
 	}
gnark/utils/dkg.go (1)

28-38: 🛠️ Refactor suggestion

Validate node ID and threshold in NewDKG.

You’re ignoring potential conversion errors from strconv.Atoi(nodeID) and not verifying that threshold is within valid bounds (for example, nonzero and ≤ numNodes). Consider handling errors and panics here to catch misconfiguration early in the DKG process.

 func NewDKG(threshold, numNodes int, nodes []string, nodeID string) *DKG {
-	id, _ := strconv.Atoi(nodeID)
+	id, err := strconv.Atoi(nodeID)
+	if err != nil {
+		panic(fmt.Sprintf("invalid nodeID format %q: %v", nodeID, err))
+	}
+	if threshold < 1 || threshold > numNodes {
+		panic(fmt.Sprintf("invalid threshold: %d (must satisfy 1 <= threshold <= %d)", threshold, numNodes))
+	}
 	return &DKG{
 		Threshold:      threshold,
 		NumNodes:       numNodes,
🧹 Nitpick comments (7)
gnark/utils/dkg.go (1)

73-100: Monitor performance of share verification loops for large node counts.

VerifyShares loops over all received shares and each polynomial coefficient, multiplying curve points repeatedly. This is correct but can become expensive for large Threshold × NumNodes. If scaling to larger networks, consider optimizations like multi-exponentiation.

gnark/dkg/client/client.go (2)

133-146: Consider exponential backoff or maximum retries in poll.

Currently, the method polls every 500 ms for up to 30 seconds. If the server remains unreachable or the condition never becomes true, this leads to repeated requests. Implementing a backoff or request limit may help prevent undue load on the server.


261-294: Enhance logging or error messages for decryption failures.

When decrypting shares, lines 270–279 log an error but continue. This can be helpful for partial data retrieval, but consider more detailed logging or skipping the entire run if critical shares remain undecryptable.

gnark/dkg/server/server.go (4)

102-139: Consider adding protection against duplicate registrations

The current implementation assigns a unique ID to each node during registration but doesn't prevent the same client from registering multiple times with different public keys. This could potentially be exploited in a multi-party protocol.

Consider adding a mechanism to identify clients uniquely, such as:

func (s *Server) register(c echo.Context) error {
    var req struct {
        PublicKey string `json:"public_key"`
+       ClientID string `json:"client_id,omitempty"`    // Optional unique identifier
    }
    // ... existing validation code ...
    
+   // Check for duplicate client if ID provided
+   if req.ClientID != "" {
+       for existingID := range s.RegisteredNodes {
+           if s.DKG.ClientIdentifiers[existingID] == req.ClientID {
+               return c.JSON(http.StatusConflict, DKGResponse{Status: "error", Message: "Client already registered"})
+           }
+       }
+   }
    
    // ... rest of the function ...
+   if req.ClientID != "" {
+       s.DKG.ClientIdentifiers[fmt.Sprintf("%d", nodeID)] = req.ClientID
+   }
}

You would need to add a ClientIdentifiers map to the DKGInstance struct.


248-260: Add timeout handling for getShares method

The current getShares implementation will keep returning a "too early" status until all shares are received, but there's no timeout mechanism to handle cases where one or more nodes fail to submit their shares.

Consider adding a timeout mechanism or a way to proceed with a subset of shares if the threshold is met:

func (s *Server) getShares(c echo.Context) error {
    s.DKG.Lock()
    defer s.DKG.Unlock()
    
    expectedShareCount := s.DKG.NumNodes * (s.DKG.NumNodes - 1)
    actualShareCount := 0
    for _, shares := range s.DKG.Shares {
        actualShareCount += len(shares)
    }
    
+   // Check if we've received enough shares to meet the threshold
+   // Even if not all shares are received, we may be able to proceed if 
+   // we have t*n shares where t is the threshold
+   minRequiredShares := s.DKG.Threshold * s.DKG.NumNodes
+   
+   if actualShareCount < expectedShareCount {
+       // Optional: Check if force parameter is provided and enough shares are available
+       forceParam := c.QueryParam("force")
+       if forceParam == "true" && actualShareCount >= minRequiredShares {
+           log.Warnf("Proceeding with partial shares: %d/%d (minimum required: %d)", 
+                    actualShareCount, expectedShareCount, minRequiredShares)
+       } else {
            return c.JSON(http.StatusTooEarly, DKGResponse{
                Status: "error", 
                Message: fmt.Sprintf("Not all shares received: got %d, expected %d", 
                           actualShareCount, expectedShareCount)
            })
+       }
    }
    
    return c.JSON(http.StatusOK, DKGResponse{
        Status: "success", 
        Data: SharesResponse{Shares: s.DKG.Shares}
    })
}

Similar logic could be applied to the getPublicShares method as well.


72-77: Consider adding a state machine for better protocol flow control

The current implementation uses channels and mutex locks to manage the DKG protocol flow, but a more explicit state machine approach might make the code more maintainable and easier to reason about.

Consider enhancing the Server struct to include a state field:

type Server struct {
    DKG              *DKGInstance
    RegisteredNodes  map[string]bool
    RegistrationDone chan struct{}
+   state            string // e.g., "registration", "commitment", "sharing", "complete"
+   stateLock        sync.RWMutex
    sync.Mutex
}

+func (s *Server) setState(newState string) {
+   s.stateLock.Lock()
+   defer s.stateLock.Unlock()
+   s.state = newState
+}
+
+func (s *Server) getState() string {
+   s.stateLock.RLock()
+   defer s.stateLock.RUnlock()
+   return s.state
+}

This could help enforce protocol order and make it easier to track the current stage of the DKG process.


344-349: Add context cancellation to server start goroutine

The server start goroutine could potentially leak if the program exits before the server has a chance to start, although this is mitigated by the signal handling.

Consider passing a context to the server start goroutine:

+    ctx, cancel := context.WithCancel(context.Background())
+    defer cancel()
     
     go func() {
-        if err := e.Start(":" + port); err != nil && err != http.ErrServerClosed {
+        if err := e.StartServer(&http.Server{
+            Addr:    ":" + port,
+            BaseContext: func(_ net.Listener) context.Context { return ctx },
+        }); err != nil && err != http.ErrServerClosed {
             log.Fatalf("Server failed: %v", err)
         }
     }()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 77a0a95 and 0121fa6.

📒 Files selected for processing (4)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
  • gnark/utils/dkg.go (1 hunks)
  • gnark/utils/toprf.go (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: test-gnark-circuits (stable)
  • GitHub Check: test-js-lib (22, js)
🔇 Additional comments (7)
gnark/utils/toprf.go (2)

57-107: Consider concurrency or parallel usage constraints in CreateLocalSharesDKG.

While this function initializes multiple DKG instances and loops through them to distribute, verify, and compute final keys, it relies on successive loops and direct indexing. If there's a plan to run these steps in parallel or a large number of nodes, be mindful of concurrency controls or step synchronization. Otherwise, this sequential flow appears fine for smaller setups.


117-130: Handle zero-denominator cases when computing Lagrange coefficients.

This logic modifies num and den without checking if the subtraction yields zero before performing the inversion (gf.Inv). A zero in the denominator will raise an error in most field operations. This issue was previously mentioned in older review comments.

gnark/utils/dkg.go (2)

40-50: Check for errors from rand.Int in GeneratePolynomials.

Line 44 calls rand.Int ignoring the returned error. In cryptographic operations, failing to handle randomness errors can compromise security.


102-112: Validate that each node stores its own share.

When computing final keys, the code aggregates shares from other nodes. Double-check that each node's self-share (if any) is properly handled or intentionally excluded. If no self-share is needed, this is fine.

gnark/dkg/client/client.go (1)

392-404: Check concurrency constraints in main.

You launch multiple clients with goroutines, each performing a full DKG run. This is permissible as long as each client’s data structures and network endpoints are independent. Confirm that any global state (e.g., server endpoint) has enough concurrency handling to prevent race conditions.

gnark/dkg/server/server.go (2)

1-19: LGTM: Imports and package structure look good

The package structure and imports are well-organized, with appropriate external dependencies for the HTTP server (Echo framework) and cryptographic operations (age, gnark-crypto).


21-70: Good data structure design with proper JSON tags

The response and data structures are well-defined with appropriate JSON tags for serialization/deserialization. The use of a mutex in the DKGInstance struct ensures thread safety for concurrent operations.

Comment on lines 311 to 326
func main() {
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
numNodesStr := os.Getenv("NUM_NODES")
thresholdStr := os.Getenv("THRESHOLD")
numNodes, err := strconv.Atoi(numNodesStr)
if err != nil {
log.Fatalf("Invalid NUM_NODES: %v", err)
}
threshold, err := strconv.Atoi(thresholdStr)
if err != nil {
log.Fatalf("Invalid THRESHOLD: %v", err)
}


🛠️ Refactor suggestion

Add default values or better error handling for environment variables

The current implementation will crash with a cryptic error message if NUM_NODES or THRESHOLD environment variables are not set. Consider providing default values or more descriptive error messages.

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    numNodesStr := os.Getenv("NUM_NODES")
+   if numNodesStr == "" {
+       log.Fatal("NUM_NODES environment variable is required")
+   }
    thresholdStr := os.Getenv("THRESHOLD")
+   if thresholdStr == "" {
+       log.Fatal("THRESHOLD environment variable is required")
+   }
    numNodes, err := strconv.Atoi(numNodesStr)
    if err != nil {
        log.Fatalf("Invalid NUM_NODES: %v", err)
    }
    threshold, err := strconv.Atoi(thresholdStr)
    if err != nil {
        log.Fatalf("Invalid THRESHOLD: %v", err)
    }
    // ...
}

Alternatively, consider using a configuration library that supports default values.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func main() {
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
numNodesStr := os.Getenv("NUM_NODES")
thresholdStr := os.Getenv("THRESHOLD")
numNodes, err := strconv.Atoi(numNodesStr)
if err != nil {
log.Fatalf("Invalid NUM_NODES: %v", err)
}
threshold, err := strconv.Atoi(thresholdStr)
if err != nil {
log.Fatalf("Invalid THRESHOLD: %v", err)
}
func main() {
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
numNodesStr := os.Getenv("NUM_NODES")
if numNodesStr == "" {
log.Fatal("NUM_NODES environment variable is required")
}
thresholdStr := os.Getenv("THRESHOLD")
if thresholdStr == "" {
log.Fatal("THRESHOLD environment variable is required")
}
numNodes, err := strconv.Atoi(numNodesStr)
if err != nil {
log.Fatalf("Invalid NUM_NODES: %v", err)
}
threshold, err := strconv.Atoi(thresholdStr)
if err != nil {
log.Fatalf("Invalid THRESHOLD: %v", err)
}
// ...
}

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (7)
gnark/dkg/client/client.go (3)

1-1: Consider renaming the package for clarity.

Although this file is under gnark/dkg/client, it is declared as package main. If the intention is to build a client library or reuse these methods elsewhere, adjusting the package name (e.g., package client) helps reflect its purpose.


282-292: Add encryption error handling detail.

When encrypting shares, a partial write to the buffer could occur if an error arises midstream. Currently, the code attempts to handle it but consider logging detailed context (e.g., share index or node ID in the error message) to facilitate troubleshooting.


398-405: Consider partial retries for ephemeral failures.

If SubmitCommitments or SubmitShares partially fail for network reasons in Run(), the entire routine halts. You might benefit from an exponential backoff or partial retry mechanism, especially in distributed scenarios where temporary network issues can occur.

If you’d like, I can provide example code illustrating an exponential backoff retry pattern.

gnark/dkg/server/server.go (4)

161-169: Rethink using context cancelation to signal registration completion.

Using s.Cancel() to indicate "registration complete" is counterintuitive, because context cancelation conventionally signals termination. Although the code works, consider a separate "registration done" flag or channel for clarity.
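A sketch of the dedicated-channel alternative; the registry type and its fields are hypothetical, not the server's actual structs.

```go
package main

import (
	"fmt"
	"sync"
)

// registry signals registration completion through a dedicated channel
// instead of overloading context cancelation.
type registry struct {
	mu       sync.Mutex
	needed   int
	nodes    []string
	done     chan struct{}
	doneOnce sync.Once
}

func newRegistry(n int) *registry {
	return &registry{needed: n, done: make(chan struct{})}
}

func (r *registry) register(id string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.nodes = append(r.nodes, id)
	if len(r.nodes) >= r.needed {
		// Close exactly once; later registrations cannot re-close the channel.
		r.doneOnce.Do(func() { close(r.done) })
	}
}

// Done lets handlers wait for registration without conflating it with shutdown.
func (r *registry) Done() <-chan struct{} { return r.done }

func main() {
	r := newRegistry(2)
	r.register("node-1")
	r.register("node-2")
	<-r.Done() // returns immediately once all nodes are registered
	fmt.Println("registration complete")
}
```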


177-191: Improve error reporting for invalid public keys.

When a node provides an invalid public key, the message "Invalid public key format" is returned, but the server logs the actual parsing error. Consider including the reason (e.g., length mismatch) in the response (if it’s not sensitive) to aid the client’s debugging.


309-329: Ensure share validation is robust for future expansions.

The code checks for nil shares, self-sending shares, empty encrypted shares, and unregistered node IDs. If future logic requires more constraints (e.g., share size boundaries), consider centralizing validation logic in a separate function or struct method to keep it maintainable.


463-481: Add graceful shutdown logs or fallback.

When shutting down, you currently log the event, attempt graceful shutdown with a 10-second timeout, and then log again. Consider adding a final fallback (emergency shutdown) if the server fails to stop within the 10-second grace period to guarantee termination in some corner cases.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 84d7457 and 18d966b.

📒 Files selected for processing (2)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (5)
gnark/dkg/client/client.go (3)

80-82: Avoid panicking on key generation.

The code panics if age.GenerateX25519Identity() fails. This can crash the entire application. Returning an error or logging a warning is preferable for library-grade robustness.


144-162: Verify polling conditions.

The function poll stops polling only if the condition callback returns true or the 30s timeout is reached. Ensure that the polled endpoint reliably transitions to a success state. Also confirm that 30 seconds is sufficient for all DKG phases in higher latency environments.

Would you like a script to test slower/failure scenarios and verify that poll handles them gracefully?


308-320: Validate node index resolution more robustly.

The method findUUIDByIndex returns an empty string if parsing or the lookup fails. Ensure that such conditions are handled in the calling code, as continuing with an empty node ID for encryption or posting shares might lead to cryptic errors.

gnark/dkg/server/server.go (2)

207-215: Race potential in node indexing.

Assigning i + 1 as the node index in a loop might risk concurrency issues if register is called from multiple goroutines (though currently you lock the server). This is fine if the locking is always maintained—just confirm there's no path that modifies Nodes or RegisteredNodes without the lock.


351-369: Clarify share readiness conditions.

Your logic calculates expectedShareCount = s.NumNodes * (s.NumNodes - 1) to confirm that all shares are present. Ensure that the system can't proceed if partial or duplicate shares appear, or if there are out-of-band shares from unregistered nodes. Currently, the concurrency model plus server locking likely enforces correctness.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (10)
gnark/dkg/types.go (2)

3-6: Consider adding validation or additional metadata for PublicKey.
While these types are generally self-explanatory, if there's a need to ensure that a PublicKey field meets certain cryptographic format requirements (e.g., prefix, length), you might embed validation logic or strongly type this field to help prevent potential misuse down the road.


14-20: Ensure consistency in naming for node-related fields.
Fields such as Nodes, NodeIndices, Threshold could conceptually be grouped under a single parent struct that represents the DKG context if that enhances clarity. Currently, they work fine, but consider grouping them for improved scalability or clarity if these structs grow in complexity.

gnark/dkg/server/server.go (4)

153-160: Watch out for potential race conditions with simultaneous registrations.
Although s.Lock() is used here, multiple concurrent calls can stack up before the lock is acquired. This is not necessarily incorrect, but be aware that near-simultaneous nodes might pass the len(s.RegisteredNodes) >= s.NumNodes check concurrently. If that’s undesirable, you might consider an atomic check or additional logic.


175-177: Consider returning more contextual data if registration is slow.
When all nodes are registered, you call s.Cancel() to indicate completion. If there’s a large time gap or unexpected delay, it may be useful for clients to see partial states (e.g., “4/5 nodes registered”) via an endpoint. This can help debug issues in large-scale distributed setups.


310-323: Validate threshold-based logic outside of full group shares.
Here, you check if actualShareCount < expectedShareCount and return HTTP 425 (Too Early) if not enough shares are collected. In some DKG approaches, a threshold is sufficient to proceed. If your design requires full n-of-n shares, this is correct. Otherwise, consider adjusting the logic for threshold-based acceptance.


374-400: Improve error messages for missing environment variables or invalid states.
While defaults are assigned for NUM_NODES and THRESHOLD, if the user provides partial or nonsensical environment variables, it might be beneficial to include more explicit warnings or logs.

gnark/dkg/client/client.go (4)

52-55: Avoid panicking on key generation failures.
This panic is appropriate for demos, but in a production library, gracefully handling the error (e.g., returning it to the caller) is typically more robust.

 if err != nil {
-    panic(fmt.Sprintf("Failed to generate age key pair: %v", err))
+    return nil, fmt.Errorf("Failed to generate age key pair: %w", err)
 }

114-134: Revisit polling approach for improved resiliency.
Using a fixed 30s timeout with repeated short sleeps is simple and works in most small-scale scenarios. For larger deployments or network latencies, this might be insufficient. Consider exponential backoff or a user-defined max timeout if the connection is slow or the server is under high load.


262-268: Handle encryption writer closure carefully.
If a partial write or an error occurs, cleaning up resources (e.g., calling w.Close()) might need additional logic to avoid data corruption or blocked buffers. With the current pattern, you're closing in the error case but ensure that partial writes can't leak data.


392-406: Evaluate partial share verification vs. full share verification.
Right now, the code verifies all shares after they’re fetched. If a single node’s shares fail, the entire process stops. For large networks or partial re-sharing protocols, you might consider a strategy that logs partial failures without halting other correctly functioning nodes.

Do you want me to propose an approach where partial verification errors don’t break the entire workflow?

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 18d966b and 98a4544.

📒 Files selected for processing (3)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
  • gnark/dkg/types.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: test-gnark-circuits (stable)
  • GitHub Check: test-js-lib (22, js)
  • GitHub Check: test-circom-circuits (22, circom)
🔇 Additional comments (3)
gnark/dkg/types.go (1)

58-66: Validate usage of byte slices in JSON.
Fields like PublicShare []byte and Commitment []byte will often appear as base64-encoded when marshalled to JSON. Ensure that server and client code handle them correctly, as inconsistent encoding/decoding can cause subtle integration bugs.

Please confirm that all client and server endpoints consistently encode and decode byte slices (e.g., base64) to avoid mismatches.

gnark/dkg/server/server.go (2)

121-133: Confirm concurrency approach for checking registration completion.
Using s.Lock() then immediately returning HTTP 425 (Too Early) if registration isn’t complete is a good idea, but be mindful that once the lock is released, external state may change. If you need stronger guarantees (e.g. preventing a subsequent request from bypassing part of the check), consider holding the lock for the duration of the relevant logic.


229-248: Verify that only the correct node can retrieve or overwrite these commitments.
Currently, you check whether a node's commitments are resubmitted and return an error. Ensure that there's no trick allowing a different Node-ID to overwrite existing commitments. Using the same map key with a forged Node-ID might pose a risk unless there's a strong identity check.

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 12, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

♻️ Duplicate comments (2)
gnark/utils/dkg.go (1)

44-55: ⚠️ Potential issue

Do not ignore errors from rand.Int.

Line 48: coef, _ := rand.Int(...) discards the error in cryptographic code. This can lead to silent failures if RNG generation fails.

- coef, _ := rand.Int(rand.Reader, &curve.Order)
+ coef, err := rand.Int(rand.Reader, &curve.Order)
+ if err != nil {
+     return // or handle the error properly
+ }
gnark/dkg/client/client.go (1)

53-56: 🛠️ Refactor suggestion

Consider returning an error instead of panicking.
For better stability and library friendliness, it’s often safer to return an error rather than panicking when generating a key pair fails, especially if the client is meant to be embedded in other systems.

 func NewClient() *Client {
 	identity, err := age.GenerateX25519Identity()
 	if err != nil {
-		panic(fmt.Sprintf("Failed to generate age key pair: %v", err))
+		// Return nil or a typed error, depending on your needs
+		fmt.Printf("Failed to generate age key pair: %v\n", err)
+		return nil
 	}
 	...
 }
🧹 Nitpick comments (7)
gnark/circuits/toprf/testdata.go (2)

44-46: Consider naming consistency for the new slices.

respsIn and sharePublicKeysIn are similarly structured arrays of *tbn254.PointAffine. Adopting consistent or more descriptive naming can improve clarity, e.g., evaluatedPoints / publicKeysBN254, to avoid confusion with resps and sharePublicKeys.


72-73: Remove or clarify commented-out code.

These commented-out lines (“pk := utils.TOPRFThresholdMul(...)”) suggest partial debugging or leftover scaffolding. Either remove them if no longer needed or include a clear explanation for leaving them commented out.

gnark/utils/toprf.go (1)

235-238: Consider returning an error instead of panic on random generation failure.

Panic can be disruptive in a library context, making graceful error handling impossible for callers.

gnark/utils/dkg.go (1)

27-42: Avoid panics in NewDKG.

Converting nodeID to int and panicking on error forces the caller to handle abrupt exits. Consider returning an error for more flexibility.

gnark/dkg/client/client.go (3)

116-135: Potential risk of silent failure in poll.
The polling logic retries on any error and may silently mask consistent request failures until the timeout. This might be acceptable for short-term resiliency, but consider logging all error messages more distinctly or using exponential backoff.


302-324: Validate parsing of big.Int shares.
When parsing shares via SetString, consider trimming whitespace or performing stricter validations to reduce errors if the share contains unexpected formatting.

- secret, ok := new(big.Int).SetString(decrypted.String(), 10)
+ rawShare := strings.TrimSpace(decrypted.String())
+ secret, ok := new(big.Int).SetString(rawShare, 10)
  if !ok {
      fmt.Printf("%s: failed to parse decrypted share from %s: %s\n", c.NodeID, fromNodeID, rawShare)
      continue
  }

445-490: Add HTTP client timeouts or contexts in main.
Long-running or stuck requests can block the DKG process. Setting a Timeout in http.Client or using context with cancellation ensures the program can recover gracefully.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 98a4544 and b178597.

📒 Files selected for processing (6)
  • gnark/circuits/toprf/testdata.go (3 hunks)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/libraries/prover/impl/provers.go (8 hunks)
  • gnark/libraries/verifier/impl/verifiers.go (6 hunks)
  • gnark/utils/dkg.go (1 hunks)
  • gnark/utils/toprf.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • gnark/libraries/prover/impl/provers.go
  • gnark/libraries/verifier/impl/verifiers.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (16)
gnark/circuits/toprf/testdata.go (5)

36-36: Looks good for share generation call.

Calling CreateLocalSharesDKG(nodes, threshold) and immediately handling errors via panic(err) is consistent with the rest of the code. No issues found.


62-64: Implementation appears correct.

Storing the evaluated points and public keys in the respective slices is fine, and the error from OPRFEvaluate is handled properly.


66-66: Double-check zero-denominator cases in LagrangeCoefficient.

This usage of LagrangeCoefficient(...) can encounter a zero denominator if share IDs collide or certain values coincide. This was raised in a previous review.


75-75: Finalize call usage is fine.

No issues observed. The error is properly checked and handled with a panic.


84-85: Addition of Counter and X fields looks good.

Storing these fields in Params ensures they’re accessible downstream. No concerns.

gnark/utils/toprf.go (2)

5-9: New imports look fine.

Importing "fmt" and "strconv" is consistent with the added print statements and integer conversions. No issues found.


201-201: Lagrange coefficient usage repeats past concern.

Using LagrangeCoefficient can pose zero-denominator risks if indices overlap. Please ensure the code cannot trigger invalid inversions.

gnark/utils/dkg.go (8)

1-26: No major issues in the DKG struct definition.

Ensure the Secret field is clearly distinguished from the master private key if they differ. Otherwise, looks good.


56-68: Share generation logic is straightforward.

No major concerns; the polynomial evaluation is standard. The iteration properly excludes the current node’s own ID.


70-78: Horner’s method implementation is correct.

Polynomial evaluation modulo the curve order looks correct. Be mindful of empty polynomial slices, though caller constraints appear to prevent that case.


80-113: Verification flow looks correct.

The loop for i := 0; i < d.Threshold; i++ ensures each coefficient is applied in the polynomial-based check. The change from <= to < is correct for threshold-based polynomial degrees.


138-169: Public key reconstruction logic is sound.

No issues. The usage of Lagrange interpolation for partial public keys is standard. The indexing logic also looks correct.


171-190: Reconstructing private key is handled appropriately.

The approach using Lagrange interpolation matches the threshold scheme. No major concerns here.


192-208: Revisit zero-denominator check in LagrangeCoefficient.

Similar concern from past reviews—if any two indices match, denominator goes zero. Confirm your usage ensures no duplicates or collisions.


210-217: Commitment marshaling logic is fine.

Serializing public commitments with json.Marshal is straightforward. No issues found.

gnark/dkg/client/client.go (1)

324-324: Validate threshold logic for share collection.
Requiring (c.DKG.NumNodes - 1) shares may be stricter than necessary when the threshold is lower than the number of other nodes. If partial shares suffice, consider adjusting the condition so it aligns with the actual threshold needed.

Would you like a helper function to confirm the correct number of shares before proceeding with verification?

Comment on lines 57 to 74
func reconstructPrivateKey(shares []*Share, T int) *big.Int {
if len(shares) < T {
return nil
}
indices := make([]int, T)
for i, share := range shares[:T] {
indices[i] = share.Index
}
result := new(big.Int)
for _, share := range shares[:T] {
lambda := LagrangeCoefficient(share.Index, indices)
term := new(big.Int).Mul(share.PrivateKey, lambda)
term.Mod(term, &curve.Order)
result.Add(result, term)
result.Mod(result, &curve.Order)
}
return result
}

🛠️ Refactor suggestion

Improve error handling when len(shares) < T.

Returning nil instead of an error can lead to unexpected nil pointer usage. Consider returning an error or panicking to make the failure mode explicit in cryptographic code.

Comment on lines 76 to 93
func reconstructPublicKey(shares []*Share, T int) *twistededwards.PointAffine {
if len(shares) < T {
return nil
}
indices := make([]int, T)
for i, share := range shares[:T] {
indices[i] = share.Index
}
result := new(twistededwards.PointAffine)
result.X.SetZero()
result.Y.SetOne()
for _, share := range shares[:T] {
lambda := LagrangeCoefficient(share.Index, indices)
term := new(twistededwards.PointAffine).ScalarMultiplication(share.PublicKey, lambda)
result.Add(result, term)
}
return result
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🛠️ Refactor suggestion

Same concern about returning nil for insufficient shares.

Similar to reconstructPrivateKey, returning nil in reconstructPublicKey can cause unexpected runtime errors. It’s safer to return an error or handle it explicitly.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
gnark/utils/toprf.go (2)

76-94: 🛠️ Refactor suggestion

Similar caution for public key reconstruction.
Returning nil if len(shares) < T may cause unexpected runtime behavior. An error return or panic might be safer here.

 func reconstructPublicKey(shares []*Share, T int) *twistededwards.PointAffine {
     if len(shares) < T {
-        return nil
+        panic("insufficient shares to reconstruct public key")
     }
     ...
 }

57-74: 🛠️ Refactor suggestion

Handle edge conditions when reconstructing private keys.
The function returns nil if len(shares) < T; consider returning an error instead or explicitly panicking to avoid silent nil usage. Also confirm Lagrange errors aren’t ignored.

 func reconstructPrivateKey(shares []*Share, T int) *big.Int {
     if len(shares) < T {
-        return nil
+        panic("insufficient shares to reconstruct private key")
     }
     ...
 }
gnark/utils/dkg.go (1)

44-54: ⚠️ Potential issue

GeneratePolynomials method
Generating random coefficients and their public commitments is correct. However, ignoring the error from rand.Int (line 48) can hide entropy failures in the RNG.

coef, randErr := rand.Int(rand.Reader, &curve.Order)
if randErr != nil {
    panic(fmt.Errorf("failed to generate random coefficient: %w", randErr))
}
🧹 Nitpick comments (7)
gnark/circuits/toprf/testdata.go (2)

62-64: Validate that resp.EvaluatedPoint and shares[idx].PublicKey are not nil.
Storing them directly in respsIn and sharePublicKeysIn assumes they are non-nil. Consider defensive checks to avoid runtime panics if an upstream function fails.


72-73: Remove or justify commented-out code.
These lines may be vestigial debug statements. If they're not planned for future usage, removing them helps maintain cleaner code.

gnark/utils/toprf.go (1)

201-204: Check for errors in LagrangeCoefficient call.
lambda, _ := LagrangeCoefficient(...) ignores possible errors. Exposing errors may help debug interpolation failures.

gnark/utils/dkg.go (4)

27-42: NewDKG function creation
Initialization is straightforward. The panic on invalid nodeID ensures consistent usage, but returning an error might be more library-friendly.


80-113: VerifyShares

  1. Good approach verifying each received share with the public commits.
  2. Strict equality check ensures correctness but consider returning partial errors or continuing verification for other shares.

115-137: ComputeFinalKeys
Adding the received shares to the evaluation of the node's own polynomial is correct. Logging the final secret at line 135 is a security concern unless it is strictly for debugging.


265-271: MarshalCommitments
Marshaling public commits into JSON is straightforward. Confirm that no private data is being serialized inadvertently.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between b178597 and 81775ed.

📒 Files selected for processing (5)
  • gnark/circuits/toprf/testdata.go (3 hunks)
  • gnark/libraries/prover/impl/provers.go (8 hunks)
  • gnark/libraries/verifier/impl/verifiers.go (6 hunks)
  • gnark/utils/dkg.go (1 hunks)
  • gnark/utils/toprf.go (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • gnark/libraries/prover/impl/provers.go
  • gnark/libraries/verifier/impl/verifiers.go
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: test-circom-circuits (22, circom)
  • GitHub Check: test-gnark-circuits (stable)
  • GitHub Check: test-js-lib (22, js)
🔇 Additional comments (11)
gnark/circuits/toprf/testdata.go (3)

44-47: Initialization of slices looks fine.
These slices correctly store data for threshold sharing without apparent logic errors.


75-78: Error handling appears consistent.
Upon failure, the panic ensures early termination. No further changes needed here.


84-85: Double-check usage of Counter and X.
These new fields are now retained in Params. If they’re cryptography-sensitive, confirm that their usage and range are properly validated in upstream or downstream code.

gnark/utils/toprf.go (2)

5-5: Imports look consistent with usage.
The addition of fmt and strconv seems justified by new logging statements and string-to-int conversions. No issues found.

Also applies to: 9-9


95-193: Logging master private key is a security risk.
Printing the private key (line 183) can leak secrets. Consider removing or masking logs in production.

- fmt.Printf("\nReconstructed Master Private Key: %s\n", masterPrivate.String())
+ // Please remove or mask this print to prevent key exposure.
gnark/utils/dkg.go (6)

14-25: Struct design is clear but consider more explicit naming.
Properties like Nodes and ID are well-defined for a DKG. Confirm that ID is unique across nodes.


56-68: GenerateShares method
Avoiding share generation for the current node is correct. Make sure each node eventually obtains its own share to form a final secret if needed.


70-78: EvaluatePolynomial
Uses Horner’s method, which is efficient. Properly modded by curve order. Implementation is correct.


138-169: ReconstructMasterPublicKey
Implementation uses Lagrange interpolation with partial usage of available shares. This approach is valid. Confirm that ignoring the rest of the shares after reaching Threshold is intended.


171-190: ReconstructPrivateKey approach
Same partial usage of up to Threshold shares. Ensure no duplication of shares is present.


192-263: LagrangeCoefficient
Robust zero-denominator and duplicate-index checks. Returning an error rather than ignoring these edge cases is good practice. Implementation is thorough.

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 13, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 13, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 13, 2025
@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 13, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

♻️ Duplicate comments (6)
gnark/utils/toprf.go (2)

57-74: 🛠️ Refactor suggestion

Improve error handling in reconstructPrivateKey

The function returns nil when there are insufficient shares instead of returning an error or panicking. This can lead to unexpected nil pointer dereferences if callers don't check the return value.

func reconstructPrivateKey(shares []*Share, T int) *big.Int {
	if len(shares) < T {
-		return nil
+		panic(fmt.Errorf("insufficient shares: got %d, need at least %d", len(shares), T))
	}
	indices := make([]int, T)
	for i, share := range shares[:T] {
		indices[i] = share.Index
	}
	result := new(big.Int)
	for _, share := range shares[:T] {
		lambda, _ := LagrangeCoefficient(share.Index, indices)
		term := new(big.Int).Mul(share.PrivateKey, lambda)
		term.Mod(term, &curve.Order)
		result.Add(result, term)
		result.Mod(result, &curve.Order)
	}
	return result
}

76-93: 🛠️ Refactor suggestion

Improve error handling in reconstructPublicKey

Similar to reconstructPrivateKey, this function returns nil instead of an error when there are insufficient shares, which can lead to nil pointer dereferences.

func reconstructPublicKey(shares []*Share, T int) *twistededwards.PointAffine {
	if len(shares) < T {
-		return nil
+		panic(fmt.Errorf("insufficient shares: got %d, need at least %d", len(shares), T))
	}
	indices := make([]int, T)
	for i, share := range shares[:T] {
		indices[i] = share.Index
	}
	result := new(twistededwards.PointAffine)
	result.X.SetZero()
	result.Y.SetOne()
	for _, share := range shares[:T] {
		lambda, _ := LagrangeCoefficient(share.Index, indices)
		term := new(twistededwards.PointAffine).ScalarMultiplication(share.PublicKey, lambda)
		result.Add(result, term)
	}
	return result
}
gnark/utils/dkg.go (4)

115-136: ⚠️ Potential issue

Avoid logging secrets in production code.
Line 135 directly logs the final secret, posing a security risk. Secrets must not be exposed in console or log output.

-fmt.Printf("Node %d: Included own share %s, final secret %s\n", d.ID, ownShare.String(), d.Secret.String())
+// Remove or mask the secret.
+fmt.Printf("Node %d: Included own share successfully.\n", d.ID)

56-68: 🛠️ Refactor suggestion

Consider returning an error instead of panicking.
Similar to NewDKG, GenerateShares uses panic(err) for invalid node IDs. If GenerateShares is part of a library, providing an error return helps callers handle unexpected input more gracefully.

-if err != nil {
-    panic(err)
+if err != nil {
+    return // or return an error if changing the function signature
 }

29-42: 🛠️ Refactor suggestion

Refactor to avoid panicking in library code.
Panicking on unexpected input (e.g., strconv.Atoi(nodeID)) prevents graceful error handling in upstream callers. It's safer to return an error and let the caller handle it.

Here’s an illustrative patch:

-func NewDKG(threshold, numNodes int, nodes []string, nodeID string) *DKG {
-   id, err := strconv.Atoi(nodeID)
-   if err != nil {
-       panic(err)
-   }
-   return &DKG{
-       Threshold:      threshold,
-       NumNodes:       numNodes,
-       Nodes:          nodes,
-       ID:             id,
-       Shares:         make(map[string]*big.Int),
-       ReceivedShares: make(map[string]*big.Int),
-   }
+func NewDKG(threshold, numNodes int, nodes []string, nodeID string) (*DKG, error) {
+   id, err := strconv.Atoi(nodeID)
+   if err != nil {
+       return nil, fmt.Errorf("invalid node ID %q: %w", nodeID, err)
+   }
+   return &DKG{
+       Threshold:      threshold,
+       NumNodes:       numNodes,
+       Nodes:          nodes,
+       ID:             id,
+       Shares:         make(map[string]*big.Int),
+       ReceivedShares: make(map[string]*big.Int),
+   }, nil
 }

44-54: ⚠️ Potential issue

Handle cryptographic errors from rand.Int.
Ignoring the returned error (coef, _ := rand.Int) can lead to insecure or uninitialized keys if rand.Int fails. Always check and handle the error.

-for i := 0; i < d.Threshold; i++ { // 0 to t-1
-    coef, _ := rand.Int(rand.Reader, &curve.Order)
-    d.Polynomial[i] = coef
-    var commit twistededwards.PointAffine
-    commit.ScalarMultiplication(&curve.Base, coef)
-    d.PublicCommits[i] = &commit
+for i := 0; i < d.Threshold; i++ {
+    coef, err := rand.Int(rand.Reader, &curve.Order)
+    if err != nil {
+        return // or panic, or return an error from this method
+    }
+    d.Polynomial[i] = coef
+    var commit twistededwards.PointAffine
+    commit.ScalarMultiplication(&curve.Base, coef)
+    d.PublicCommits[i] = &commit
 }
🧹 Nitpick comments (4)
gnark/circuits/toprf/testdata.go (1)

72-74: Consider removing commented code

These commented debug lines are no longer needed and could be removed to improve code clarity.

-	// pk := utils.TOPRFThresholdMul(idxs, sharePublicKeysIn)
-	// fmt.Println("master public key X:", pk.X.String())
gnark/utils/toprf.go (1)

95-193: Well-implemented DKG functionality with verification

The CreateLocalSharesDKG function properly implements a Distributed Key Generation protocol with:

  1. Good parameter validation
  2. Proper simulation of multiple nodes with polynomial generation
  3. Share generation and distribution
  4. Final share computation
  5. Verification of reconstruction

There are a few minor improvements that could be made:

  1. Add documentation explaining the function's purpose and parameters
  2. Remove or gate the debug print on line 187 behind a debug flag
  3. Consider handling potential errors from LagrangeCoefficient calls

Overall, this is a solid implementation of DKG.

+// CreateLocalSharesDKG implements a Distributed Key Generation (DKG) protocol.
+// It creates N shares with threshold T, ensuring that any T shares can reconstruct
+// the master private/public key pair. Returns the generated shares, the master
+// public key, and any error encountered during the process.
 func CreateLocalSharesDKG(N, T int) ([]*Share, *twistededwards.PointAffine, error) {
-	fmt.Println("Verification successful: Reconstructed keys match!")
+	// Verification successful: Reconstructed keys match!
gnark/utils/dkg.go (2)

80-113: Use structured logging or debug flags instead of printing.
In cryptographic code, printing informational messages (e.g., “Verified share…”) might clutter logs or expose operational details. Consider using a controlled logging mechanism or a debug flag.


192-266: Unify error handling rather than mixing panics and returns.
In LagrangeCoefficient, line 205 calls panic("indices cannot be zero"), whereas other conditions return errors. Consistency in error handling (e.g., returning an error) simplifies code and usage patterns.
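An error-based version of the coefficient computation might look as follows — a minimal sketch over an illustrative small prime standing in for `curve.Order`, with every invalid input returned as an error rather than a panic:

```go
package main

import (
	"fmt"
	"math/big"
)

var order = big.NewInt(7919) // illustrative prime, not the real curve order

// LagrangeCoefficient computes λ_i at x = 0 over the given share indices,
// returning errors (never panicking) for empty, zero, or duplicate indices.
func LagrangeCoefficient(i int, indices []int) (*big.Int, error) {
	if len(indices) == 0 {
		return nil, fmt.Errorf("indices cannot be empty")
	}
	seen := make(map[int]bool)
	for _, idx := range indices {
		if idx == 0 {
			return nil, fmt.Errorf("index 0 is not allowed")
		}
		if seen[idx] {
			return nil, fmt.Errorf("duplicate index %d", idx)
		}
		seen[idx] = true
	}
	num, den := big.NewInt(1), big.NewInt(1)
	for _, j := range indices {
		if j == i {
			continue
		}
		num.Mul(num, big.NewInt(int64(-j)))
		num.Mod(num, order)
		den.Mul(den, big.NewInt(int64(i-j)))
		den.Mod(den, order)
	}
	inv := new(big.Int).ModInverse(den, order)
	if inv == nil {
		return nil, fmt.Errorf("denominator not invertible mod order")
	}
	lambda := new(big.Int).Mul(num, inv)
	return lambda.Mod(lambda, order), nil
}

func main() {
	l, err := LagrangeCoefficient(1, []int{1, 2, 3})
	fmt.Println(l, err) // λ_1 for {1, 2, 3} at x = 0 is 3
	_, err = LagrangeCoefficient(1, []int{1, 1, 3})
	fmt.Println(err) // duplicate index 1
}
```

With this shape, every failure mode — including the zero-index case that currently panics — flows through the same error return, so callers handle all invalid inputs uniformly.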

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81775ed and 0fc4f00.

📒 Files selected for processing (6)
  • gnark/circuits/toprf/testdata.go (3 hunks)
  • gnark/circuits/toprf/toprf.go (4 hunks)
  • gnark/libraries/core_test.go (4 hunks)
  • gnark/utils/dkg.go (1 hunks)
  • gnark/utils/toprf.go (4 hunks)
  • gnark/utils/utils_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • gnark/libraries/core_test.go
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: test-gnark-circuits (stable)
🔇 Additional comments (10)
gnark/utils/utils_test.go (2)

39-40: OPRF expected output has been updated

The expected output value has been changed in the OPRF test, which suggests a modification in the OPRF algorithm or its parameters. Ensure that this change is intentional and aligns with the new HashToCurve implementation mentioned in the PR description.


42-70: Great addition of DKG test coverage

This new test function properly validates the Distributed Key Generation functionality by:

  1. Creating local shares with a threshold setup (10 nodes, 5 threshold)
  2. Testing multiple evaluations (200 iterations) with different random selections of nodes
  3. Verifying consistent output across all iterations

The test provides good coverage of the new DKG functionality.

gnark/circuits/toprf/testdata.go (4)

36-40: Adapted to use DKG for test data generation

The function now uses the new DKG-based approach to generate shares instead of directly creating a server key. This change aligns with the distributed trust model introduced in this PR.


44-47: Added new arrays for DKG compatibility

The addition of respsIn and sharePublicKeysIn arrays properly supports the new DKG implementation requirements.


75-78: Updated finalization to match DKG approach

The function now correctly calls TOPRFFinalize with indexes, response points, secret elements, and mask, which aligns with the DKG implementation.


80-86: Added new counter and X fields to Params

The addition of the Counter and X fields to the Params struct aligns with changes in the TOPRF implementation, ensuring proper data flow between components.

gnark/utils/toprf.go (1)

237-241: Improved error handling in TOPRFGenerateSharedKey

The function now correctly handles errors from random number generation by panicking with the specific error instead of silently ignoring it.

gnark/circuits/toprf/toprf.go (2)

27-29: Added Counter and X fields to Params struct

These new fields are required for the updated hashToCurve implementation and align with the changes in the test data generation. They store the counter used in the hash-to-curve operation and the original X value.


15-15:

❓ Verification inconclusive

Changed threshold from 1 to 2

The threshold constant has been increased from 1 to 2, which affects the size of arrays in the Params struct. This change impacts the circuit size and might affect compatibility with existing code using this package.


🏁 Script executed:

#!/bin/bash
# Find any code that might depend on the previous threshold value
rg -A 3 -B 3 "Threshold.*=.*1" --glob "!gnark/circuits/toprf/toprf.go"

Length of output: 72


Threshold update requires manual verification

The threshold constant in gnark/circuits/toprf/toprf.go has been updated from 1 to 2, which changes the array size within the Params struct. An automated search outside this file did not yield any usages of the previous value (1), but the output was inconclusive. Please manually verify that no dependent code relies on the threshold being 1, ensuring that the updated array size does not adversely affect circuit compatibility.

  • Verify that no external code expects an array size corresponding to a threshold value of 1.
  • Confirm that the change to const Threshold = 2 does not break compatibility with usages of the package.
gnark/utils/dkg.go (1)

268-275: Looks good to me.
No issues found in the marshaling logic. Returning the JSON-encoded commitments is straightforward and correct.

Comment on lines +176 to 210
func hashToPoint(api frontend.API, curve twistededwards.Curve, data [2]frontend.Variable, domainSeparator, counter, xOrig frontend.Variable) (*twistededwards.Point, error) {
api.AssertIsLessOrEqual(counter, 255) // hash counter is 0..255

d := curve.Params().D

hField, err := mimc.NewMiMC(api)
if err != nil {
return nil, err
}
hField.Write(data[0])
hField.Write(data[1])
hField.Write(domainSeparator)
hashedSecretData := hField.Sum()
hField.Write(counter)
y := hField.Sum() // original Y is data hash
hField.Reset()

basePoint := twistededwards.Point{
X: curve.Params().Base[0],
Y: curve.Params().Base[1],
}
dataPoint := curve.ScalarMul(basePoint, hashedSecretData)
return &dataPoint, nil
// calculate X
y2 := api.Mul(y, y)
num := api.Sub(1, y2)
denom := api.Mul(d.String(), y2)
denom = api.Add(denom, 1)
denom = api.Neg(denom)
x2 := api.Div(num, denom)
api.AssertIsEqual(x2, api.Mul(xOrig, xOrig)) // check calculated X^2 against passed original X

// clear cofactor by p*8
point := twistededwards.Point{X: xOrig, Y: y} // original point
point = curve.Double(point) // p2
point = curve.Double(point) // p4
point = curve.Double(point) // p8

curve.AssertIsOnCurve(point)

return &point, nil
}

⚠️ Potential issue

Modified hashToPoint with updated curve point calculation

The function now:

  1. Includes counter and original X value as parameters
  2. Validates the counter is within range (≤ 255)
  3. Calculates a point on the curve based on the input data and counter
  4. Verifies the calculated X² matches the provided X value
  5. Clears the cofactor by multiplying by 8

The implementation checks that the calculation matches the provided X value, but there's a potential issue:

The denominator calculation on line 197-198 doesn't verify that denom is non-zero before division. While this is unlikely in cryptographic contexts, it's good practice to validate this to prevent potential runtime errors.

 	denom = api.Add(denom, 1)
 	denom = api.Neg(denom)
+	api.AssertIsDifferent(denom, 0) // Ensure denominator is non-zero
 	x2 := api.Div(num, denom)
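For reference, the x-recovery step in the diff follows from the twisted Edwards curve equation, assuming the $a = -1$ form (an assumption about the embedded curve's parameters, consistent with the sign of `denom` in the code):

```latex
a x^2 + y^2 = 1 + d x^2 y^2
\;\Longrightarrow\; x^2 \left(a - d y^2\right) = 1 - y^2
\;\Longrightarrow\; x^2 = \frac{1 - y^2}{a - d y^2}
\;\overset{a = -1}{=}\; \frac{1 - y^2}{-\left(d y^2 + 1\right)}
```

which matches `num = 1 - y²` and `denom = -(d·y² + 1)` exactly; the denominator vanishes only when $d y^2 = -1$, which the suggested `AssertIsDifferent` guard rules out in-circuit.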
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func hashToPoint(api frontend.API, curve twistededwards.Curve, data [2]frontend.Variable, domainSeparator, counter, xOrig frontend.Variable) (*twistededwards.Point, error) {
api.AssertIsLessOrEqual(counter, 255) // hash counter is 0..255
d := curve.Params().D
hField, err := mimc.NewMiMC(api)
if err != nil {
return nil, err
}
hField.Write(data[0])
hField.Write(data[1])
hField.Write(domainSeparator)
hashedSecretData := hField.Sum()
hField.Write(counter)
y := hField.Sum() // original Y is data hash
hField.Reset()
basePoint := twistededwards.Point{
X: curve.Params().Base[0],
Y: curve.Params().Base[1],
}
dataPoint := curve.ScalarMul(basePoint, hashedSecretData)
return &dataPoint, nil
// calculate X
y2 := api.Mul(y, y)
num := api.Sub(1, y2)
denom := api.Mul(d.String(), y2)
denom = api.Add(denom, 1)
denom = api.Neg(denom)
x2 := api.Div(num, denom)
api.AssertIsEqual(x2, api.Mul(xOrig, xOrig)) // check calculated X^2 against passed original X
// clear cofactor by p*8
point := twistededwards.Point{X: xOrig, Y: y} // original point
point = curve.Double(point) // p2
point = curve.Double(point) // p4
point = curve.Double(point) // p8
curve.AssertIsOnCurve(point)
return &point, nil
}
func hashToPoint(api frontend.API, curve twistededwards.Curve, data [2]frontend.Variable, domainSeparator, counter, xOrig frontend.Variable) (*twistededwards.Point, error) {
api.AssertIsLessOrEqual(counter, 255) // hash counter is 0..255
d := curve.Params().D
hField, err := mimc.NewMiMC(api)
if err != nil {
return nil, err
}
hField.Write(data[0])
hField.Write(data[1])
hField.Write(domainSeparator)
hField.Write(counter)
y := hField.Sum() // original Y is data hash
hField.Reset()
// calculate X
y2 := api.Mul(y, y)
num := api.Sub(1, y2)
denom := api.Mul(d.String(), y2)
denom = api.Add(denom, 1)
denom = api.Neg(denom)
api.AssertIsDifferent(denom, 0) // Ensure denominator is non-zero
x2 := api.Div(num, denom)
api.AssertIsEqual(x2, api.Mul(xOrig, xOrig)) // check calculated X^2 against passed original X
// clear cofactor by p*8
point := twistededwards.Point{X: xOrig, Y: y} // original point
point = curve.Double(point) // p2
point = curve.Double(point) // p4
point = curve.Double(point) // p8
curve.AssertIsOnCurve(point)
return &point, nil
}

Comment on lines +70 to +78
func EvaluatePolynomial(polynomial []*big.Int, x *big.Int) *big.Int {
result := new(big.Int).Set(polynomial[len(polynomial)-1])
for i := len(polynomial) - 2; i >= 0; i-- {
result.Mul(result, x)
result.Add(result, polynomial[i])
result.Mod(result, &curve.Order)
}
return result
}

💡 Verification agent

🧩 Analysis chain

Ensure the polynomial is non-empty before accessing the last element.
If d.Polynomial is ever empty, accessing polynomial[len(polynomial)-1] causes a panic. Verify it’s safe or ensure the threshold is never zero.


🏁 Script executed:

#!/bin/bash
# Check usage of EvaluatePolynomial to ensure polynomial is never empty
rg -A 5 'EvaluatePolynomial'

Length of output: 1597


Ensure Non-Empty Polynomial in EvaluatePolynomial

The function directly accesses the last element of the polynomial slice without checking if the slice is non-empty. If an empty polynomial is passed, this will cause a panic. Please address this by either:

  • Adding a Guard: Insert a check at the beginning of EvaluatePolynomial (e.g., return early or raise an error if len(polynomial) == 0).

  • Documenting Invariants: Clearly document and enforce the invariant that any polynomial passed to this function (in all call sites, such as in gnark/utils/dkg.go and gnark/utils/toprf.go) must be non-empty because the threshold is guaranteed to be non-zero.

Review and update the code accordingly to ensure safe handling of this edge case.

Comment on lines +138 to +169
func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) *twistededwards.PointAffine {
result := new(twistededwards.PointAffine)
result.X.SetZero()
result.Y.SetOne()
points := make(map[int]*twistededwards.PointAffine)
var indices []int
used := 0
var ids []int
for nodeID := range publicShares {
ids = append(ids, nodeID)
}
sort.Ints(ids)
for _, id := range ids {
if used >= d.Threshold {
break
}
pubShare := &twistededwards.PointAffine{}
err := pubShare.Unmarshal(publicShares[id])
if err != nil {
return nil
}
points[id] = pubShare
indices = append(indices, id)
used++
}
for j := range points {
lambda, _ := LagrangeCoefficient(j, indices)
term := new(twistededwards.PointAffine).ScalarMultiplication(points[j], lambda)
result.Add(result, term)
}
return result
}

🛠️ Refactor suggestion

Return an error instead of nil for unmarshal failures.
In ReconstructMasterPublicKey, errors from pubShare.Unmarshal result in return nil. This loses details about the error state and can confuse callers.

-func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) *twistededwards.PointAffine {
+func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) (*twistededwards.PointAffine, error) {
    ...
    err := pubShare.Unmarshal(publicShares[id])
    if err != nil {
-       return nil
+       return nil, fmt.Errorf("failed to unmarshal public share from node %d: %w", id, err)
    }
    ...
-   return result
+   return result, nil
}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) *twistededwards.PointAffine {
result := new(twistededwards.PointAffine)
result.X.SetZero()
result.Y.SetOne()
points := make(map[int]*twistededwards.PointAffine)
var indices []int
used := 0
var ids []int
for nodeID := range publicShares {
ids = append(ids, nodeID)
}
sort.Ints(ids)
for _, id := range ids {
if used >= d.Threshold {
break
}
pubShare := &twistededwards.PointAffine{}
err := pubShare.Unmarshal(publicShares[id])
if err != nil {
return nil
}
points[id] = pubShare
indices = append(indices, id)
used++
}
for j := range points {
lambda, _ := LagrangeCoefficient(j, indices)
term := new(twistededwards.PointAffine).ScalarMultiplication(points[j], lambda)
result.Add(result, term)
}
return result
}
func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) (*twistededwards.PointAffine, error) {
result := new(twistededwards.PointAffine)
result.X.SetZero()
result.Y.SetOne()
points := make(map[int]*twistededwards.PointAffine)
var indices []int
used := 0
var ids []int
for nodeID := range publicShares {
ids = append(ids, nodeID)
}
sort.Ints(ids)
for _, id := range ids {
if used >= d.Threshold {
break
}
pubShare := &twistededwards.PointAffine{}
err := pubShare.Unmarshal(publicShares[id])
if err != nil {
return nil, fmt.Errorf("failed to unmarshal public share from node %d: %w", id, err)
}
points[id] = pubShare
indices = append(indices, id)
used++
}
for j := range points {
lambda, _ := LagrangeCoefficient(j, indices)
term := new(twistededwards.PointAffine).ScalarMultiplication(points[j], lambda)
result.Add(result, term)
}
return result, nil
}

@reclaimprotocol reclaimprotocol deleted a comment from coderabbitai bot Mar 14, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (7)
gnark/dkg/server/server.go (4)

142-161: Harden node ID registration checks.
The code uses a global nodeIds map and server-level RegisteredNodes. Consider unifying or clarifying the difference between these structures to reduce confusion. Potentially, you could also check that node IDs conform to a certain pattern (e.g., UUID) before marking them valid.


213-244: Consider a more compact commitments POST format.
You’re currently JSON-unmarshaling into a [][]byte. It works, but for large thresholds, you might consider compression or a more compact representation (e.g., hex or base64 strings).


304-315: Distinguish self-share from unregistered node early.
Both checks produce errors, but you might prefer to handle the self-share scenario before verifying RegisteredNodes[toNodeID]. This is just a minor flow optimization.


340-345: Check success of public share submission.
Currently, if SubmitPublicShare receives partial data or the server discards it, the client only sees “success” or error. Consider returning more detail or requiring a minimal length to flag incomplete data.

gnark/dkg/client/client.go (3)

180-181: Allow zero-based index if legitimate.
The condition c.NodeIndex > 0 at line 180 might exclude a node with index 0. If node 0 is not valid, no change is needed. Otherwise, adjust the condition to >= 0.


250-287: Handle partial encryption failures distinctly.
When encrypting multiple shares, if one share fails, you immediately abort. Consider partial success handling or logging which shares succeeded and which failed if that scenario is important.


421-425: Recheck format of final Master Public Key.
You print raw X and Y coordinates at the end. If these need to be shared widely in a specific encoding (base64, hex, etc.), consider a consistent format.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0fc4f00 and 9c199de.

📒 Files selected for processing (3)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
  • gnark/utils/dkg.go (1 hunks)
🔇 Additional comments (8)
gnark/utils/dkg.go (4)

44-54: Handle potential errors from rand.Int to avoid silent failures.
This issue was previously flagged, so referencing it again: ignoring the error from rand.Int(rand.Reader, &curve.Order) might lead to weak or predictable randomness. Proper error handling is crucial in cryptographic code.

 func (d *DKG) GeneratePolynomials() {
     d.Polynomial = make([]*big.Int, d.Threshold)
     d.PublicCommits = make([]*twistededwards.PointAffine, d.Threshold)
     for i := 0; i < d.Threshold; i++ {
-        coef, _ := rand.Int(rand.Reader, &curve.Order)
+        coef, err := rand.Int(rand.Reader, &curve.Order)
+        if err != nil {
+            panic(fmt.Errorf("failed to generate coefficient: %w", err))
+        }
         d.Polynomial[i] = coef
         ...
     }
 }

70-78: Ensure the polynomial is non-empty in EvaluatePolynomial.
Directly referencing polynomial[len(polynomial)-1] can panic with an empty slice. Either document the invariant that len(polynomial) > 0 everywhere or guard against an empty slice. This was mentioned in a past review as well.


138-169: Return error rather than nil when unmarshal fails.
If Unmarshal fails at lines 155–158, returning just nil can obscure the error. Returning an error instead of nil allows clearer troubleshooting. This was previously suggested in older commits.

 func (d *DKG) ReconstructMasterPublicKey(publicShares map[int][]byte) *twistededwards.PointAffine {
     ...
     err := pubShare.Unmarshal(publicShares[id])
     if err != nil {
-        return nil
+        // return an error instead if possible:
+        // return nil, fmt.Errorf("failed to unmarshal public share: %w", err)
     }
     ...
 }

268-274: Confirm serialized commitments correctness.
This code chunk simply marshals the PublicCommits. If these commitments are later used for verification, you may want to confirm the serialization format is consistent across all participating nodes to prevent subtle byte ordering issues.

gnark/dkg/server/server.go (3)

103-121: Validate threshold and number of nodes in NewServer for safety.
You already check threshold <= numNodes and numNodes > 0. Consider adding stricter logic or returning a descriptive error if these conditions fail, ensuring no silent misconfigurations in production.


123-135: Registration middleware logic might be inverted.
Currently, if s.Ctx.Err() == nil, the code returns a 429 (note: "Too Early" is properly status 425; 429 means "Too Many Requests"). Ensure that's the intended behavior and that you truly mean to block requests until all nodes are registered (i.e., Ctx.Err() != nil) rather than the other way around.


88-101:

❓ Verification inconclusive

Ensure no concurrency race conditions with shared maps.
While you lock Server in multiple places, verify that all mutations to RegisteredNodes, Commitments, etc. consistently occur under the same lock to avoid data races.


🏁 Script executed:

#!/bin/bash
# We'll look for direct writes to maps in the Server struct
ast-grep --pattern $'$_SERVER.$_MAP[$_] = $_' 

Length of output: 3584


Action: Verify Consistent Locking for Shared Map Mutations in Server

The ast-grep output confirms that direct assignments to key maps occur at several points in the Server code:

  • Line 177: s.RegisteredNodes[nodeID] = true
  • Line 179: s.PublicKeys[nodeID] = req.PublicKey
  • Line 184: s.NodeIndices[nid] = i + 1 // 1-based indexing
  • Line 241: s.Commitments[nodeID] = commits
  • Line 318: s.Shares[FromNodeID] = make(map[string]*ShareData)
  • Line 366: s.PublicShares[nodeID] = req.PublicShare

Please verify that all these mutations are executed within a consistent locking scope (e.g., via the embedded mutex using s.Lock()/s.Unlock()). If any of these assignments occur outside of the protected critical sections, they may lead to concurrency race conditions and should be updated accordingly.
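The consistent-locking pattern the note asks for can be sketched as follows (the field names mirror the review's Server struct but are assumptions about the actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// Server guards every shared-map mutation with a single mutex, so no
// write to RegisteredNodes or PublicKeys can race with another.
type Server struct {
	mu              sync.Mutex
	RegisteredNodes map[string]bool
	PublicKeys      map[string][]byte
}

func (s *Server) Register(nodeID string, pubKey []byte) {
	s.mu.Lock()
	defer s.mu.Unlock() // all map writes happen under the same lock
	s.RegisteredNodes[nodeID] = true
	s.PublicKeys[nodeID] = pubKey
}

func main() {
	s := &Server{
		RegisteredNodes: make(map[string]bool),
		PublicKeys:      make(map[string][]byte),
	}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			s.Register(fmt.Sprintf("node-%d", i), []byte{byte(i)})
		}(i)
	}
	wg.Wait()
	fmt.Println(len(s.RegisteredNodes)) // 10
}
```

Running a sketch like this under `go run -race` is a quick way to confirm that every mutation site listed above is covered by the same critical section.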

gnark/dkg/client/client.go (1)

303-338: Ensure share decryption logs success or failure consistently.
Decryption errors are printed, but the code continues for other shares. If you must fail the entire DKG on a single decryption error, enforce that instead of continuing.

Comment on lines +14 to +25
type DKG struct {
Threshold int
NumNodes int
Nodes []string // ["1", "2", ...]
ID int // Numeric ID: 1, 2, ...
Polynomial []*big.Int
PublicCommits []*twistededwards.PointAffine
Shares map[string]*big.Int // Keyed by string ID
ReceivedShares map[string]*big.Int // Keyed by string ID
PublicKey *twistededwards.PointAffine
Secret *big.Int
}

🛠️ Refactor suggestion

Validate DKG configuration fields for edge cases.
Currently, there is no explicit check that ensures Threshold <= NumNodes or that Threshold > 0 for the DKG struct’s fields. Even if the caller should ensure it, you may want a safeguard in the constructor or an initialization method to detect misconfiguration early.

Comment on lines +29 to +33
func NewDKG(threshold, numNodes int, nodes []string, nodeID string) *DKG {
id, err := strconv.Atoi(nodeID)
if err != nil {
panic(err)
}

🛠️ Refactor suggestion

Avoid panicking when converting node IDs.
Instead of calling panic(err), returning an error or handling it more gracefully helps avoid abrupt terminations, especially if this code is reused as a library.

Comment on lines +197 to +211
// Step 1: Validate inputs
if len(indices) == 0 {
return nil, fmt.Errorf("indices cannot be empty")
}
// Check for duplicates in indices, as they would lead to a zero denominator
seen := make(map[int]bool)
for _, idx := range indices {
if idx == 0 {
panic("indices cannot be zero")
}
if seen[idx] {
return nil, fmt.Errorf("duplicate index %d in indices", idx)
}
seen[idx] = true
}

🛠️ Refactor suggestion

Consider returning errors for invalid indices instead of panicking.
Panics in library code limit graceful error handling by the caller. For example, line 205 calls panic("indices cannot be zero"). Shifting to an error-based approach improves reliability.

Comment on lines +407 to +410
if err := c.DKG.VerifyShares(commitments, strconv.Itoa(c.NodeIndex)); err != nil {
fmt.Printf("%s: Failed to verify shares: %v\n", c.NodeID, err)
return
}

🛠️ Refactor suggestion

Consider returning an error from VerifyShares.
The code prints a message and exits on error. For more sophisticated scenarios, you might prefer a higher-level error propagation rather than simply printing.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (22)
gnark/circuits/toprf/testdata.go (4)

36-36: Ensure robust error handling.
It's recommended to handle errors gracefully rather than panicking, to facilitate better fault-tolerance in production systems.


44-46: Consider checking for nil usage.
Using arrays of pointer references might create potential nil pointer usage later if not carefully handled or assigned. Confirm that each element is assigned properly before usage.


77-78: Remove commented-out code if unnecessary.
If these lines are no longer needed, consider removing them to reduce clutter.


79-81: Repetitive indexing conversion.
This code block duplicates the index conversion logic from lines 66-71. Consider refactoring to a helper function or reusing the earlier array to improve clarity.

gnark/utils/toprf.go (4)

5-9: Imports introduced for debugging and string conversion.
Only keep these if they are genuinely used after debugging to avoid unused import overhead.


95-193: Robust DKG share creation.
This function significantly enhances the code by simulating N nodes, generating polynomials, and distributing shares. Consider adding validation for repeated node IDs or polynomials. Also ensure logs or stats are generated for easier debugging since it deals with cryptographic operations.


195-195: Remove commented-out heading if no longer applicable.
If this heading is outdated, removing it can help keep the codebase clean.


202-205: Graceful error handling.
Panicking on Lagrange coefficient failure might be too aggressive for production. Consider returning an error to the caller.

gnark/dkg/server/server.go (6)

103-121: Validate server fields.
Adding a stronger parameter check or structured configuration might help, especially if there's a future need for advanced server options.


137-139: Minimal health endpoint.
Consider adding more extensive health checks (e.g., readiness checks or version info).

🧰 Tools
🪛 golangci-lint (1.62.2)

137-137: undefined: echo

(typecheck)


213-245: Ensure data integrity.
After unmarshalling commitments, verifying the size or structure might be beneficial to ensure no malformed data can cause cryptographic errors.


246-265: Potential performance overhead in repeated marshalling.
Storing the raw commitments instead of re-serializing them might be more efficient if the server must repeatedly serve them.


327-345: Large share set management.
Storing all shares in memory might become expensive at scale. Consider adding a limit or streaming approach for extremely large DKG configurations.


371-389: Uniform data structure.
The code uses a map from nodeID to []byte. Ensure the client can interpret the data reliably. Consider verifying minimal length or format of each []byte.

gnark/dkg/client/client.go (8)

34-37: Use command-line flags or config for host and node ID.
Globals can hamper testability; consider passing these as parameters or environment variables for easier local development and CI integration.


73-100: Error boundary.
Returning an error if resp.StatusCode != http.StatusOK is correct, but consider also capturing the server's response body for debugging.


123-143: Potential indefinite loop.
Add a maximum retry or fail-fast condition to avoid indefinite blocking if the server continuously returns invalid data.
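One way to bound the loop, sketched with illustrative names (the client's real poll helper takes a typed response callback, not this simplified check function):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollWithLimit retries check up to maxAttempts times with a fixed backoff,
// failing fast instead of blocking forever on a server that keeps returning
// invalid data.
func pollWithLimit(check func() (bool, error), maxAttempts int, backoff time.Duration) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ok, err := check()
		if err != nil {
			return fmt.Errorf("poll attempt %d: %w", attempt, err)
		}
		if ok {
			return nil
		}
		time.Sleep(backoff)
	}
	return errors.New("poll: max attempts exceeded")
}

func main() {
	calls := 0
	err := pollWithLimit(func() (bool, error) {
		calls++
		return calls == 3, nil // succeeds on the third attempt
	}, 5, time.Millisecond)
	fmt.Println(calls, err) // 3 <nil>
}
```

An exponential backoff or a context deadline would slot into the same shape.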


189-210: DKG polynomials generation.
Storing local commitments is good for verification, but consider logging or referencing them in debug mode only to avoid exposing them inadvertently.


249-287: Missing compression or chunking.
If a share is large, consider chunking or compressing the data before encryption to handle large messages.


303-338: Decryption error handling.
The function logs decryption failures but still attempts to proceed with other shares. Evaluate if a single failure should block the entire DKG or not.


385-432: No error returns from Run method.
This method logs errors and returns. Consider returning an aggregated error or status so the caller can determine success or failure in an automated environment.


454-509: Allow multiple node IDs.
Currently, the code sets numNodes := 1. Expand support for launching multiple clients by passing a dynamic parameter.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9c199de and cf0a965.

📒 Files selected for processing (6)
  • gnark/circuits/toprf/testdata.go (3 hunks)
  • gnark/dkg/client/client.go (1 hunks)
  • gnark/dkg/server/server.go (1 hunks)
  • gnark/libraries/core_test.go (9 hunks)
  • gnark/utils/toprf.go (4 hunks)
  • gnark/utils/utils_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • gnark/utils/utils_test.go
  • gnark/libraries/core_test.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
gnark/dkg/server/server.go

123-123: undefined: echo

(typecheck)


137-137: undefined: echo

(typecheck)


141-141: undefined: echo

(typecheck)

🔇 Additional comments (20)
gnark/circuits/toprf/testdata.go (6)

41-41: Verify uniform distribution of random indices.
Since this function heavily influences the test scenario, ensure PickRandomIndices truly provides uniform distribution over the index space.


62-62: LGTM!
Storing the evaluated point directly into the array is consistent with the usage.


64-64: Double-check the index usage.
Ensure that idx is always within the bounds of the shares slice to prevent out-of-range errors.


66-71: Validate 1-based indexing for Lagrange interpolation.
Ensuring the indexes are consistently converted to 1-based is critical for correct polynomial reconstruction. Double-check all callsites to avoid off-by-one errors.


83-83: Validate final output.
Ensure the returned out from TOPRFFinalize is used consistently and tested to confirm correctness of the final OPRF result.


92-93: Good addition for detailed request tracking.
Inclusion of Counter and X fields aligns with the new DKG approach. Ensure that the fields are used consistently in subsequent logic and tests.

gnark/utils/toprf.go (1)

207-207: LGTM!
Correct usage of ScalarMultiplication with the computed coefficient.

gnark/dkg/server/server.go (6)

1-21: Kudos for explicit imports and new main package.
It showcases a well-structured new server for DKG. Ensure echo usage is properly tested, as static analysis warns about possible "undefined: echo."


41-45: LGTM.
A base response structure clarifies standard fields for server responses.


88-101: Thread-safe usage.
sync.Mutex is used for concurrency, but ensure all writes to maps like RegisteredNodes, Shares are locked consistently to avoid data races.


196-211: Wait logic for node registration.
Returning a 425 status is a creative use of StatusTooEarly; make sure the client recognizes and handles it.


267-326: Detailed share validation.
The code thoroughly checks share validity. Evaluate concurrency aspects if multiple submissions occur simultaneously for the same FromNodeID.


391-463: Graceful shutdown logic.
The code is well structured for graceful termination. Ensure the DKG process stops cleanly if shutdown occurs during critical operations.

gnark/dkg/client/client.go (7)

1-21: Initialize client package with specialized imports.
Confirm that external packages are pinned to known stable versions to avoid cryptographic breakage.


38-43: Insightful typed response.
Generics help maintain consistent response handling, but confirm the build never needs to support Go versions older than 1.18, since generics require Go 1.18+.


102-121: LGTM for param usage.
The code is consistent with the post function.


145-187: Check node synchronization logic.
The poll ensures node synchronization, but if the server never returns 'success', the client can block indefinitely. Consider a time-based or attempt-based fallback.


212-247: Commitment verification.
This robust verification ensures local vs server commitments match. Great approach to detect tampering or mismatch.


289-301: Potential concurrency issues.
If IndexToUUID is used by multiple goroutines, ensure access is protected.
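A minimal sketch of guarding such a shared map with a sync.RWMutex; the type and method names here are illustrative, not the client's actual fields.

```go
package main

import (
	"fmt"
	"sync"
)

// indexMap wraps a map like IndexToUUID so concurrent readers and writers
// cannot race on it.
type indexMap struct {
	mu sync.RWMutex
	m  map[int]string
}

func newIndexMap() *indexMap {
	return &indexMap{m: make(map[int]string)}
}

func (im *indexMap) Set(i int, uuid string) {
	im.mu.Lock()
	defer im.mu.Unlock()
	im.m[i] = uuid
}

func (im *indexMap) Get(i int) (string, bool) {
	im.mu.RLock()
	defer im.mu.RUnlock()
	v, ok := im.m[i]
	return v, ok
}

func main() {
	im := newIndexMap()
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			im.Set(i, fmt.Sprintf("uuid-%d", i))
		}(i)
	}
	wg.Wait()
	v, ok := im.Get(3)
	fmt.Println(v, ok) // uuid-3 true
}
```

Running the test suite with `-race` would confirm the accessor methods are the only paths touching the map.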


340-350: LGTM.
Submits the public share with minimal overhead.

Comment on lines +57 to +74
func ReconstructPrivateKey(shares []*Share, T int) *big.Int {
if len(shares) < T {
return nil
}
indices := make([]int, T)
for i, share := range shares[:T] {
indices[i] = share.Index
}
result := new(big.Int)
for _, share := range shares[:T] {
lambda, _ := LagrangeCoefficient(share.Index, indices)
term := new(big.Int).Mul(share.PrivateKey, lambda)
term.Mod(term, &curve.Order)
result.Add(result, term)
result.Mod(result, &curve.Order)
}
return result
}

🛠️ Refactor suggestion

Check for nil returns and handle potential zero denominators.
Currently, the function returns nil if there are fewer shares than threshold, which can lead to nil pointer dereferences downstream. Consider returning an error instead. Also, ensure the internal call to LagrangeCoefficient handles zero denominators gracefully.

@@ -153,7 +274,8 @@
 	return i.Uint64()
 }
 
-func PickRandomIndexes(n, k int) []int {
+// PickRandomIndices produces 0-based randomized array
+func PickRandomIndices(n, k int) []int {

🛠️ Refactor suggestion

Use cryptographically secure RNG if needed.
This function uses math/rand/v2, which might not be cryptographically secure. Consider using crypto/rand if security is crucial for random index selection.
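A sketch of the crypto/rand-backed variant: a partial Fisher–Yates shuffle that draws from crypto/rand instead of math/rand/v2. The function name is hypothetical; only the RNG source changes relative to the reviewed code.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// pickRandomIndicesSecure returns k distinct 0-based indices out of n,
// chosen with a CSPRNG via a partial Fisher–Yates shuffle.
func pickRandomIndicesSecure(n, k int) ([]int, error) {
	if k < 0 || k > n {
		return nil, fmt.Errorf("invalid k=%d for n=%d", k, n)
	}
	idx := make([]int, n)
	for i := range idx {
		idx[i] = i
	}
	for i := 0; i < k; i++ {
		// Pick j uniformly from [i, n); rand.Int rejects modulo bias.
		r, err := rand.Int(rand.Reader, big.NewInt(int64(n-i)))
		if err != nil {
			return nil, err
		}
		j := i + int(r.Int64())
		idx[i], idx[j] = idx[j], idx[i]
	}
	return idx[:k], nil
}

func main() {
	picked, err := pickRandomIndicesSecure(10, 3)
	fmt.Println(len(picked), err) // 3 <nil>
}
```

The error return is the price of crypto/rand, which can fail if the system entropy source is unavailable; math/rand/v2 never errors, which is why the original signature had none.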

Comment on lines +23 to +37
// Import shared types
type (
RegisterRequest = types.RegisterRequest
RegisterResponse = types.RegisterResponse
NodesResponse = types.NodesResponse
CommitmentData = types.CommitmentData
CommitmentsResponse = types.CommitmentsResponse
ShareData = types.ShareData
ShareBatchRequest = types.ShareBatchRequest
SharesResponse = types.SharesResponse
SharesValidationErrors = types.SharesValidationErrors
ValidationError = types.ValidationError
PublicShareData = types.PublicShareData
PublicSharesResponse = types.PublicSharesResponse
)

🛠️ Refactor suggestion

Consider removing global state.
var nodeIds = map[string]bool{} may produce data races if accessed by multiple goroutines. Consider moving it into the Server struct, where access is already guarded by the mutex.

Comment on lines +123 to +135
func ensureRegistrationComplete(s *Server) echo.MiddlewareFunc {
return func(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
s.Lock()
if s.Ctx.Err() == nil {
s.Unlock()
return c.JSON(http.StatusTooEarly, &BaseResponse{Status: "error", Message: "Registration not complete"})
}
s.Unlock()
return next(c)
}
}
}

⚠️ Potential issue

Potential concurrency pitfalls.
Locking and then immediately unlocking leaves a timing gap before next(c) runs. Reassess whether this ephemeral locking gives a consistent view of the registration-completion state.

🧰 Tools
🪛 golangci-lint (1.62.2)

123-123: undefined: echo

(typecheck)

Comment on lines +141 to +194
func (s *Server) register(c echo.Context) error {
req := &RegisterRequest{} // Use named struct from types
if err := c.Bind(req); err != nil {
log.Errorf("Failed to bind request: %v", err)
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Invalid request format"})
}

nodeID := c.Request().Header.Get("Node-ID")
if nodeID == "" {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Node-ID header is required"})
}

if _, ok := nodeIds[nodeID]; !ok {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Invalid Node-ID"})
}

if nodeIds[nodeID] {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Node-ID already registered"})
} else {
nodeIds[nodeID] = true
}

if req.PublicKey == "" {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Public key is required"})
}
if _, err := age.ParseX25519Recipient(req.PublicKey); err != nil {
log.Errorf("Invalid public key: %v", err)
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Invalid public key format"})
}

s.Lock()
defer s.Unlock()
if len(s.RegisteredNodes) >= s.NumNodes {
return c.JSON(http.StatusForbidden, &BaseResponse{Status: "error", Message: "Registration closed"})
}

s.RegisteredNodes[nodeID] = true
s.Nodes = append(s.Nodes, nodeID)
s.PublicKeys[nodeID] = req.PublicKey
log.Infof("Registered node %s with public key %s", nodeID, req.PublicKey)

if len(s.RegisteredNodes) == s.NumNodes {
for i, nid := range s.Nodes {
s.NodeIndices[nid] = i + 1 // 1-based indexing
}
s.Cancel()
log.Infof("All %d nodes registered, starting DKG", s.NumNodes)
}

return c.JSON(http.StatusOK, &RegisterResponseFull{
BaseResponse: BaseResponse{Status: "success"},
Data: &RegisterResponse{NodeID: nodeID, PublicKey: req.PublicKey},
})
}

🛠️ Refactor suggestion

Registration logic with potential ID mismatch.
While the Node-ID header is validated, consider cross-checking that the submitted RegisterRequest references the same node. Additionally, storing the node ID in both the global nodeIds map and Server.RegisteredNodes may cause duplication or confusion.

🧰 Tools
🪛 golangci-lint (1.62.2)

141-141: undefined: echo

(typecheck)

Comment on lines +347 to +369
func (s *Server) submitPublicShare(c echo.Context) error {
req := &PublicShareData{}
if err := c.Bind(req); err != nil {
log.Errorf("Failed to bind public share: %v", err)
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Invalid request format"})
}
if len(req.PublicShare) == 0 {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Public share data is required"})
}

s.Lock()
defer s.Unlock()
nodeID := c.Request().Header.Get("Node-ID")
if nodeID == "" {
return c.JSON(http.StatusBadRequest, &BaseResponse{Status: "error", Message: "Node-ID header is required"})
}
if _, ok := s.RegisteredNodes[nodeID]; !ok {
return c.JSON(http.StatusUnauthorized, &BaseResponse{Status: "error", Message: "Unregistered node"})
}
s.PublicShares[nodeID] = req.PublicShare
log.Infof("Received public share from %s", nodeID)
return c.JSON(http.StatusOK, &PublicSharesResponseFull{BaseResponse: BaseResponse{Status: "success"}})
}

🛠️ Refactor suggestion

Allow updates or final submission only?
If nodes can resubmit public shares, define whether a resubmission merges with or overrides the existing data. If not, reject resubmissions explicitly.

Comment on lines +352 to +376
func (c *Client) FetchPublicShares() (map[int][]byte, error) {
err := poll(c, "/dkg/public_shares", func(resp *DKGResponse[PublicSharesResponse]) bool {
return resp.Status == "success" && len(resp.Data.PublicShares) == c.DKG.NumNodes
})
if err != nil {
return nil, err
}
resp, err := get[PublicSharesResponse](c, "/dkg/public_shares")
if err != nil {
return nil, fmt.Errorf("%s: failed to fetch public shares: %v", c.NodeID, err)
}
publicShares := make(map[int][]byte)
for nodeID, pubBytes := range resp.Data.PublicShares {
index, ok := c.NodeIndices[nodeID]
if !ok {
return nil, fmt.Errorf("%s: missing NodeIndex for node %s", c.NodeID, nodeID)
}
publicShares[index] = pubBytes
}
if len(publicShares) != c.DKG.NumNodes {
return nil, fmt.Errorf("%s: expected %d public shares, got %d", c.NodeID, c.DKG.NumNodes, len(publicShares))
}
fmt.Printf("%s: Fetched %d public shares total\n", c.NodeID, len(publicShares))
return publicShares, nil
}

🛠️ Refactor suggestion

Mapping indices to public shares.
Logic is consistent, but consider ensuring the public shares are validated or well-formed for cryptographic correctness.
