feat: implement peerDAS on fulu #6353

Draft

g11tech wants to merge 74 commits into unstable from peerDAS

Conversation

github-actions bot (Contributor) commented Jun 22, 2024

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 574808e Previous: 0329edb Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 1.0214 ms/op 923.37 us/op 1.11
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 39.033 us/op 34.595 us/op 1.13
BLS verify - blst 882.53 us/op 1.0821 ms/op 0.82
BLS verifyMultipleSignatures 3 - blst 1.2992 ms/op 2.5478 ms/op 0.51
BLS verifyMultipleSignatures 8 - blst 1.8667 ms/op 2.3786 ms/op 0.78
BLS verifyMultipleSignatures 32 - blst 4.9951 ms/op 7.3623 ms/op 0.68
BLS verifyMultipleSignatures 64 - blst 11.520 ms/op 10.999 ms/op 1.05
BLS verifyMultipleSignatures 128 - blst 20.772 ms/op 17.011 ms/op 1.22
BLS deserializing 10000 signatures 737.02 ms/op 683.47 ms/op 1.08
BLS deserializing 100000 signatures 7.0564 s/op 6.8029 s/op 1.04
BLS verifyMultipleSignatures - same message - 3 - blst 872.08 us/op 1.5708 ms/op 0.56
BLS verifyMultipleSignatures - same message - 8 - blst 1.0371 ms/op 1.5790 ms/op 0.66
BLS verifyMultipleSignatures - same message - 32 - blst 1.7353 ms/op 1.9381 ms/op 0.90
BLS verifyMultipleSignatures - same message - 64 - blst 2.6460 ms/op 2.8095 ms/op 0.94
BLS verifyMultipleSignatures - same message - 128 - blst 4.3792 ms/op 4.5692 ms/op 0.96
BLS aggregatePubkeys 32 - blst 19.582 us/op 19.279 us/op 1.02
BLS aggregatePubkeys 128 - blst 70.048 us/op 69.161 us/op 1.01
notSeenSlots=1 numMissedVotes=1 numBadVotes=10 60.159 ms/op 69.104 ms/op 0.87
notSeenSlots=1 numMissedVotes=0 numBadVotes=4 55.517 ms/op 54.795 ms/op 1.01
notSeenSlots=2 numMissedVotes=1 numBadVotes=10 46.424 ms/op 58.417 ms/op 0.79
getSlashingsAndExits - default max 77.100 us/op 70.090 us/op 1.10
getSlashingsAndExits - 2k 280.47 us/op 308.20 us/op 0.91
proposeBlockBody type=full, size=empty 7.5332 ms/op 6.9423 ms/op 1.09
isKnown best case - 1 super set check 201.00 ns/op 193.00 ns/op 1.04
isKnown normal case - 2 super set checks 198.00 ns/op 190.00 ns/op 1.04
isKnown worse case - 16 super set checks 196.00 ns/op 190.00 ns/op 1.03
InMemoryCheckpointStateCache - add get delete 2.3490 us/op 2.3590 us/op 1.00
validate api signedAggregateAndProof - struct 1.3503 ms/op 2.5959 ms/op 0.52
validate gossip signedAggregateAndProof - struct 1.3491 ms/op 2.6071 ms/op 0.52
batch validate gossip attestation - vc 640000 - chunk 32 115.60 us/op 111.77 us/op 1.03
batch validate gossip attestation - vc 640000 - chunk 64 101.73 us/op 100.32 us/op 1.01
batch validate gossip attestation - vc 640000 - chunk 128 93.824 us/op 92.902 us/op 1.01
batch validate gossip attestation - vc 640000 - chunk 256 92.629 us/op 92.360 us/op 1.00
pickEth1Vote - no votes 944.27 us/op 961.72 us/op 0.98
pickEth1Vote - max votes 6.1935 ms/op 5.1190 ms/op 1.21
pickEth1Vote - Eth1Data hashTreeRoot value x2048 12.855 ms/op 10.383 ms/op 1.24
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 18.078 ms/op 14.481 ms/op 1.25
pickEth1Vote - Eth1Data fastSerialize value x2048 420.57 us/op 434.01 us/op 0.97
pickEth1Vote - Eth1Data fastSerialize tree x2048 2.9741 ms/op 3.0414 ms/op 0.98
bytes32 toHexString 355.00 ns/op 361.00 ns/op 0.98
bytes32 Buffer.toString(hex) 233.00 ns/op 237.00 ns/op 0.98
bytes32 Buffer.toString(hex) from Uint8Array 331.00 ns/op 326.00 ns/op 1.02
bytes32 Buffer.toString(hex) + 0x 233.00 ns/op 232.00 ns/op 1.00
Object access 1 prop 0.11000 ns/op 0.11200 ns/op 0.98
Map access 1 prop 0.11800 ns/op 0.11600 ns/op 1.02
Object get x1000 5.9010 ns/op 6.3200 ns/op 0.93
Map get x1000 6.7220 ns/op 6.8740 ns/op 0.98
Object set x1000 27.815 ns/op 28.006 ns/op 0.99
Map set x1000 18.647 ns/op 19.125 ns/op 0.98
Return object 10000 times 0.28050 ns/op 0.28180 ns/op 1.00
Throw Error 10000 times 4.2517 us/op 4.0882 us/op 1.04
toHex 130.77 ns/op 135.14 ns/op 0.97
Buffer.from 119.78 ns/op 123.07 ns/op 0.97
shared Buffer 74.463 ns/op 85.418 ns/op 0.87
fastMsgIdFn sha256 / 200 bytes 2.1110 us/op 2.2090 us/op 0.96
fastMsgIdFn h32 xxhash / 200 bytes 198.00 ns/op 289.00 ns/op 0.69
fastMsgIdFn h64 xxhash / 200 bytes 263.00 ns/op 269.00 ns/op 0.98
fastMsgIdFn sha256 / 1000 bytes 7.0890 us/op 7.2050 us/op 0.98
fastMsgIdFn h32 xxhash / 1000 bytes 329.00 ns/op 474.00 ns/op 0.69
fastMsgIdFn h64 xxhash / 1000 bytes 335.00 ns/op 345.00 ns/op 0.97
fastMsgIdFn sha256 / 10000 bytes 63.550 us/op 67.485 us/op 0.94
fastMsgIdFn h32 xxhash / 10000 bytes 1.7770 us/op 1.7980 us/op 0.99
fastMsgIdFn h64 xxhash / 10000 bytes 1.1750 us/op 1.1460 us/op 1.03
send data - 1000 256B messages 11.939 ms/op 11.963 ms/op 1.00
send data - 1000 512B messages 17.413 ms/op 15.602 ms/op 1.12
send data - 1000 1024B messages 26.286 ms/op 24.386 ms/op 1.08
send data - 1000 1200B messages 22.824 ms/op 19.141 ms/op 1.19
send data - 1000 2048B messages 24.158 ms/op 21.717 ms/op 1.11
send data - 1000 4096B messages 24.460 ms/op 23.504 ms/op 1.04
send data - 1000 16384B messages 73.030 ms/op 65.343 ms/op 1.12
send data - 1000 65536B messages 209.65 ms/op 206.36 ms/op 1.02
enrSubnets - fastDeserialize 64 bits 864.00 ns/op 874.00 ns/op 0.99
enrSubnets - ssz BitVector 64 bits 316.00 ns/op 327.00 ns/op 0.97
enrSubnets - fastDeserialize 4 bits 127.00 ns/op 130.00 ns/op 0.98
enrSubnets - ssz BitVector 4 bits 330.00 ns/op 324.00 ns/op 1.02
prioritizePeers score -10:0 att 32-0.1 sync 2-0 113.99 us/op 115.82 us/op 0.98
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 140.71 us/op 138.31 us/op 1.02
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 199.94 us/op 197.69 us/op 1.01
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 372.59 us/op 372.40 us/op 1.00
prioritizePeers score 0:0 att 64-1 sync 4-1 460.89 us/op 451.61 us/op 1.02
array of 16000 items push then shift 1.5762 us/op 1.6239 us/op 0.97
LinkedList of 16000 items push then shift 6.7910 ns/op 6.9110 ns/op 0.98
array of 16000 items push then pop 74.100 ns/op 73.719 ns/op 1.01
LinkedList of 16000 items push then pop 6.7570 ns/op 6.8160 ns/op 0.99
array of 24000 items push then shift 2.3492 us/op 2.3761 us/op 0.99
LinkedList of 24000 items push then shift 6.9440 ns/op 6.8910 ns/op 1.01
array of 24000 items push then pop 100.71 ns/op 99.570 ns/op 1.01
LinkedList of 24000 items push then pop 6.7410 ns/op 6.7790 ns/op 0.99
intersect bitArray bitLen 8 6.2200 ns/op 6.2930 ns/op 0.99
intersect array and set length 8 37.811 ns/op 37.310 ns/op 1.01
intersect bitArray bitLen 128 29.290 ns/op 29.385 ns/op 1.00
intersect array and set length 128 608.73 ns/op 626.58 ns/op 0.97
bitArray.getTrueBitIndexes() bitLen 128 1.0040 us/op 984.00 ns/op 1.02
bitArray.getTrueBitIndexes() bitLen 248 1.7730 us/op 1.7200 us/op 1.03
bitArray.getTrueBitIndexes() bitLen 512 3.6420 us/op 3.4960 us/op 1.04
Buffer.concat 32 items 630.00 ns/op 605.00 ns/op 1.04
Uint8Array.set 32 items 954.00 ns/op 1.4200 us/op 0.67
Buffer.copy 1.9570 us/op 2.0350 us/op 0.96
Uint8Array.set - with subarray 1.6460 us/op 2.4860 us/op 0.66
Uint8Array.set - without subarray 864.00 ns/op 1.0120 us/op 0.85
getUint32 - dataview 199.00 ns/op 190.00 ns/op 1.05
getUint32 - manual 122.00 ns/op 120.00 ns/op 1.02
Set add up to 64 items then delete first 2.6175 us/op 2.1687 us/op 1.21
OrderedSet add up to 64 items then delete first 3.6605 us/op 3.2217 us/op 1.14
Set add up to 64 items then delete last 2.5007 us/op 2.2901 us/op 1.09
OrderedSet add up to 64 items then delete last 3.7941 us/op 3.7142 us/op 1.02
Set add up to 64 items then delete middle 2.5688 us/op 2.3092 us/op 1.11
OrderedSet add up to 64 items then delete middle 5.4113 us/op 5.1420 us/op 1.05
Set add up to 128 items then delete first 5.4358 us/op 4.8572 us/op 1.12
OrderedSet add up to 128 items then delete first 8.2109 us/op 7.2261 us/op 1.14
Set add up to 128 items then delete last 5.1406 us/op 4.8902 us/op 1.05
OrderedSet add up to 128 items then delete last 8.5420 us/op 7.4340 us/op 1.15
Set add up to 128 items then delete middle 6.2822 us/op 4.7119 us/op 1.33
OrderedSet add up to 128 items then delete middle 15.505 us/op 13.323 us/op 1.16
Set add up to 256 items then delete first 11.208 us/op 9.7294 us/op 1.15
OrderedSet add up to 256 items then delete first 20.962 us/op 14.737 us/op 1.42
Set add up to 256 items then delete last 12.048 us/op 9.4150 us/op 1.28
OrderedSet add up to 256 items then delete last 16.579 us/op 14.877 us/op 1.11
Set add up to 256 items then delete middle 10.706 us/op 9.4019 us/op 1.14
OrderedSet add up to 256 items then delete middle 43.234 us/op 39.332 us/op 1.10
transfer serialized Status (84 B) 2.3130 us/op 2.2950 us/op 1.01
copy serialized Status (84 B) 1.2620 us/op 1.2270 us/op 1.03
transfer serialized SignedVoluntaryExit (112 B) 2.3820 us/op 2.3040 us/op 1.03
copy serialized SignedVoluntaryExit (112 B) 1.2770 us/op 1.2360 us/op 1.03
transfer serialized ProposerSlashing (416 B) 2.4000 us/op 2.4620 us/op 0.97
copy serialized ProposerSlashing (416 B) 1.3450 us/op 1.5210 us/op 0.88
transfer serialized Attestation (485 B) 2.4030 us/op 2.4170 us/op 0.99
copy serialized Attestation (485 B) 1.3470 us/op 2.0770 us/op 0.65
transfer serialized AttesterSlashing (33232 B) 2.5700 us/op 2.5920 us/op 0.99
copy serialized AttesterSlashing (33232 B) 4.0490 us/op 3.6130 us/op 1.12
transfer serialized Small SignedBeaconBlock (128000 B) 3.0890 us/op 3.0640 us/op 1.01
copy serialized Small SignedBeaconBlock (128000 B) 9.1000 us/op 8.5980 us/op 1.06
transfer serialized Avg SignedBeaconBlock (200000 B) 3.5020 us/op 3.6440 us/op 0.96
copy serialized Avg SignedBeaconBlock (200000 B) 14.407 us/op 12.667 us/op 1.14
transfer serialized BlobsSidecar (524380 B) 3.6030 us/op 3.5740 us/op 1.01
copy serialized BlobsSidecar (524380 B) 70.381 us/op 140.05 us/op 0.50
transfer serialized Big SignedBeaconBlock (1000000 B) 3.8850 us/op 3.8740 us/op 1.00
copy serialized Big SignedBeaconBlock (1000000 B) 123.54 us/op 220.98 us/op 0.56
pass gossip attestations to forkchoice per slot 2.8039 ms/op 2.7470 ms/op 1.02
forkChoice updateHead vc 100000 bc 64 eq 0 464.20 us/op 455.51 us/op 1.02
forkChoice updateHead vc 600000 bc 64 eq 0 3.1456 ms/op 2.7979 ms/op 1.12
forkChoice updateHead vc 1000000 bc 64 eq 0 4.9782 ms/op 4.7744 ms/op 1.04
forkChoice updateHead vc 600000 bc 320 eq 0 2.8815 ms/op 2.8091 ms/op 1.03
forkChoice updateHead vc 600000 bc 1200 eq 0 2.9029 ms/op 2.8497 ms/op 1.02
forkChoice updateHead vc 600000 bc 7200 eq 0 3.1629 ms/op 3.0398 ms/op 1.04
forkChoice updateHead vc 600000 bc 64 eq 1000 10.636 ms/op 10.699 ms/op 0.99
forkChoice updateHead vc 600000 bc 64 eq 10000 10.597 ms/op 10.628 ms/op 1.00
forkChoice updateHead vc 600000 bc 64 eq 300000 13.951 ms/op 13.394 ms/op 1.04
computeDeltas 500000 validators 300 proto nodes 3.9279 ms/op 3.7813 ms/op 1.04
computeDeltas 500000 validators 1200 proto nodes 4.0114 ms/op 3.7735 ms/op 1.06
computeDeltas 500000 validators 7200 proto nodes 4.0224 ms/op 3.7948 ms/op 1.06
computeDeltas 750000 validators 300 proto nodes 5.8475 ms/op 5.6464 ms/op 1.04
computeDeltas 750000 validators 1200 proto nodes 5.8567 ms/op 5.6506 ms/op 1.04
computeDeltas 750000 validators 7200 proto nodes 6.0177 ms/op 5.7279 ms/op 1.05
computeDeltas 1400000 validators 300 proto nodes 11.574 ms/op 10.574 ms/op 1.09
computeDeltas 1400000 validators 1200 proto nodes 11.796 ms/op 10.590 ms/op 1.11
computeDeltas 1400000 validators 7200 proto nodes 12.368 ms/op 10.794 ms/op 1.15
computeDeltas 2100000 validators 300 proto nodes 17.172 ms/op 16.040 ms/op 1.07
computeDeltas 2100000 validators 1200 proto nodes 16.533 ms/op 16.176 ms/op 1.02
computeDeltas 2100000 validators 7200 proto nodes 16.153 ms/op 16.596 ms/op 0.97
altair processAttestation - 250000 vs - 7PWei normalcase 2.0748 ms/op 2.0309 ms/op 1.02
altair processAttestation - 250000 vs - 7PWei worstcase 2.9688 ms/op 2.8672 ms/op 1.04
altair processAttestation - setStatus - 1/6 committees join 123.84 us/op 125.05 us/op 0.99
altair processAttestation - setStatus - 1/3 committees join 240.70 us/op 244.85 us/op 0.98
altair processAttestation - setStatus - 1/2 committees join 355.40 us/op 336.69 us/op 1.06
altair processAttestation - setStatus - 2/3 committees join 443.95 us/op 433.50 us/op 1.02
altair processAttestation - setStatus - 4/5 committees join 610.08 us/op 595.18 us/op 1.03
altair processAttestation - setStatus - 100% committees join 726.31 us/op 706.10 us/op 1.03
altair processBlock - 250000 vs - 7PWei normalcase 4.8800 ms/op 5.5121 ms/op 0.89
altair processBlock - 250000 vs - 7PWei normalcase hashState 38.282 ms/op 51.381 ms/op 0.75
altair processBlock - 250000 vs - 7PWei worstcase 31.824 ms/op 39.099 ms/op 0.81
altair processBlock - 250000 vs - 7PWei worstcase hashState 78.974 ms/op 108.97 ms/op 0.72
phase0 processBlock - 250000 vs - 7PWei normalcase 1.5104 ms/op 1.7164 ms/op 0.88
phase0 processBlock - 250000 vs - 7PWei worstcase 20.867 ms/op 27.993 ms/op 0.75
altair processEth1Data - 250000 vs - 7PWei normalcase 341.45 us/op 337.10 us/op 1.01
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:15 5.2460 us/op 7.5980 us/op 0.69
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:219 32.590 us/op 40.259 us/op 0.81
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:42 10.713 us/op 11.676 us/op 0.92
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:18 6.0350 us/op 7.4700 us/op 0.81
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1020 146.85 us/op 168.82 us/op 0.87
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11777 1.1135 ms/op 1.9204 ms/op 0.58
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 1.4660 ms/op 2.6074 ms/op 0.56
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 1.4700 ms/op 2.0207 ms/op 0.73
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 3.8850 ms/op 3.5329 ms/op 1.10
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 1.5854 ms/op 2.1573 ms/op 0.73
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 3.9046 ms/op 3.3624 ms/op 1.16
Tree 40 250000 create 445.60 ms/op 412.68 ms/op 1.08
Tree 40 250000 get(125000) 148.35 ns/op 146.67 ns/op 1.01
Tree 40 250000 set(125000) 1.4471 us/op 1.4285 us/op 1.01
Tree 40 250000 toArray() 17.904 ms/op 15.529 ms/op 1.15
Tree 40 250000 iterate all - toArray() + loop 20.221 ms/op 15.097 ms/op 1.34
Tree 40 250000 iterate all - get(i) 57.860 ms/op 51.691 ms/op 1.12
Array 250000 create 2.7836 ms/op 2.3538 ms/op 1.18
Array 250000 clone - spread 1.4272 ms/op 803.70 us/op 1.78
Array 250000 get(125000) 0.41500 ns/op 0.41200 ns/op 1.01
Array 250000 set(125000) 0.43600 ns/op 0.42000 ns/op 1.04
Array 250000 iterate all - loop 102.20 us/op 79.079 us/op 1.29
phase0 afterProcessEpoch - 250000 vs - 7PWei 43.454 ms/op 40.969 ms/op 1.06
Array.fill - length 1000000 3.4444 ms/op 3.3579 ms/op 1.03
Array push - length 1000000 13.147 ms/op 11.841 ms/op 1.11
Array.get 0.27733 ns/op 0.26040 ns/op 1.07
Uint8Array.get 0.44822 ns/op 0.42377 ns/op 1.06
phase0 beforeProcessEpoch - 250000 vs - 7PWei 17.488 ms/op 14.412 ms/op 1.21
altair processEpoch - mainnet_e81889 299.76 ms/op 316.18 ms/op 0.95
mainnet_e81889 - altair beforeProcessEpoch 19.968 ms/op 20.623 ms/op 0.97
mainnet_e81889 - altair processJustificationAndFinalization 5.4410 us/op 5.3300 us/op 1.02
mainnet_e81889 - altair processInactivityUpdates 4.7083 ms/op 4.5231 ms/op 1.04
mainnet_e81889 - altair processRewardsAndPenalties 39.635 ms/op 52.410 ms/op 0.76
mainnet_e81889 - altair processRegistryUpdates 652.00 ns/op 712.00 ns/op 0.92
mainnet_e81889 - altair processSlashings 180.00 ns/op 183.00 ns/op 0.98
mainnet_e81889 - altair processEth1DataReset 174.00 ns/op 172.00 ns/op 1.01
mainnet_e81889 - altair processEffectiveBalanceUpdates 1.2436 ms/op 1.2141 ms/op 1.02
mainnet_e81889 - altair processSlashingsReset 850.00 ns/op 1.1200 us/op 0.76
mainnet_e81889 - altair processRandaoMixesReset 1.1220 us/op 1.1830 us/op 0.95
mainnet_e81889 - altair processHistoricalRootsUpdate 180.00 ns/op 183.00 ns/op 0.98
mainnet_e81889 - altair processParticipationFlagUpdates 524.00 ns/op 515.00 ns/op 1.02
mainnet_e81889 - altair processSyncCommitteeUpdates 142.00 ns/op 136.00 ns/op 1.04
mainnet_e81889 - altair afterProcessEpoch 44.645 ms/op 43.405 ms/op 1.03
capella processEpoch - mainnet_e217614 1.0198 s/op 1.0608 s/op 0.96
mainnet_e217614 - capella beforeProcessEpoch 65.010 ms/op 64.064 ms/op 1.01
mainnet_e217614 - capella processJustificationAndFinalization 5.3260 us/op 5.6260 us/op 0.95
mainnet_e217614 - capella processInactivityUpdates 16.126 ms/op 16.364 ms/op 0.99
mainnet_e217614 - capella processRewardsAndPenalties 176.48 ms/op 221.78 ms/op 0.80
mainnet_e217614 - capella processRegistryUpdates 6.3280 us/op 7.1790 us/op 0.88
mainnet_e217614 - capella processSlashings 177.00 ns/op 185.00 ns/op 0.96
mainnet_e217614 - capella processEth1DataReset 173.00 ns/op 182.00 ns/op 0.95
mainnet_e217614 - capella processEffectiveBalanceUpdates 4.2474 ms/op 4.2455 ms/op 1.00
mainnet_e217614 - capella processSlashingsReset 885.00 ns/op 1.0030 us/op 0.88
mainnet_e217614 - capella processRandaoMixesReset 1.1790 us/op 1.3500 us/op 0.87
mainnet_e217614 - capella processHistoricalRootsUpdate 177.00 ns/op 183.00 ns/op 0.97
mainnet_e217614 - capella processParticipationFlagUpdates 537.00 ns/op 528.00 ns/op 1.02
mainnet_e217614 - capella afterProcessEpoch 115.54 ms/op 116.97 ms/op 0.99
phase0 processEpoch - mainnet_e58758 293.27 ms/op 304.90 ms/op 0.96
mainnet_e58758 - phase0 beforeProcessEpoch 66.298 ms/op 75.814 ms/op 0.87
mainnet_e58758 - phase0 processJustificationAndFinalization 5.7080 us/op 6.7670 us/op 0.84
mainnet_e58758 - phase0 processRewardsAndPenalties 35.942 ms/op 38.653 ms/op 0.93
mainnet_e58758 - phase0 processRegistryUpdates 3.0910 us/op 3.3740 us/op 0.92
mainnet_e58758 - phase0 processSlashings 174.00 ns/op 190.00 ns/op 0.92
mainnet_e58758 - phase0 processEth1DataReset 172.00 ns/op 207.00 ns/op 0.83
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.1297 ms/op 1.2877 ms/op 0.88
mainnet_e58758 - phase0 processSlashingsReset 882.00 ns/op 1.2220 us/op 0.72
mainnet_e58758 - phase0 processRandaoMixesReset 1.1550 us/op 1.2420 us/op 0.93
mainnet_e58758 - phase0 processHistoricalRootsUpdate 175.00 ns/op 181.00 ns/op 0.97
mainnet_e58758 - phase0 processParticipationRecordUpdates 869.00 ns/op 936.00 ns/op 0.93
mainnet_e58758 - phase0 afterProcessEpoch 35.680 ms/op 37.075 ms/op 0.96
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.3211 ms/op 1.3777 ms/op 0.96
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 1.8487 ms/op 1.9811 ms/op 0.93
altair processInactivityUpdates - 250000 normalcase 17.637 ms/op 22.043 ms/op 0.80
altair processInactivityUpdates - 250000 worstcase 18.345 ms/op 19.614 ms/op 0.94
phase0 processRegistryUpdates - 250000 normalcase 6.9290 us/op 6.0570 us/op 1.14
phase0 processRegistryUpdates - 250000 badcase_full_deposits 221.13 us/op 300.21 us/op 0.74
phase0 processRegistryUpdates - 250000 worstcase 0.5 104.42 ms/op 124.50 ms/op 0.84
altair processRewardsAndPenalties - 250000 normalcase 28.301 ms/op 33.494 ms/op 0.84
altair processRewardsAndPenalties - 250000 worstcase 26.947 ms/op 33.639 ms/op 0.80
phase0 getAttestationDeltas - 250000 normalcase 5.8455 ms/op 7.4859 ms/op 0.78
phase0 getAttestationDeltas - 250000 worstcase 6.8700 ms/op 6.1628 ms/op 1.11
phase0 processSlashings - 250000 worstcase 83.566 us/op 125.13 us/op 0.67
altair processSyncCommitteeUpdates - 250000 10.868 ms/op 10.897 ms/op 1.00
BeaconState.hashTreeRoot - No change 234.00 ns/op 232.00 ns/op 1.01
BeaconState.hashTreeRoot - 1 full validator 83.864 us/op 101.84 us/op 0.82
BeaconState.hashTreeRoot - 32 full validator 812.08 us/op 1.0095 ms/op 0.80
BeaconState.hashTreeRoot - 512 full validator 10.079 ms/op 12.919 ms/op 0.78
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 95.052 us/op 121.42 us/op 0.78
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 1.3821 ms/op 1.3651 ms/op 1.01
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 24.912 ms/op 30.679 ms/op 0.81
BeaconState.hashTreeRoot - 1 balances 76.989 us/op 90.755 us/op 0.85
BeaconState.hashTreeRoot - 32 balances 784.12 us/op 1.1233 ms/op 0.70
BeaconState.hashTreeRoot - 512 balances 7.3620 ms/op 10.845 ms/op 0.68
BeaconState.hashTreeRoot - 250000 balances 163.97 ms/op 201.36 ms/op 0.81
aggregationBits - 2048 els - zipIndexesInBitList 21.101 us/op 22.012 us/op 0.96
byteArrayEquals 32 52.618 ns/op 52.322 ns/op 1.01
Buffer.compare 32 18.938 ns/op 16.688 ns/op 1.13
byteArrayEquals 1024 1.5627 us/op 1.5455 us/op 1.01
Buffer.compare 1024 32.938 ns/op 24.018 ns/op 1.37
byteArrayEquals 16384 25.059 us/op 25.195 us/op 0.99
Buffer.compare 16384 187.25 ns/op 175.88 ns/op 1.06
byteArrayEquals 123687377 193.79 ms/op 196.26 ms/op 0.99
Buffer.compare 123687377 7.2809 ms/op 8.2484 ms/op 0.88
byteArrayEquals 32 - diff last byte 54.739 ns/op 53.264 ns/op 1.03
Buffer.compare 32 - diff last byte 20.889 ns/op 17.378 ns/op 1.20
byteArrayEquals 1024 - diff last byte 1.6153 us/op 1.6207 us/op 1.00
Buffer.compare 1024 - diff last byte 34.203 ns/op 25.393 ns/op 1.35
byteArrayEquals 16384 - diff last byte 25.132 us/op 25.856 us/op 0.97
Buffer.compare 16384 - diff last byte 195.53 ns/op 204.79 ns/op 0.95
byteArrayEquals 123687377 - diff last byte 189.04 ms/op 192.02 ms/op 0.98
Buffer.compare 123687377 - diff last byte 6.1944 ms/op 6.1288 ms/op 1.01
byteArrayEquals 32 - random bytes 5.0450 ns/op 5.0890 ns/op 0.99
Buffer.compare 32 - random bytes 19.850 ns/op 17.053 ns/op 1.16
byteArrayEquals 1024 - random bytes 5.0610 ns/op 5.0890 ns/op 0.99
Buffer.compare 1024 - random bytes 19.527 ns/op 17.126 ns/op 1.14
byteArrayEquals 16384 - random bytes 5.0240 ns/op 5.0870 ns/op 0.99
Buffer.compare 16384 - random bytes 20.243 ns/op 16.977 ns/op 1.19
byteArrayEquals 123687377 - random bytes 6.4900 ns/op 6.5000 ns/op 1.00
Buffer.compare 123687377 - random bytes 21.180 ns/op 18.330 ns/op 1.16
regular array get 100000 times 43.908 us/op 32.428 us/op 1.35
wrappedArray get 100000 times 43.881 us/op 32.429 us/op 1.35
arrayWithProxy get 100000 times 12.642 ms/op 11.782 ms/op 1.07
ssz.Root.equals 47.227 ns/op 45.436 ns/op 1.04
byteArrayEquals 46.161 ns/op 44.569 ns/op 1.04
Buffer.compare 12.103 ns/op 10.461 ns/op 1.16
processSlot - 1 slots 10.437 us/op 10.760 us/op 0.97
processSlot - 32 slots 2.1233 ms/op 2.2605 ms/op 0.94
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 2.9979 ms/op 2.8652 ms/op 1.05
getCommitteeAssignments - req 1 vs - 250000 vc 2.1319 ms/op 2.1171 ms/op 1.01
getCommitteeAssignments - req 100 vs - 250000 vc 4.0833 ms/op 4.0669 ms/op 1.00
getCommitteeAssignments - req 1000 vs - 250000 vc 4.3335 ms/op 4.3343 ms/op 1.00
findModifiedValidators - 10000 modified validators 738.43 ms/op 731.05 ms/op 1.01
findModifiedValidators - 1000 modified validators 653.62 ms/op 686.43 ms/op 0.95
findModifiedValidators - 100 modified validators 244.74 ms/op 183.81 ms/op 1.33
findModifiedValidators - 10 modified validators 187.34 ms/op 148.20 ms/op 1.26
findModifiedValidators - 1 modified validators 147.99 ms/op 144.49 ms/op 1.02
findModifiedValidators - no difference 152.00 ms/op 152.33 ms/op 1.00
compare ViewDUs 6.2613 s/op 6.1790 s/op 1.01
compare each validator Uint8Array 1.7402 s/op 1.9555 s/op 0.89
compare ViewDU to Uint8Array 961.31 ms/op 981.97 ms/op 0.98
migrate state 1000000 validators, 24 modified, 0 new 880.58 ms/op 943.58 ms/op 0.93
migrate state 1000000 validators, 1700 modified, 1000 new 1.1558 s/op 1.2456 s/op 0.93
migrate state 1000000 validators, 3400 modified, 2000 new 1.2657 s/op 1.3422 s/op 0.94
migrate state 1500000 validators, 24 modified, 0 new 922.01 ms/op 984.07 ms/op 0.94
migrate state 1500000 validators, 1700 modified, 1000 new 1.1136 s/op 1.2375 s/op 0.90
migrate state 1500000 validators, 3400 modified, 2000 new 1.3668 s/op 1.3157 s/op 1.04
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.4300 ns/op 4.1300 ns/op 1.07
state getBlockRootAtSlot - 250000 vs - 7PWei 603.44 ns/op 510.38 ns/op 1.18
naive computeProposerIndex 100000 validators 57.317 ms/op 50.273 ms/op 1.14
computeProposerIndex 100000 validators 1.6663 ms/op 1.4616 ms/op 1.14
naiveGetNextSyncCommitteeIndices 1000 validators 10.048 s/op 7.3761 s/op 1.36
getNextSyncCommitteeIndices 1000 validators 163.82 ms/op 114.48 ms/op 1.43
naiveGetNextSyncCommitteeIndices 10000 validators 10.857 s/op 7.3520 s/op 1.48
getNextSyncCommitteeIndices 10000 validators 216.26 ms/op 114.03 ms/op 1.90
naiveGetNextSyncCommitteeIndices 100000 validators 12.082 s/op 7.2632 s/op 1.66
getNextSyncCommitteeIndices 100000 validators 146.23 ms/op 117.04 ms/op 1.25
naive computeShuffledIndex 100000 validators 36.661 s/op 24.056 s/op 1.52
cached computeShuffledIndex 100000 validators 614.22 ms/op 547.14 ms/op 1.12
naive computeShuffledIndex 2000000 validators 480.34 s/op 481.52 s/op 1.00
cached computeShuffledIndex 2000000 validators 28.467 s/op 29.271 s/op 0.97
computeProposers - vc 250000 588.44 us/op 583.61 us/op 1.01
computeEpochShuffling - vc 250000 42.055 ms/op 41.632 ms/op 1.01
getNextSyncCommittee - vc 250000 10.408 ms/op 10.209 ms/op 1.02
computeSigningRoot for AttestationData 21.778 us/op 20.830 us/op 1.05
hash AttestationData serialized data then Buffer.toString(base64) 1.5745 us/op 1.5781 us/op 1.00
toHexString serialized data 1.1045 us/op 1.0963 us/op 1.01
Buffer.toString(base64) 167.94 ns/op 164.13 ns/op 1.02
nodejs block root to RootHex using toHex 132.59 ns/op 148.13 ns/op 0.90
nodejs block root to RootHex using toRootHex 83.032 ns/op 86.133 ns/op 0.96
browser block root to RootHex using the deprecated toHexString 206.71 ns/op 210.08 ns/op 0.98
browser block root to RootHex using toHex 169.07 ns/op 170.19 ns/op 0.99
browser block root to RootHex using toRootHex 156.60 ns/op 159.64 ns/op 0.98

by benchmarkbot/action

@g11tech force-pushed the peerDAS branch 4 times, most recently from 5fbbdb2 to 81aaeb5 on August 12, 2024 at 14:46
commit e8bc729
Author: Matthew Keil <[email protected]>
Date:   Tue Dec 3 10:08:30 2024 -0500

    refactor: peerdas types (#7243)

    * refactor: organize peerDAS types

    * refactor: DataColumnsData

    * refactor: rename BlockInputBlobs BlockInputColumnData

    * refactor: split up and rename BlockInputData

    * refactor: clean up BlobsData

    * refactor: clean up CachedData types

    * refactor: change from interface to type and update enum values for grafana

    * chore: lint

    * fix: remove extraneous lint fix

commit c8075d0
Author: Matthew Keil <[email protected]>
Date:   Mon Nov 25 15:41:41 2024 +0800

    feat: log peer disconnect info (#7231)

    * feat: log disconnect reason

    * feat: log peerScore update

    * fix: pretty print peerId

    * fix: use prettyPrintPeerId

commit 8689c76
Author: Matthew Keil <[email protected]>
Date:   Tue Oct 22 06:11:13 2024 -0400

    feat: check for no commitments on block or column in sidecar validation (#7184)

    * feat: check for no commitments on block or column in sidecar validation

    * test: add sanity check for empty blob commitments in column validation

    * fix: test bug

    * fix: bug in test passing commitments

commit fccf9a2
Author: harkamal <[email protected]>
Date:   Tue Oct 8 12:44:11 2024 +0530

    add prevdownload tracker

commit 513bccc
Author: harkamal <[email protected]>
Date:   Wed Oct 2 16:46:40 2024 +0530

    fix

commit 1c08ab3
Author: harkamal <[email protected]>
Date:   Wed Oct 2 16:40:40 2024 +0530

    some fixing of beacon params

commit 7c9a01c
Author: harkamal <[email protected]>
Date:   Wed Oct 2 14:58:11 2024 +0530

    add debug and fix datacolumns migration and improve log

commit b04aaef
Author: harkamal <[email protected]>
Date:   Wed Oct 2 12:52:24 2024 +0530

    migrate datacolumns to finalized

commit a0e0087
Author: harkamal <[email protected]>
Date:   Wed Oct 2 00:36:03 2024 +0530

    more log

commit 2736b8c
Author: harkamal <[email protected]>
Date:   Tue Oct 1 19:43:59 2024 +0530

    add enhance datacolumn serving logs

commit cce193b
Author: harkamal <[email protected]>
Date:   Tue Oct 1 17:36:14 2024 +0530

    improve logging for debugging

commit a3de70f
Author: harkamal <[email protected]>
Date:   Thu Sep 26 14:43:49 2024 +0530

    turn persisting network identity to default true

commit cec27d6
Author: harkamal <[email protected]>
Date:   Sat Sep 21 22:07:58 2024 +0530

    handle edge case

commit 6a77828
Author: harkamal <[email protected]>
Date:   Sat Sep 21 20:33:53 2024 +0530

    add debug console log

commit 574837a
Author: harkamal <[email protected]>
Date:   Sat Sep 21 18:39:36 2024 +0530

    use sample subnets for data availability

commit fee7c08
Author: harkamal <[email protected]>
Date:   Tue Sep 17 16:42:45 2024 +0530

    validate inclusion proof

commit 20ef4c6
Author: Matthew Keil <[email protected]>
Date:   Tue Sep 17 05:33:40 2024 -0400

    feat: validate data column sidecars (#7073)

    * feat: update c-kzg to final DAS version

    * refactor: use trusted-setup from c-kzg package

    * feat: implement validateDataColumnsSidecars

    * feat: check block and column commitments match

    * test: add unit test for validateDataColumnsSidecars

    * fix: invalid build and update validity condition of validateDataColumnsSidecars

    * fix: make error messages better

    * fix: electra vs peerdas type conflict

commit b1940ee
Author: Matthew Keil <[email protected]>
Date:   Tue Sep 17 05:26:05 2024 -0400

    fix: remove ckzg build script (#7089)

    * fix: remove unused ckzg build script

    * fix: remove unused rsync dep from Dockerfile

commit bd4f7f9
Author: Matthew Keil <[email protected]>
Date:   Mon Sep 16 06:31:52 2024 -0400

    feat: update ckzg to final DAS version (#7050)

    * feat: update c-kzg to final DAS version

    * refactor: use trusted-setup from c-kzg package

commit a33303f
Author: Matthew Keil <[email protected]>
Date:   Mon Sep 16 04:08:29 2024 -0400

    feat: refactor and unit test getDataColumnSidecars (#7072)

    * refactor: getDataColumnSidecars

    * test: unit test getDataColumnSidecars with mocks from c-kzg library

    * refactor: use fromHex util

    * chore: update numbering on mocks

    * chore: update c-kzg to latest version

    * chore: fix type export syntax

    * test: add verification for cells from sidecars

    * test: add verification to DataColumnSidecars tests

    * refactor: getDataColumnSidecars for PR comments

    * feat: narrow type and remove unnecessary conditional

    * fix: getDataColumnSidecars param type

    * refactor: rename to computeDataColumnSidecars

commit 4ec7aff
Author: harkamal <[email protected]>
Date:   Fri Sep 13 18:19:45 2024 +0530

    edge case optimization

commit 3470076
Author: harkamal <[email protected]>
Date:   Fri Sep 13 17:31:52 2024 +0530

    more debug log

commit c4d04ee
Author: harkamal <[email protected]>
Date:   Fri Sep 13 15:35:32 2024 +0530

    fix the column id compute

commit cdd9bae
Author: harkamal <[email protected]>
Date:   Thu Sep 12 17:04:07 2024 +0530

    update compute spec tests

commit 2b10e4d
Author: harkamal <[email protected]>
Date:   Thu Sep 12 15:25:00 2024 +0530

    datacolumns retrieval fix

commit 56c8c6e
Author: harkamal <[email protected]>
Date:   Thu Sep 12 13:57:51 2024 +0530

    custodied column fetch debugging log

commit af933fb
Author: harkamal <[email protected]>
Date:   Wed Sep 11 22:11:16 2024 +0530

    some fixes

commit 74d8122
Author: harkamal <[email protected]>
Date:   Wed Sep 11 21:57:41 2024 +0530

    add some log for debugging inbound data columns request

commit f7571f4
Author: harkamal <[email protected]>
Date:   Wed Sep 11 17:07:52 2024 +0530

    add some more logging and availability tracking

commit d35873e
Author: harkamal <[email protected]>
Date:   Wed Sep 11 15:02:31 2024 +0530

    further wait till cutoff for all data to be available

commit 8c21168
Author: harkamal <[email protected]>
Date:   Wed Sep 11 00:02:09 2024 +0530

    make pull a little less aggressive

commit de341b5
Author: harkamal <[email protected]>
Date:   Tue Sep 10 22:21:37 2024 +0530

    add send more log

commit bd84892
Author: harkamal <[email protected]>
Date:   Tue Sep 10 22:01:43 2024 +0530

    more log

commit d7721f8
Author: harkamal <[email protected]>
Date:   Tue Sep 10 21:19:11 2024 +0530

    fix bug

commit 5e1de6f
Author: harkamal <[email protected]>
Date:   Tue Sep 10 20:45:05 2024 +0530

    trying some fix

commit 2bc1a0d
Author: harkamal <[email protected]>
Date:   Tue Sep 10 20:11:05 2024 +0530

    add cache tracking

commit 387da88
Author: harkamal <[email protected]>
Date:   Tue Sep 10 19:27:05 2024 +0530

    add more log

commit aece0ab
Author: harkamal <[email protected]>
Date:   Tue Sep 10 17:46:52 2024 +0530

    fix add missing data availability resolutions

commit 006e781
Author: harkamal <[email protected]>
Date:   Sat Sep 7 20:35:00 2024 +0530

    add debug log

commit bf08852
Author: harkamal <[email protected]>
Date:   Sat Sep 7 20:08:10 2024 +0530

    resolve availability when datacolumns are downloaded and matched

commit 2833ac0
Author: harkamal <[email protected]>
Date:   Thu Sep 5 20:17:04 2024 +0530

    make the csc encoding updates as per latest spec

commit 585165e
Author: harkamal <[email protected]>
Date:   Wed Aug 28 17:35:31 2024 +0530

    remove banning unknown block, add more log

commit a33a72f
Author: harkamal <[email protected]>
Date:   Tue Aug 27 21:59:03 2024 +0530

    subnet count 128

commit ae7678e
Author: harkamal <[email protected]>
Date:   Tue Aug 27 21:29:50 2024 +0530

    fix

commit 4b6f167
Author: harkamal <[email protected]>
Date:   Tue Aug 27 19:37:37 2024 +0530

    fix bug

commit 180f7d8
Author: harkamal <[email protected]>
Date:   Tue Aug 27 18:58:26 2024 +0530

    fix log

commit 54579b0
Author: harkamal <[email protected]>
Date:   Tue Aug 27 17:21:07 2024 +0530

    add more info for debugging

commit a3533f8
Author: harkamal <[email protected]>
Date:   Tue Aug 27 17:13:41 2024 +0530

    add supernode flag to configure node custody requirement and make it not required for validator

commit e6c613f
Author: harkamal <[email protected]>
Date:   Tue Aug 27 15:51:20 2024 +0530

    rename electra fork to peerdas for rebase and make csc in metadata uint8

commit 81aaeb5
Author: harkamal <[email protected]>
Date:   Mon Aug 12 15:43:39 2024 +0530

    feat: add and use metadatav3 for peer custody subnet

    fixes for metadata, working locally

    change the condition to update metadata csc change

commit c7f6341
Author: harkamal <[email protected]>
Date:   Fri Aug 9 17:06:52 2024 +0530

    fix the types/test

    rebase fixes

commit a0c5d27
Author: harkamal <[email protected]>
Date:   Tue Jul 16 18:54:18 2024 +0530

    fix: refactor to add and use nodeid computation and clear out nodeid tracking

    nodeid cleanup for network

commit d423004
Author: harkamal <[email protected]>
Date:   Mon Jul 15 03:12:52 2024 +0530

    feat: add the modifications to work with devnet2

    some network options to control peering behavior

    allow setting node custody capability via --params

    use eip7594 names for the peerdas config

commit 47eedae
Author: harkamal <[email protected]>
Date:   Sun Jul 14 17:56:59 2024 +0530

    feat: get various sync mechanisms working with/without sharded data

commit 156ef53
Author: matthewkeil <[email protected]>
Date:   Fri Jun 21 17:09:44 2024 +0200

    fix: docker build issue for c-kzg

    wip: REPLACE THIS COMMIT

    commit yarn lock

    rebase fixes

    fix: update c-kzg install workflow

    feat: add trustedSetupPrecompute cli flag

    fix: update trusted-setup for testing

    fix: update c-kzg install workflow to remove sudo

    fix: add rsync to apk deps

commit 499d93c
Author: harkamal <[email protected]>
Date:   Wed Jan 24 18:40:25 2024 +0530

    feat: implement peerDAS on electra

    add some presets

    add further params and types

    add data column to types repo and network

    move to max request data columns to preset

    add the datacolumns data in blockinput and fix breaking errors in seen gossip blockinput

    handle data columns in gossip and the seengossip

    further propagate forkaware blockdata and resolve build/type issues

    further handle datacolumns sync by range by root and forkaware data handling

    fix issues

    chore: update c-kzg to peerDas version

    feat: add peerDas ckzg functions to interface

    fix the lookups

    handle the publishing flow

    various sync try fixes

    fixes

    compute blob side car

    various misc debugging and fixes

    debug and apply fixes and get range and by root sync to work with full custody

    enable syncing with lower custody requirement

    use node peerid rather than a dummy string

    get and use the nodeid from enr and correctly compute subnets and column indexes

    filter out and connect to peers only matching our custody requirement

    try adding custody requirement

    add protection for subnet calc

    get the sync working with devnet 0

    correctly set the enr with custody subnet info

    rebase fixes

    small refactor

commit 4805a2e
Author: harkamal <[email protected]>
Date:   Wed Jan 24 17:38:11 2024 +0530

    feat: placeholder PR for electra

    add types stub and epoch config

    fix types

twoeths and others added 30 commits on April 7, 2025 12:23
**Motivation**

- we want to have a single CustodyConfig as preparation for the validator
custody config work, see
#7607 (review)
- but since we need the data on both threads, I designed it so that a CustodyConfig
is created on BeaconChain (main thread) and NetworkCore (network thread)
- when the number of connected validators changes, we need to
update both

**Description**

- on main thread store CustodyConfig on BeaconChain
- on network thread, create a wrapped NetworkGlobal to store
CustodyConfig and node id there. In the future we can consider storing
more data there
- add more data to CustodyConfig: sampleGroups, sampledSubnets
- use that CustodyConfig everywhere

---------

Co-authored-by: Tuyen Nguyen <[email protected]>
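
For reference, a minimal TypeScript sketch of the shared custody config idea; the names and the group derivation below are placeholders, not the actual Lodestar implementation:

```ts
// Illustrative sketch only: names and the group derivation are placeholders.
const NUMBER_OF_CUSTODY_GROUPS = 128;

interface CustodyConfig {
  custodyGroupCount: number; // groups this node custodies
  sampleGroups: number[];    // groups sampled for data availability
  sampledSubnets: number[];  // subnets derived from the sampled groups
}

function computeCustodyConfig(nodeId: Uint8Array, custodyGroupCount: number): CustodyConfig {
  // Placeholder derivation: pick deterministic group indexes from the node id.
  const seed = nodeId.reduce((acc, b) => (acc + b) % NUMBER_OF_CUSTODY_GROUPS, 0);
  const sampleGroups = Array.from(
    {length: custodyGroupCount},
    (_, i) => (seed + i) % NUMBER_OF_CUSTODY_GROUPS
  );
  // With one subnet per group in this sketch, subnets equal group indexes.
  return {custodyGroupCount, sampleGroups, sampledSubnets: [...sampleGroups]};
}

// Both threads recompute from the same inputs when the validator count changes,
// so the main-thread and network-thread views stay consistent.
const config = computeCustodyConfig(new Uint8Array(32), 4);
console.log(config.sampleGroups);
```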
**Motivation**

Get a lot of errors like this when running devnet-6 in our nodes

```

debug: Block error slot=640, code=BLOCK_ERROR_BEACON_CHAIN_ERROR, error=blobKzgCommitmentsLen exceeds limit=9
2025-04-13 16:03:35.062 | Error: blobKzgCommitmentsLen exceeds limit=9
```

**Description**

- also log the blobKzgCommitmentsLen to give us more information about the
error

Co-authored-by: Tuyen Nguyen <[email protected]>
**Motivation**

- lodestar does not work on peerdas-devnet-6

```
verbose: Batch process error id=Finalized, startEpoch=20, status=Processing, code=BLOCK_ERROR_BEACON_CHAIN_ERROR, error=blobKzgCommitmentsLen of 12 exceeds limit=9
```

**Description**

- add and use MAX_BLOBS_PER_BLOCK_FULU

Co-authored-by: Tuyen Nguyen <[email protected]>
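
A minimal sketch of how a fork-aware blob limit could be selected; the constant values and fork names here are illustrative rather than the actual devnet-6 configuration:

```ts
// Values are assumptions for illustration; the real limits come from the beacon config.
type ForkName = "deneb" | "electra" | "fulu";

const MAX_BLOBS_PER_BLOCK = 6;
const MAX_BLOBS_PER_BLOCK_ELECTRA = 9;
const MAX_BLOBS_PER_BLOCK_FULU = 12;

function getMaxBlobsPerBlock(fork: ForkName): number {
  switch (fork) {
    case "fulu":
      return MAX_BLOBS_PER_BLOCK_FULU;
    case "electra":
      return MAX_BLOBS_PER_BLOCK_ELECTRA;
    default:
      return MAX_BLOBS_PER_BLOCK;
  }
}

// A block with 12 commitments passes on fulu but would exceed an electra limit of 9,
// which is exactly the "blobKzgCommitmentsLen of 12 exceeds limit=9" error above.
console.log(getMaxBlobsPerBlock("fulu")); // 12
```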
**Motivation**

add `engine_getBlobsV2` to the execution API in preparation for
implementation of [distributed blob
publishing](https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#distributed-blob-publishing-using-blobs-retrieved-from-local-execution-layer-client)

@dguenther and I wanted to get early feedback on the API change before
moving forward with the rest of the implementation

**Description**

upcoming spec changes will add `engine_getBlobsV2` to the execution API
to fetch blobs and cell proofs from the execution layer
(ethereum/execution-apis#630)

* add `engine_getBlobsV2` to execution API
* add type definition for `BlobAndProofV2`

**Not included**

We'll follow up with additional PR(s) for these as we move forward with
distributed blob publishing:

* fetch blobs from the EL in two places: on first seen block input
gossip and on unknown blocks during syncing
* reconstruct blobs from cell proofs
* publish data column sidecars on subscribed topics after reconstruction

Relates to #7638

---------

Co-authored-by: Derek Guenther <[email protected]>
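
A rough TypeScript sketch of the new API surface, assuming the engine API's 0x-prefixed hex (DATA) encoding; the actual Lodestar type names and the exact miss semantics of `engine_getBlobsV2` follow the execution API spec and may differ from this illustration:

```ts
type DATA = string; // 0x-prefixed hex string, as used elsewhere in the engine API

// Assumed shape for illustration: one blob plus its per-cell KZG proofs.
interface BlobAndProofV2 {
  blob: DATA;
  proofs: DATA[]; // one cell proof per cell of the extended blob
}

interface ExecutionEngineGetBlobs {
  // engine_getBlobsV2: fetch blobs and cell proofs for the given versioned hashes.
  // Treated here as "null when a blob is unavailable"; the real behavior on misses
  // is whatever ethereum/execution-apis#630 specifies.
  getBlobsV2(versionedHashes: DATA[]): Promise<(BlobAndProofV2 | null)[]>;
}
```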
**Motivation**

@hughy and I have a basic implementation of [validator
custody](https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/validator.md)
for peerDAS.

We're planning to do more testing on this, but would appreciate a review
on the architecture since we're still pretty new!

This relates to #7632 - If it merges, we'll update our PR to account for
it.

**Description**

* Centralizes custody values like `sampledGroups` and `custodyGroups`
into `CustodyConfig`. `CustodyConfig` is now treated as a singleton.
* Creates a new `advertisedGroupCount` in `CustodyConfig`, used for the
custody group count in the node's metadata/ENR.
* Adds `setSamplingGroupCount` and `setAdvertisedGroupCount` to
NetworkCore API. Updated by an `EventEmitter` on `CustodyConfig`.
* Adds LocalValidatorRegistry to track connected validators.
* Updates custody requirement in `chain.onForkChoiceFinalized`.

**Not Included**

I'll open separate issues for these if we're okay merging this PR
without them.

* Backfilling groups when the target custody group count increases
* Handling changes in other peers' custody group counts
* Race conditions around group count changing during syncing

Closes #7619

---------

Co-authored-by: Hugh Cunningham <[email protected]>
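
A small sketch, using Node's `EventEmitter`, of how a custody group count change might be propagated from the chain side to the network side; the class and event names are illustrative, not the actual NetworkCore API:

```ts
import {EventEmitter} from "node:events";

// Illustrative only: the real CustodyConfig/NetworkCore wiring differs in detail.
class CustodyConfigEmitter extends EventEmitter {
  private advertisedGroupCount = 4; // placeholder default custody requirement

  setAdvertisedGroupCount(count: number): void {
    if (count === this.advertisedGroupCount) return;
    this.advertisedGroupCount = count;
    this.emit("advertisedGroupCount", count);
  }
}

const custodyConfig = new CustodyConfigEmitter();

// The network thread subscribes and forwards the new value to metadata/ENR handling.
custodyConfig.on("advertisedGroupCount", (count: number) => {
  console.log(`update metadata/ENR custody group count to ${count}`);
});

// e.g. after chain.onForkChoiceFinalized recomputes the requirement from validator balances:
custodyConfig.setAdvertisedGroupCount(8);
```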
**Motivation**

Adds some of the peerDAS beacon metrics from
ethereum/beacon-metrics#14.

We don't do reconstruction or single-proof verification yet, so not able
to add those.

cc @KatyaRyazantseva
**Motivation**

Running a cluster of three nodes in Kurtosis, I was seeing this error
before the Fulu fork:

```
[cl-1-lodestar-geth] Error: Request to send to protocol /eth2/beacon_chain/req/metadata/3/ssz_snappy but it has not been declared
[cl-1-lodestar-geth]     at ReqRespBeaconNode.sendRequest (file:///usr/app/packages/reqresp/lib/ReqResp.js:104:23)
```

The error is because the client is attempting to send Metadatav3
requests. There's a check in the sender that prevents sending messages
to versions that the client hasn't registered, and Metadatav3 is not
registered until the fork.

[The Fulu
spec](https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#getmetadata-v3)
says to follow the same semantics as Altair, which allows registering
the new metadata endpoint before the fork.

I think that's a good idea anyway, to allow clients to prioritize peers
pre-fork based on their expected custody groups.

**Description**

* Updates MetadataV3 to always be registered
* Updates MetadataV2 to unregister at Fulu fork, like MetadataV1 does at
Altair
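
A sketch of the fork-gated registration described above; the gating function and fork handling are assumptions for illustration, while the protocol id strings follow the form seen in the error log:

```ts
type ProtocolId = string;
type ForkName = "phase0" | "altair" | "fulu";

function getRegisteredMetadataProtocols(currentFork: ForkName): ProtocolId[] {
  const protocols: ProtocolId[] = [
    // MetadataV3 is always registered so peers can query custody info before the fork.
    "/eth2/beacon_chain/req/metadata/3/ssz_snappy",
  ];
  // MetadataV2 is unregistered at the Fulu fork, mirroring how V1 is dropped at Altair.
  if (currentFork !== "fulu") {
    protocols.push("/eth2/beacon_chain/req/metadata/2/ssz_snappy");
  }
  if (currentFork === "phase0") {
    protocols.push("/eth2/beacon_chain/req/metadata/1/ssz_snappy");
  }
  return protocols;
}

console.log(getRegisteredMetadataProtocols("altair")); // v3 and v2 available pre-Fulu
```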
**Motivation**

Add support for [distributed blob
publishing](https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#distributed-blob-publishing-using-blobs-retrieved-from-local-execution-layer-client)
(When a block or first data column for a block is received, fetch blobs
from EL to reconstruct columns, then publish columns to the network).

Depends on #7675 -- the last commit contains changes from this branch.

Fixes #7638

**Description**

* Adds a ColumnReconstructor to Sync that takes a chain and a network
* Adds new chain events for dataColumnGossip and blockGossip
* When either event is fired and it's the first time a block root is
seen, call engine_getBlobsV2 to fetch all blobs/cell proofs in the
block.
* If received, add them to `seenGossipBlockInput`.

Still TODO: Figure out metrics tracking for this.

---------

Co-authored-by: matthewkeil <[email protected]>
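
A high-level sketch of the first-seen reconstruction flow; every name here (`seenBlockRoots`, `getBlobsV2`, `computeDataColumnSidecars`, `publishColumns`) is a stand-in for the actual chain/network APIs:

```ts
type RootHex = string;

interface ReconstructorDeps {
  getBlobsV2(versionedHashes: string[]): Promise<unknown[] | null>;
  computeDataColumnSidecars(blobsAndProofs: unknown[]): unknown[];
  publishColumns(columns: unknown[]): Promise<void>;
}

const seenBlockRoots = new Set<RootHex>();

// Called from both the blockGossip and dataColumnGossip chain events.
async function onBlockOrColumnGossip(
  blockRoot: RootHex,
  versionedHashes: string[],
  deps: ReconstructorDeps
): Promise<void> {
  // Only attempt reconstruction the first time this block root is seen.
  if (seenBlockRoots.has(blockRoot)) return;
  seenBlockRoots.add(blockRoot);

  const blobsAndProofs = await deps.getBlobsV2(versionedHashes);
  if (blobsAndProofs === null) return; // EL does not (yet) have all the blobs

  // Reconstruct all columns from the blobs/cell proofs and publish on subscribed topics.
  const columns = deps.computeDataColumnSidecars(blobsAndProofs);
  await deps.publishColumns(columns);
}
```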
**Motivation**

**Description**

Configure a CI workflow to automatically publish npm packages and Docker
images on push events to any nextfork branches, including the currently
active `peerDAS` branch.

Closes #issue_number

**Motivation**

- we're using noble on peerDAS and need to merge unstable to fix this, but
got a libp2p dial issue there (see #7698), so cherry-pick #7621 to unblock
peerDAS

**Description**

- use the latest `persistent-merkle-tree` and `as-sha256` everywhere

Co-authored-by: Tuyen Nguyen <[email protected]>
**Motivation**

**Description**

Closes #issue_number

---------

Co-authored-by: Derek Guenther <[email protected]>
**Motivation**

Not able to sync due to the following errors:
```
Apr-18 07:57:45.931[sync]          verbose: Batch download error id=Head, startEpoch=1798, status=Downloading, peer=16...YqJeis - Cannot read properties of undefined (reading 'type')
TypeError: Cannot read properties of undefined (reading 'type')
    at matchBlockWithDataColumns (file:///usr/src/lodestar/packages/beacon-node/src/network/reqresp/beaconBlocksMaybeBlobsByRange.ts:330:28)
    at beaconBlocksMaybeBlobsByRange (file:///usr/src/lodestar/packages/beacon-node/src/network/reqresp/beaconBlocksMaybeBlobsByRange.ts:127:20)
    at wrapError (file:///usr/src/lodestar/packages/beacon-node/src/util/wrapError.ts:18:32)
    at SyncChain.sendBatch (file:///usr/src/lodestar/packages/beacon-node/src/sync/range/chain.ts:410:19)
```

```
Apr-18 08:13:42.983[sync]          verbose: Batch download error id=Head, startEpoch=1800, status=Downloading, peer=16...T4qw69 - Unmatched blobSidecars, blocks=0, blobs=96 lastMatchedSlot=-1, pending blobSidecars slots=57602 57602 57602 57602 57602 57602 57602 57602 57608 57608 57608 57608 57608 57608 57608 57608 57609 57609 57609 57609 57609 57609 57609 57609 57610 57610 57610 57610 57610 57610 57610 57610 57612 57612 57612 57612 57612 57612 57612 57612 57615 57615 57615 57615 57615 57615 57615 57615 57616 57616 57616 57616 57616 57616 57616 57616 57617 57617 57617 57617 57617 57617 57617 57617 57620 57620 57620 57620 57620 57620 57620 57620 57625 57625 57625 57625 57625 57625 57625 57625 57627 57627 57627 57627 57627 57627 57627 57627 57631 57631 57631 57631 57631 57631 57631 57631
```

it happens so many times that it prevents my node from syncing
```
grep -e "Batch download error id=Head" -rn beacon-2025-04-18.log | grep "blocks=0" | wc -l
9408
```

**Description**

- handle `partialDownload` containing 0 blocks

---------

Co-authored-by: Tuyen Nguyen <[email protected]>
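
A minimal sketch of the guard being added; the `partialDownload` shape and function name are assumptions, not the actual `matchBlockWithDataColumns` signature:

```ts
interface PartialDownload<Block, Column> {
  blocks: Block[];
  pendingDataColumns: Column[];
}

function matchBlocksWithColumns<Block, Column>(
  partialDownload: PartialDownload<Block, Column> | null,
  newColumns: Column[]
): {blocks: Block[]; columns: Column[]} {
  // If the previous attempt downloaded 0 blocks there is nothing to match against,
  // so return early instead of indexing into an empty array (the "reading 'type'" crash).
  if (partialDownload === null || partialDownload.blocks.length === 0) {
    return {blocks: [], columns: newColumns};
  }
  return {
    blocks: partialDownload.blocks,
    columns: [...partialDownload.pendingDataColumns, ...newColumns],
  };
}
```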
**Motivation**

- it's not a good idea to track peer count per sampling group because
when the node is restarted we get different sampling groups, so the
metric would also track sampling groups from prior runs. It looks like:

<img width="1262" alt="Screenshot 2025-04-18 at 15 42 20"
src="https://github.com/user-attachments/assets/da816d46-d8be-4d55-9222-22c2d6b3935a"
/>


**Description**

- track by group index instead so it's consistently from 0 to 7. If we
want to know the specific groups, they're available in the log; just search
for `requestedColumns`

Co-authored-by: Tuyen Nguyen <[email protected]>
**Motivation**

- fix fetchUnknownBlockRoot

**Description**
- as confirmed by @g11tech : "probably a debugging artifact, should be
restored"

Co-authored-by: Tuyen Nguyen <[email protected]>
**Motivation**

implement the same pruning logic used for blob sidecars for data column
sidecars

**Description**

update archiveBlocks to delete data column sidecar data that the node has
stored longer than the minimum required epochs

add config field MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS
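
A sketch of the pruning bound this implies; the constant value and slot math below are assumptions mirroring the blob sidecar logic:

```ts
const SLOTS_PER_EPOCH = 32;
const MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS = 4096; // assumed, matches blob sidecars

// Data column sidecars at slots below this bound no longer need to be served
// and can be deleted when blocks are archived to the finalized DB.
function dataColumnSidecarsPruneBoundSlot(currentEpoch: number): number {
  const minEpoch = Math.max(0, currentEpoch - MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS);
  return minEpoch * SLOTS_PER_EPOCH;
}

console.log(dataColumnSidecarsPruneBoundSlot(5000)); // everything below slot 28928 is prunable
```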
**Motivation**

We're currently fetching blobs from the EL as soon as the first
block/data column is received, per spec. However, we also should fetch
blobs in `unavailableBeaconBlobsByRoot`, like we do in deneb/electra
(this gets called about 2.5s into the slot). I added that in this PR, as
well as a test.

Note that getBlobsV2 will be called prior to every peer reqresp
attempted -- as far as I can tell, this is the same behavior
as with getBlobsV1.

**Description**

* Made `reconstructColumns` a generalized function:
`getDataColumnsFromExecution`
* Added a test that mocks blobs returned from the EL
**Motivation**

The MetadataController currently sets CGC on the ENR in two cases:

* When the node is first started after the Fulu epoch
* Any time `setAdvertisedGroupCount` is called

We should at least ensure CGC is set when crossing the Fulu fork
boundary so that nodes are aware of our CGC (for example, if we're set
as a supernode, nodes would currently assume we only custody 4 columns).

Additionally, I think we should have CGC available prior to the fork
boundary in case nodes want to use it to prioritize peers before the
fork. We already do this for Metadata by making the MetadataV3 endpoint
available prior to the fork.

**Description**

* Removes the condition around CGC in the upstreamValues function so
that CGC is sent to ENR regardless of the epoch at node start.
* Adds a test to make sure CGC is always set by upstreamValues
**Motivation**

The data column sidecar gossip validation didn't match the spec, so
updated it to cover all cases:
https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#data_column_sidecar_subnet_id

**Description**

* Also renamed `index` to `subnet` when used for gossip topics, to
distinguish between that and column indexes.

---------

Co-authored-by: matthewkeil <[email protected]>
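
The core of that check is the column-index-to-subnet mapping; a minimal sketch, with the subnet count taken from the value used on this branch (128):

```ts
const DATA_COLUMN_SIDECAR_SUBNET_COUNT = 128;

function computeSubnetForDataColumnSidecar(columnIndex: number): number {
  return columnIndex % DATA_COLUMN_SIDECAR_SUBNET_COUNT;
}

// Gossip validation rejects a sidecar whose column index does not map to the
// subnet of the data_column_sidecar_{subnet_id} topic it arrived on.
function isCorrectSubnet(columnIndex: number, topicSubnet: number): boolean {
  return computeSubnetForDataColumnSidecar(columnIndex) === topicSubnet;
}
```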
**Motivation**

When the CGC was changed in MetadataController, `onSetValue` was being
called with the previous CGC rather than the new one.

**Description**

* Breaks out CGC serialization into a util and adds tests
* Adds test for updating metadatacontroller CGC
* Updates MetadataController CGC setter to call onSetValue with new CGC
instead of existing value

---------

Co-authored-by: matthewkeil <[email protected]>
**Motivation**

Updates the minimum epoch check in `dataColumnSidecarsByRoot` to match
the spec.

Co-authored-by: matthewkeil <[email protected]>
**Motivation**

Uses cell proofs from EL `getPayloadV5` to offload cell proof
construction

Refer to [EIP-7594
spec](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7594.md#networking)
for more on requiring blob transaction senders to compute cell proofs.

[Pending spec
changes](https://github.com/ethereum/execution-apis/blob/cad4194e3fa37359a1be95e4aad2752d69691077/src/engine/osaka.md#engine_getpayloadv5)
define updates for `getPayloadV5` in the execution engine API

**Description**

* add engine_getPayloadV5 to engine API
* avoid computing cell proofs in computeDataColumnSidecars
* rename CELLS_PER_BLOB to CELLS_PER_EXT_BLOB to match spec
* update sszTypes for Fulu BlockContents, SignedBlockContents

Closes #7669

**Other notes**

* implement blobsBundle validation

I've added a validation function for validating `BlobsBundleV2` by
computing cells and batch verifying the cell proofs, but don't currently
call this function. We could validate this data on receiving responses
from the EL or when producing the block body, but we might consider data
from the EL trustworthy and skip costly verification.

---------

Co-authored-by: Matthew Keil <[email protected]>
Co-authored-by: matthewkeil <[email protected]>
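
A sketch of what that currently unused `BlobsBundleV2` validation could look like; the `Kzg` interface here is a stand-in, not the actual c-kzg binding signatures:

```ts
// All names and signatures are illustrative assumptions.
interface Kzg {
  computeCells(blob: Uint8Array): Uint8Array[];
  verifyCellKzgProofBatch(
    commitments: Uint8Array[],
    cellIndices: number[],
    cells: Uint8Array[],
    proofs: Uint8Array[]
  ): boolean;
}

interface BlobsBundleV2 {
  commitments: Uint8Array[];
  blobs: Uint8Array[];
  proofs: Uint8Array[]; // CELLS_PER_EXT_BLOB proofs per blob, flattened
}

const CELLS_PER_EXT_BLOB = 128;

function validateBlobsBundleV2(kzg: Kzg, bundle: BlobsBundleV2): boolean {
  const commitments: Uint8Array[] = [];
  const cellIndices: number[] = [];
  const cells: Uint8Array[] = [];

  for (let i = 0; i < bundle.blobs.length; i++) {
    // Recompute the extended-blob cells locally...
    const blobCells = kzg.computeCells(bundle.blobs[i]);
    for (let j = 0; j < CELLS_PER_EXT_BLOB; j++) {
      commitments.push(bundle.commitments[i]);
      cellIndices.push(j);
      cells.push(blobCells[j]);
    }
  }
  // ...and batch-verify every cell proof supplied by the EL against them.
  return kzg.verifyCellKzgProofBatch(commitments, cellIndices, cells, bundle.proofs);
}
```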
**Motivation**

we should not update the advertised CGC until we have backfilled groups
(backfill is not yet implemented)

**Motivation**

I was updating the beacon API to include custody group count when I
noticed that the metadata field should be named `custody_group_count`,
not `cgc` (unlike the ENR).

* https://github.com/ethereum/consensus-specs/blob/dev/specs/fulu/p2p-interface.md#metadata
* https://github.com/ethereum/beacon-APIs/blob/2b1d7b5ac4756881bd29e7adacc9b7032343d981/types/p2p.yaml#L39

**Description**

* Updated `/eth/v1/node/identity` to return Fulu metadata
* Renamed `cgc` to `custodyGroupCount` in Metadata

---------

Co-authored-by: Nico Flaig <[email protected]>
**Motivation**

phase0 spec states that `seq_number` should be incremented by 1 whenever
any other field in metadata changes:
https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#metadata

**Description**

increment seq_number when the metadata cgc field changes
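
A minimal sketch of the rule; the metadata shape follows the spec fields, while the setter itself is illustrative:

```ts
interface Metadata {
  seqNumber: bigint;
  attnets: boolean[];
  syncnets: boolean[];
  custodyGroupCount: number;
}

function setCustodyGroupCount(metadata: Metadata, custodyGroupCount: number): void {
  if (metadata.custodyGroupCount === custodyGroupCount) return;
  metadata.custodyGroupCount = custodyGroupCount;
  // Any change to another metadata field bumps seq_number by exactly 1.
  metadata.seqNumber += BigInt(1);
}
```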
…on (#7733)

**Motivation**

fix validator custody computation for supernodes

**Description**

if a node has set NODE_CUSTODY_REQUIREMENT to a higher value than
validator custody requires, then the node should use
NODE_CUSTODY_REQUIREMENT to determine the number of groups to custody.
For example, if a node is running as a supernode, it should custody
all groups even if its validator balances don't require it.

compute the validator custody requirement as the max of
NODE_CUSTODY_REQUIREMENT and the requirement computed from validator
balances (sketched below)

**Steps to test or reproduce**

run added unit test in
`packages/beacon-node/test/unit/util/dataColumn.test.ts`
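
As sketched: the constants and the balance-based formula below are assumptions modeled on the validator-custody spec, not the exact Lodestar code:

```ts
const NODE_CUSTODY_REQUIREMENT = 4;          // assumed node default
const NUMBER_OF_CUSTODY_GROUPS = 128;
const BALANCE_PER_ADDITIONAL_CUSTODY_GROUP = 32_000_000_000; // 32 ETH in gwei (assumed)

function getValidatorsCustodyRequirement(totalValidatorBalanceGwei: number): number {
  const fromBalance = Math.floor(totalValidatorBalanceGwei / BALANCE_PER_ADDITIONAL_CUSTODY_GROUP);
  return Math.min(fromBalance, NUMBER_OF_CUSTODY_GROUPS);
}

function getCustodyGroupCount(
  totalValidatorBalanceGwei: number,
  nodeCustodyRequirement = NODE_CUSTODY_REQUIREMENT
): number {
  // Taking the max keeps a supernode (nodeCustodyRequirement = NUMBER_OF_CUSTODY_GROUPS)
  // at full custody even when its validator balances would require fewer groups.
  return Math.max(nodeCustodyRequirement, getValidatorsCustodyRequirement(totalValidatorBalanceGwei));
}

console.log(getCustodyGroupCount(64_000_000_000));      // 4: node requirement dominates
console.log(getCustodyGroupCount(64_000_000_000, 128)); // 128: supernode custodies all groups
```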
**Motivation**

- right now we cannot sync hoodi using `peerDAS`; this PR fixes it

**Description**

- in `archiveBlocks`, if data is outside of
`MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS`, there is no need to archive blobs/data
columns
Closes #7754

<img width="1677" alt="Screenshot 2025-04-29 at 09 19 09"
src="https://github.com/user-attachments/assets/93b8cb5c-f8ca-42ec-99c2-c81aae3e3004"
/>

---------

Co-authored-by: Tuyen Nguyen <[email protected]>