
Commit 7a85172

update TODO.md and fix test_packed_pcs
1 parent 681e6fd commit 7a85172

2 files changed: +145 -155 lines

TODO.md

Lines changed: 4 additions & 14 deletions
@@ -4,33 +4,27 @@

 - WHIR univariate skip?
 - Opti recursion bytecode
-- inverse folding ordering in WHIR to enable Packing during sumcheck (more generally, TODO packing everywhere)
+- packing (SIMD) everywhere
 - one can "move out" the variable of the eq(.) polynomials out of the sumcheck computation in WHIR (as done in the PIOP)
 - Structured AIR: often no all the columns use both up/down -> only handle the used ones to speed up the PIOP zerocheck
-- avoid transpositions (poseidon trace generation)
 - Use Univariate Skip to commit to tables with k.2^n rows (k small)
-- avoid field embedding in the initial sumcheck of logup*, when table / values are in base field
 - opti logup* GKR when the indexes are not a power of 2 (which is the case in the execution table)
 - incremental merkle paths in whir-p3
-- Experiment to increase degree, and reduce commitments, in Poseidon arithmetization.
-Result: degree 9 is better than 3. TODO: degree 5, or 6 ? Also, the current degre 9 implem may not be perfectly optimal?
 - Avoid embedding overhead on the flag, len, and index columns in the AIR table for dot products
-- Batched logup*: when computing the eq() factor we can opti if the points contain boolean factor
 - Lev's trick to skip some low-level modular reduction
-- Sumcheck, case z = 0, no need to fold, only keep first half of the values (done in PR 33 by Lambda) (and also in WHIR?)
+- Sumcheck, case z = 0, no need to fold, only keep first half of the values (done in PR 33 by Lambda)
 - Custom AVX2 / AVX512 / Neon implem in Plonky3 for all of the finite field operations (done for degree 4 extension, but not degree 5)
 - Many times, we evaluate different multilinear polynomials (different columns of the same table etc) at a common point. OPTI = compute the eq(.) once, and then dot_product with everything
 - To commit to multiple AIR table using 1 single pcs, the most general form our "packed pcs" api should accept is:
 a list of n (n not a power of 2) columns, each ending with m repeated values (in this manner we can reduce proof size when they are a lot of columns (poseidons ...))
 - in the runner of leanISA program, if we call 2 times the same function with the same arguments, we can reuse the same memory frame
 - the interpreter of leanISA (+ witness generation) can be partially parallelized when there are some independent loops
-- (1 - x).r1 + x.r2 = x.(r2 - r1) + r1 TODO this opti is not everywhere currently + TODO generalize this with the univaraite skip
+- (1 - x).r1 + x.r2 = x.(r2 - r1) + r1 TODO this opti is not everywhere currently + TODO generalize this with the univariate skip
 - opti compute_eval_eq when scalar = ONE
 - Dmitry's range check, bonus: we can spare 2 memory cells if the value being range check is small (using the zeros present by conventio on the public memory)
 - Make everything "padding aware" (including WHIR, logup*, AIR, etc)
 - Opti WHIR: in sumcheck we know more than f(0) + f(1), we know f(0) and f(1)
 - Opti WHIR https://github.com/tcoratger/whir-p3/issues/303 and https://github.com/tcoratger/whir-p3/issues/306
-- Avoid committing to extra columns / adding some constraints in poseidon16 AIR, for "compression", and use instead sumcheck
 - Avoid committing to the 3 index columns, and replace it by a sumcheck? Using this idea, we would only commit to PC and FP for the execution table. Idea by Georg (Powdr). Do we even need to commit to FP then?

 About "the packed pcs" (similar to SP1 Jagged PCS, slightly less efficient, but simpler (no sumchecks)):
@@ -93,11 +87,7 @@ But we reduce proof size a lot using instead (TODO):

 # Random ideas

-- About range checks, that can currently be done in 3 cycles (see 2.5.3 of the zkVM pdf), in the instruction encoding of DEREF, if we replaced (1 - AUX) by a dedicated column,
-we could allow DEREFS that 'does not do anything with the resulting value', which is exactly what we want for range check: we only want to ensure that m[m[fp + x]] (resp m[(t-1) - m[fp + x]])
-is a valid memory access (i.e. the index is < M the memory size), but currently the DEREF instruction forces us to 'store' the result, in m[fp + i] (resp m[fp + k]).
-TLDR: adding a new encoding field for DEREF would save 2 memory cells / range check. If this can also increase perf in alternative scenario (other instructions for instance),
-potentially we should consider it.
+- About range checks, that can currently be done in 3 cycles (see 2.5.3 of the zkVM pdf) + 3 memory cells used. For small ranges we can save 2 memory cells.

 ## Known leanISA compiler bugs:
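One item in the TODO list above relies on the identity (1 - x).r1 + x.r2 = r1 + x.(r2 - r1), which replaces two multiplications per folded pair with one. Below is a minimal sketch of that rewrite over a toy prime field; the modulus, the helper functions, and the adjacent-pair convention are illustrative assumptions, not the repository's field types or API.

// Toy 31-bit prime field, a stand-in for the real field types.
const P: u64 = 2_013_265_921;

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }

// Fold adjacent pairs (r1, r2) of an evaluation table at challenge x using
// r1 + x * (r2 - r1) instead of (1 - x) * r1 + x * r2: one multiplication per pair.
fn fold_in_place(values: &mut Vec<u64>, x: u64) {
    let half = values.len() / 2;
    for i in 0..half {
        let (r1, r2) = (values[2 * i], values[2 * i + 1]);
        values[i] = add(r1, mul(x, sub(r2, r1)));
    }
    values.truncate(half);
}

fn main() {
    let mut v = vec![1, 2, 3, 4];
    fold_in_place(&mut v, 5);
    // (1 - 5) * 1 + 5 * 2 = 6 and (1 - 5) * 3 + 5 * 4 = 8 (mod P)
    assert_eq!(v, vec![6, 8]);
}

The same one-multiplication form applies wherever a fold interpolates two table entries at a random challenge, which is what the TODO item proposes to apply everywhere and to generalize to the univariate skip.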

crates/packed_pcs/src/lib.rs

Lines changed: 141 additions & 141 deletions
@@ -555,144 +555,144 @@ fn compute_multilinear_value_from_chunks<F: Field, EF: ExtensionField<F>>(
     eval
 }

-// #[cfg(test)]
-// mod tests {
-    // use p3_field::{PrimeCharacteristicRing};
-    // use p3_koala_bear::{KoalaBear, QuinticExtensionFieldKB};
-    // use p3_util::log2_strict_usize;
-    // use rand::{Rng, SeedableRng, rngs::StdRng};
-    // use utils::{build_prover_state, build_verifier_state};
-
-    // use super::*;
-
-    // type F = KoalaBear;
-    // type EF = QuinticExtensionFieldKB;
-
-    // #[test]
-    // fn test_packed_pcs() {
-        // let whir_config_builder = WhirConfigBuilder {
-            // folding_factor: FoldingFactor::new(4, 4),
-            // soundness_type: SecurityAssumption::CapacityBound,
-            // pow_bits: 13,
-            // max_num_variables_to_send_coeffs: 6,
-            // rs_domain_initial_reduction_factor: 1,
-            // security_level: 75,
-            // starting_log_inv_rate: 1,
-        // };
-
-        // let mut rng = StdRng::seed_from_u64(0);
-        // let log_smallest_decomposition_chunk = 3;
-        // let committed_length_lengths_and_default_value_and_log_public_data: [(
-            // usize,
-            // F,
-            // Option<usize>,
-        // ); _] = [
-            // (16, F::from_usize(8), Some(4)),
-            // (854, F::from_usize(0), Some(7)),
-            // (854, F::from_usize(1), Some(5)),
-            // (16, F::from_usize(0), Some(3)),
-            // (17, F::from_usize(0), Some(4)),
-            // (95, F::from_usize(3), Some(4)),
-            // (17, F::from_usize(0), None),
-            // (95, F::from_usize(3), None),
-            // (256, F::from_usize(8), None),
-            // (1088, F::from_usize(9), None),
-            // (512, F::from_usize(0), None),
-            // (256, F::from_usize(8), Some(3)),
-            // (1088, F::from_usize(9), Some(4)),
-            // (512, F::from_usize(0), Some(5)),
-            // (754, F::from_usize(4), Some(4)),
-            // (1023, F::from_usize(7), Some(4)),
-            // (2025, F::from_usize(11), Some(8)),
-            // (16, F::from_usize(8), None),
-            // (854, F::from_usize(0), None),
-            // (854, F::from_usize(1), None),
-            // (16, F::from_usize(0), None),
-            // (754, F::from_usize(4), None),
-            // (1023, F::from_usize(7), None),
-            // (2025, F::from_usize(15), None),
-        // ];
-        // let mut public_data = BTreeMap::new();
-        // let mut polynomials = Vec::new();
-        // let mut dims = Vec::new();
-        // let mut statements_per_polynomial = Vec::new();
-        // for (pol_index, &(committed_length, default_value, log_public_data)) in
-            // committed_length_lengths_and_default_value_and_log_public_data
-                // .iter()
-                // .enumerate()
-        // {
-            // let mut poly = (0..committed_length + log_public_data.map_or(0, |l| 1 << l))
-                // .map(|_| rng.random())
-                // .collect::<Vec<F>>();
-            // poly.resize(poly.len().next_power_of_two(), default_value);
-            // if let Some(log_public) = log_public_data {
-                // public_data.insert(pol_index, poly[..1 << log_public].to_vec());
-            // }
-            // let n_vars = log2_strict_usize(poly.len());
-            // let n_points = rng.random_range(1..5);
-            // let mut statements = Vec::new();
-            // for _ in 0..n_points {
-                // let point =
-                    // MultilinearPoint((0..n_vars).map(|_| rng.random()).collect::<Vec<EF>>());
-                // let value = poly.evaluate(&point);
-                // statements.push(Evaluation { point, value });
-            // }
-            // polynomials.push(poly);
-            // dims.push(ColDims {
-                // n_vars,
-                // log_public_data_size: log_public_data,
-                // committed_size: committed_length,
-                // default_value,
-            // });
-            // statements_per_polynomial.push(statements);
-        // }
-
-        // let mut prover_state = build_prover_state();
-        // precompute_dft_twiddles::<F>(1 << 24);
-
-        // let polynomials_ref = polynomials.iter().map(|p| p.as_slice()).collect::<Vec<_>>();
-        // let witness = packed_pcs_commit(
-            // &whir_config_builder,
-            // &polynomials_ref,
-            // &dims,
-            // &mut prover_state,
-            // log_smallest_decomposition_chunk,
-        // );
-
-        // let packed_statements = packed_pcs_global_statements_for_prover(
-            // &polynomials_ref,
-            // &dims,
-            // log_smallest_decomposition_chunk,
-            // &statements_per_polynomial,
-            // &mut prover_state,
-        // );
-        // let num_variables = witness.packed_polynomial.by_ref().n_vars();
-        // WhirConfig::new(whir_config_builder.clone(), num_variables).prove(
-            // &mut prover_state,
-            // packed_statements,
-            // witness.inner_witness,
-            // &witness.packed_polynomial.by_ref(),
-        // );
-
-        // let mut verifier_state = build_verifier_state(&prover_state);
-
-        // let parsed_commitment = packed_pcs_parse_commitment(
-            // &whir_config_builder,
-            // &mut verifier_state,
-            // &dims,
-            // log_smallest_decomposition_chunk,
-        // )
-        // .unwrap();
-        // let packed_statements = packed_pcs_global_statements_for_verifier(
-            // &dims,
-            // log_smallest_decomposition_chunk,
-            // &statements_per_polynomial,
-            // &mut verifier_state,
-            // &public_data,
-        // )
-        // .unwrap();
-        // WhirConfig::new(whir_config_builder, num_variables)
-            // .verify(&mut verifier_state, &parsed_commitment, packed_statements)
-            // .unwrap();
-    // }
-// }
+#[cfg(test)]
+mod tests {
+    use p3_field::PrimeCharacteristicRing;
+    use p3_koala_bear::{KoalaBear, QuinticExtensionFieldKB};
+    use p3_util::log2_strict_usize;
+    use rand::{Rng, SeedableRng, rngs::StdRng};
+    use utils::{build_prover_state, build_verifier_state};
+
+    use super::*;
+
+    type F = KoalaBear;
+    type EF = QuinticExtensionFieldKB;
+
+    #[test]
+    fn test_packed_pcs() {
+        let whir_config_builder = WhirConfigBuilder {
+            folding_factor: FoldingFactor::new(4, 4),
+            soundness_type: SecurityAssumption::CapacityBound,
+            pow_bits: 13,
+            max_num_variables_to_send_coeffs: 6,
+            rs_domain_initial_reduction_factor: 1,
+            security_level: 75,
+            starting_log_inv_rate: 1,
+        };
+
+        let mut rng = StdRng::seed_from_u64(0);
+        let log_smallest_decomposition_chunk = 4;
+        let committed_length_lengths_and_default_value_and_log_public_data: [(
+            usize,
+            F,
+            Option<usize>,
+        ); _] = [
+            (916, F::from_usize(8), Some(5)),
+            (854, F::from_usize(0), Some(7)),
+            (854, F::from_usize(1), Some(5)),
+            (16, F::from_usize(0), Some(5)),
+            (1127, F::from_usize(0), Some(6)),
+            (595, F::from_usize(3), Some(6)),
+            (17, F::from_usize(0), None),
+            (95, F::from_usize(3), None),
+            (256, F::from_usize(8), None),
+            (1088, F::from_usize(9), None),
+            (512, F::from_usize(0), None),
+            (256, F::from_usize(8), Some(6)),
+            (1088, F::from_usize(9), Some(5)),
+            (512, F::from_usize(0), Some(5)),
+            (754, F::from_usize(4), Some(5)),
+            (1023, F::from_usize(7), Some(5)),
+            (2025, F::from_usize(11), Some(8)),
+            (16, F::from_usize(8), None),
+            (854, F::from_usize(0), None),
+            (854, F::from_usize(1), None),
+            (16, F::from_usize(0), None),
+            (754, F::from_usize(4), None),
+            (1023, F::from_usize(7), None),
+            (2025, F::from_usize(15), None),
+        ];
+        let mut public_data = BTreeMap::new();
+        let mut polynomials = Vec::new();
+        let mut dims = Vec::new();
+        let mut statements_per_polynomial = Vec::new();
+        for (pol_index, &(committed_length, default_value, log_public_data)) in
+            committed_length_lengths_and_default_value_and_log_public_data
+                .iter()
+                .enumerate()
+        {
+            let mut poly = (0..committed_length + log_public_data.map_or(0, |l| 1 << l))
+                .map(|_| rng.random())
+                .collect::<Vec<F>>();
+            poly.resize(poly.len().next_power_of_two(), default_value);
+            if let Some(log_public) = log_public_data {
+                public_data.insert(pol_index, poly[..1 << log_public].to_vec());
+            }
+            let n_vars = log2_strict_usize(poly.len());
+            let n_points = rng.random_range(1..5);
+            let mut statements = Vec::new();
+            for _ in 0..n_points {
+                let point =
+                    MultilinearPoint((0..n_vars).map(|_| rng.random()).collect::<Vec<EF>>());
+                let value = poly.evaluate(&point);
+                statements.push(Evaluation { point, value });
+            }
+            polynomials.push(poly);
+            dims.push(ColDims {
+                n_vars,
+                log_public_data_size: log_public_data,
+                committed_size: committed_length,
+                default_value,
+            });
+            statements_per_polynomial.push(statements);
+        }
+
+        let mut prover_state = build_prover_state();
+        precompute_dft_twiddles::<F>(1 << 24);
+
+        let polynomials_ref = polynomials.iter().map(|p| p.as_slice()).collect::<Vec<_>>();
+        let witness = packed_pcs_commit(
+            &whir_config_builder,
+            &polynomials_ref,
+            &dims,
+            &mut prover_state,
+            log_smallest_decomposition_chunk,
+        );
+
+        let packed_statements = packed_pcs_global_statements_for_prover(
+            &polynomials_ref,
+            &dims,
+            log_smallest_decomposition_chunk,
+            &statements_per_polynomial,
+            &mut prover_state,
+        );
+        let num_variables = witness.packed_polynomial.by_ref().n_vars();
+        WhirConfig::new(whir_config_builder.clone(), num_variables).prove(
+            &mut prover_state,
+            packed_statements,
+            witness.inner_witness,
+            &witness.packed_polynomial.by_ref(),
+        );
+
+        let mut verifier_state = build_verifier_state(&prover_state);
+
+        let parsed_commitment = packed_pcs_parse_commitment(
+            &whir_config_builder,
+            &mut verifier_state,
+            &dims,
+            log_smallest_decomposition_chunk,
+        )
+        .unwrap();
+        let packed_statements = packed_pcs_global_statements_for_verifier(
+            &dims,
+            log_smallest_decomposition_chunk,
+            &statements_per_polynomial,
+            &mut verifier_state,
+            &public_data,
+        )
+        .unwrap();
+        WhirConfig::new(whir_config_builder, num_variables)
+            .verify(&mut verifier_state, &parsed_commitment, packed_statements)
+            .unwrap();
+    }
+}
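A side note on the default_value / committed_size fields exercised by the test above: each column is padded with a repeated default value up to a power-of-two length (poly.resize(poly.len().next_power_of_two(), default_value)). When the tail of a column is constant, its multilinear evaluation never has to touch the padding, because the eq(z, .) weights over the full hypercube sum to 1, so eval = default + sum over the committed prefix of (data[i] - default) * eq(z, i). The sketch below checks that identity over a toy prime field; the field, the helper names, and the bit-ordering convention are illustrative assumptions, not the crate's actual compute_multilinear_value_from_chunks implementation.

// Toy 31-bit prime field, a stand-in for the real base / extension fields.
const P: u64 = 2_013_265_921;

fn add(a: u64, b: u64) -> u64 { (a + b) % P }
fn sub(a: u64, b: u64) -> u64 { (a + P - b) % P }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % P as u128) as u64 }

// eq(z, i) = prod_j (z_j if bit_j(i) == 1 else 1 - z_j),
// pairing the last coordinate of z with the least significant bit of i.
fn eq_weight(z: &[u64], mut i: usize) -> u64 {
    let mut w = 1;
    for &zj in z.iter().rev() {
        w = mul(w, if i & 1 == 1 { zj } else { sub(1, zj) });
        i >>= 1;
    }
    w
}

// Evaluate the multilinear extension of `data` padded with `default` up to
// length 2^z.len(), without materializing the padding: the eq(z, .) weights
// over the whole cube sum to 1, so the constant tail contributes exactly `default`.
fn eval_padded(data: &[u64], default: u64, z: &[u64]) -> u64 {
    let mut acc = default;
    for (i, &d) in data.iter().enumerate() {
        acc = add(acc, mul(sub(d, default), eq_weight(z, i)));
    }
    acc
}

fn main() {
    // 3 committed values, padded with default = 7 up to 2^3 = 8 entries.
    let data = [5, 6, 9];
    let default = 7;
    let z = [3, 11, 20]; // arbitrary evaluation point
    // Cross-check against the naive evaluation over the fully padded vector.
    let padded = [5, 6, 9, 7, 7, 7, 7, 7];
    let naive = padded
        .iter()
        .enumerate()
        .fold(0, |acc, (i, &v)| add(acc, mul(v, eq_weight(&z, i))));
    assert_eq!(eval_padded(&data, default, &z), naive);
}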
