Context
The expression witness inference util wit_infer_by_expr is used in both opcode and table proofs and takes up quite a lot of time. For example, with the Fibonacci e2e command
cargo run --release --package ceno_zkvm --bin e2e -- --profiling=3 --max-steps=1048576 --platform=sp1 ceno_zkvm/examples/fibonacci.elf
and taking the "ADD" opcode as an example, witness inference occupies around 50% of the time.
A preliminary review shows the overhead comes from creating many intermediate vectors instead of reusing and mutating a single one. Besides, the expressions processed here are only of degree 1, so we can exploit this structure and apply an optimisation via a new util function wit_infer_by_expr_degree_1.
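To make the overhead concrete, here is a minimal sketch of a naive recursive evaluator over a toy expression type. The Expr enum and wit_infer_naive below are hypothetical illustrations, not the actual ceno_zkvm Expression API; the point is that every interior node collects into a fresh vector.

```rust
// Toy expression type: Witness indexes a column, ScaledSum is x * a + b.
enum Expr {
    Witness(usize),
    Constant(u64),
    Add(Box<Expr>, Box<Expr>),
    ScaledSum(Box<Expr>, u64, u64),
}

// Naive evaluation: every Add / ScaledSum node collects into a fresh Vec,
// so an expression with k nodes allocates O(k) vectors of length n.
fn wit_infer_naive(expr: &Expr, wits: &[Vec<u64>]) -> Vec<u64> {
    match expr {
        Expr::Witness(i) => wits[*i].clone(),
        Expr::Constant(c) => vec![*c; wits[0].len()],
        Expr::Add(l, r) => {
            let (vl, vr) = (wit_infer_naive(l, wits), wit_infer_naive(r, wits));
            vl.iter().zip(&vr).map(|(x, y)| x + y).collect() // new allocation
        }
        Expr::ScaledSum(x, a, b) => wit_infer_naive(x, wits)
            .iter()
            .map(|v| v * a + b)
            .collect(), // another new allocation
    }
}
```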
Idea of optimisation

Allocate just one mutable vector and mutate it in place during witness inference.
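Using the same toy types as above, here is a sketch of what a wit_infer_by_expr_degree_1-style evaluator could look like. This is only an assumed shape for the proposed util, not its actual implementation: because a degree-1 expression never multiplies two witness columns together, the whole tree can be folded into a single accumulator that is allocated once and mutated in place.

```rust
// Uses the toy Expr enum (and wit_infer_naive) from the sketch above.

// Degree-1 evaluation: allocate ONE vector and fold the whole tree into it.
fn wit_infer_by_expr_degree_1(expr: &Expr, wits: &[Vec<u64>]) -> Vec<u64> {
    let mut acc = vec![0u64; wits[0].len()]; // the single allocation
    accumulate(expr, wits, 1, &mut acc);
    acc
}

// Recursively add `scale * expr` into `acc`, with no temporaries.
// Valid only because no node multiplies two witness columns together.
fn accumulate(expr: &Expr, wits: &[Vec<u64>], scale: u64, acc: &mut [u64]) {
    match expr {
        Expr::Witness(i) => {
            for (a, w) in acc.iter_mut().zip(&wits[*i]) {
                *a += scale * w;
            }
        }
        Expr::Constant(c) => acc.iter_mut().for_each(|a| *a += scale * c),
        Expr::Add(l, r) => {
            accumulate(l, wits, scale, acc);
            accumulate(r, wits, scale, acc);
        }
        // scale * (x * a + b) = (scale * a) * x + scale * b
        Expr::ScaledSum(x, a, b) => {
            accumulate(x, wits, scale * a, acc);
            acc.iter_mut().for_each(|v| *v += scale * b);
        }
    }
}

fn main() {
    // Two witness columns of length 4; expr = (w0 * 3 + 5) + w1 has degree 1.
    let wits = vec![vec![1, 2, 3, 4], vec![10, 20, 30, 40]];
    let expr = Expr::Add(
        Box::new(Expr::ScaledSum(Box::new(Expr::Witness(0)), 3, 5)),
        Box::new(Expr::Witness(1)),
    );
    // Both evaluators agree; the degree-1 one performs a single allocation.
    assert_eq!(
        wit_infer_by_expr_degree_1(&expr, &wits),
        wit_infer_naive(&expr, &wits)
    );
}
```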
The (simulated) gain of the idea
In this commit:

master...hero78119:ceno:feat/wit_infer_opt

and the benchmark result:
So this optimisation shows great potential, bringing up to a 15% reduction in e2e latency.