Context
Discussion #730 (comment) and commit 269c9fa show that the way a vector is traversed can make a huge difference.
For commit 269c9fa, in the fibonacci benchmark with 2^20 max_step, the e2e latency of this function drops from ~2.52s to ~1.2s, more than 50%. This motivated us to break down the reasoning further.
Analysis
There might be 2 factors that account for the boost:
case 1: memory allocation & copy behaviour before vs. after the change
case 2: cache hit/miss rate
A similar discussion exists in the Plonky2 issues, shared by @kunxian-xia.
The new way of traversal is more CPU cache friendly. It motivated me to review the Ceno implementation in other parts that expose similar patterns, which we can probably polish as well.
"grid-stride loop" design from CUDA: https://developer.nvidia.com/blog/cuda-pro-tip-write-flexible-kernels-grid-stride-loops/
Case 2 motivated experimenting with traversing the vector in a more (CPU) cache friendly pattern, as sketched below.
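For reference, here is a minimal sketch (not Ceno code; `worker_pass`, `tid`, `n_workers` are hypothetical names) of what the CUDA-style grid-stride loop looks like for one CPU worker, mirroring the `for (i = tid; i < n; i += stride)` shape from the NVIDIA blog post:

```rust
/// Minimal sketch (not Ceno code): one CPU worker running a "grid-stride loop".
/// Worker `tid` of `n_workers` visits indices tid, tid + n_workers, tid + 2*n_workers, ...
fn worker_pass(data: &[u64], tid: usize, n_workers: usize) -> u64 {
    debug_assert!(n_workers >= 1 && tid < n_workers);
    let mut acc = 0u64;
    let mut i = tid;
    while i < data.len() {
        acc = acc.wrapping_add(data[i]);
        i += n_workers;
    }
    acc
}
```

On a CPU, striding over single elements wastes most of each cache line, so a practical variant would likely stride over cache-line- or block-sized ranges instead; the sketch in the sumcheck section below does exactly that.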
In summary
In Ceno, there is a playground on the sumcheck uni-variate computation to experiment with the "grid-stride loop": https://github.com/scroll-tech/ceno/blob/master/sumcheck/src/prover_v2.rs#L787-L935.
Verify the idea in sumcheck
The Ceno sumcheck implementation is based on the devirgo style, see #77.
In summary, we break the multivariate polynomial evaluation vector into #n_threads chunks, with each range handled by one thread.
This design minimizes synchronization overhead and maximizes the L1 cache hit rate; it brings a ~+40% performance boost on multi-core.
However, motivated by the "grid-stride loop" design, I reviewed the devirgo sumcheck again and think it might not be the ultimate implementation: separating the vector into #n_threads chunks brings more L2/L3 cache misses, given that a huge vector can't easily fit into cache. We can probably achieve both low synchronization overhead and cache friendliness
in one shot; see the toy contrast below.
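To make the two schedules concrete, here is a toy contrast sketched with std scoped threads. It is not the actual prover_v2.rs code: `fold_chunked`, `fold_grid_stride`, `block`, and `n_threads` are hypothetical names/parameters, and the per-element work is a placeholder for the real uni-variate round computation.

```rust
// Toy contrast; assumes n_threads >= 1, block >= 1 and a non-empty `evals`.

/// devirgo-style: split `evals` into #n_threads contiguous ranges, one per thread.
fn fold_chunked(evals: &[u64], n_threads: usize) -> u64 {
    let chunk = evals.len().div_ceil(n_threads).max(1);
    std::thread::scope(|s| {
        let handles: Vec<_> = evals
            .chunks(chunk)
            .map(|range| {
                s.spawn(move || {
                    range
                        .iter()
                        .map(|e| e.wrapping_mul(3)) // placeholder for the real per-element work
                        .fold(0u64, u64::wrapping_add)
                })
            })
            .collect();
        handles
            .into_iter()
            .map(|h| h.join().unwrap())
            .fold(0u64, u64::wrapping_add)
    })
}

/// "grid-stride" over blocks: cut `evals` into many small blocks (a few cache lines
/// each); thread `tid` handles blocks tid, tid + n_threads, tid + 2*n_threads, ...
/// so at any moment all threads work inside a small, nearby window of the vector.
fn fold_grid_stride(evals: &[u64], n_threads: usize, block: usize) -> u64 {
    std::thread::scope(|s| {
        let handles: Vec<_> = (0..n_threads)
            .map(|tid| {
                s.spawn(move || {
                    evals
                        .chunks(block)
                        .skip(tid)
                        .step_by(n_threads)
                        .flat_map(|b| b.iter())
                        .map(|e| e.wrapping_mul(3)) // placeholder for the real per-element work
                        .fold(0u64, u64::wrapping_add)
                })
            })
            .collect();
        handles
            .into_iter()
            .map(|h| h.join().unwrap())
            .fold(0u64, u64::wrapping_add)
    })
}
```

The grid-stride variant keeps each visited block contiguous (friendly to L1/cache lines) while all threads stay inside a nearby window of the vector at any given time (friendly to the shared L2/L3), which is the property the devirgo-style contiguous split gives up on huge vectors.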
Notes on benchmark
We can quickly modify https://github.com/scroll-tech/ceno/blob/master/sumcheck/src/prover_v2.rs#L787-L935 to the "grid-stride loop" style, run the benchmark at https://github.com/scroll-tech/ceno/blob/master/sumcheck/benches/devirgo_sumcheck.rs, and compare the "grid-stride loop" uni-variate computation (built on top of the original naive version) against the current devirgo-style implementation.
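A hedged sketch of how such a comparison could be wired up with criterion (assuming the existing bench harness is criterion-based, which I have not verified here). `fold_chunked` and `fold_grid_stride` are the toy stand-ins sketched above, not the real prover round functions:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_traversal(c: &mut Criterion) {
    let evals: Vec<u64> = (0..(1u64 << 20)).collect(); // 2^20 evaluations, as in the fibonacci benchmark
    let n_threads = 8;
    let mut group = c.benchmark_group("uni_variate_traversal");
    group.bench_function("devirgo_chunked", |b| b.iter(|| fold_chunked(&evals, n_threads)));
    group.bench_function("grid_stride", |b| b.iter(|| fold_grid_stride(&evals, n_threads, 1 << 10)));
    group.finish();
}

criterion_group!(benches, bench_traversal);
criterion_main!(benches);
```

This would live in its own bench target (with `harness = false` in Cargo.toml, the usual criterion setup), alongside the existing devirgo_sumcheck bench.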
Follow-ups
If this method proves worthwhile, we can add "grid-stride loop" support for parallel vector traversal in the rayon crate and upstream it to the rayon library; see the sketch below for one possible API shape.
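Purely as an illustration of what the API surface could look like (hypothetical names, not an existing rayon API), here is a sketch backed by plain scoped threads to keep it self-contained; a real version would integrate with rayon's thread pool and work-stealing machinery:

```rust
/// Hypothetical extension trait: a "grid-stride" parallel traversal over a slice.
/// Not an existing rayon API; backed by std scoped threads only for the sketch.
/// Assumes n_workers >= 1.
pub trait GridStrideSlice<T: Sync> {
    fn grid_stride_for_each<F: Fn(usize, &T) + Sync>(&self, n_workers: usize, f: F);
}

impl<T: Sync> GridStrideSlice<T> for [T] {
    fn grid_stride_for_each<F: Fn(usize, &T) + Sync>(&self, n_workers: usize, f: F) {
        std::thread::scope(|s| {
            for tid in 0..n_workers {
                let f = &f;
                s.spawn(move || {
                    // worker `tid` visits indices tid, tid + n_workers, tid + 2*n_workers, ...
                    for (i, x) in self.iter().enumerate().skip(tid).step_by(n_workers) {
                        f(i, x);
                    }
                });
            }
        });
    }
}
```

Callers would then write something like `evals.grid_stride_for_each(n_threads, |i, e| { ... })`; in this shape, per-thread accumulation (rather than shared mutable state) would still be the caller's responsibility.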