Currently the blevm aggregator proves two adjacent blocks. We need to extend this to the range we want to prove from evm-prover. There are a few ways to do this:
iterate
We can simply extend the current aggregator to read a variable number of proofs and match each adjacent header hash. The number of proofs can be the first input to the program and should be at least 2. This is the easiest to implement, but it scales linearly with the number of blocks and is not parallelisable.
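A minimal sketch of the iterative check, under assumed types: `BlockProof` stands in for the public outputs of one blevm block proof (parent header hash and header hash), and `verify` stands in for the recursive proof verification the real aggregator program would perform.

```rust
/// Hypothetical public outputs of one blevm block proof.
struct BlockProof {
    parent_header_hash: [u8; 32],
    header_hash: [u8; 32],
}

/// Stand-in for recursive proof verification inside the aggregator program.
fn verify(_proof: &BlockProof) -> bool {
    true // elided: the zkVM would verify the inner proof here
}

/// Verifies n >= 2 proofs in order and checks that each proof's parent
/// header hash equals the previous proof's header hash. Returns the
/// endpoints of the proven range on success.
fn aggregate_iterative(proofs: &[BlockProof]) -> Option<([u8; 32], [u8; 32])> {
    if proofs.len() < 2 || !proofs.iter().all(verify) {
        return None;
    }
    // Continuity check: one pass over adjacent pairs, hence linear in the
    // number of blocks and inherently sequential.
    for pair in proofs.windows(2) {
        if pair[1].parent_header_hash != pair[0].header_hash {
            return None;
        }
    }
    Some((
        proofs[0].parent_header_hash,
        proofs[proofs.len() - 1].header_hash,
    ))
}
```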
tree
We can turn the range into a binary, ternary, or 4-ary tree where each proof is a leaf node, and leaves are aggregated pairwise (or in larger groups) into intermediate nodes until a single root node remains. This is parallelisable and scales logarithmically with the number of blocks.
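A sketch of the binary-tree variant, assuming a hypothetical `RangeCommitment` that carries the endpoints of a contiguous block range; leaves would be the commitments of verified single-block proofs. Every `combine` within one level is independent, which is where the parallelism comes from.

```rust
/// Hypothetical commitment to a contiguous range of blocks: the parent hash
/// of the first block and the header hash of the last block.
#[derive(Clone)]
struct RangeCommitment {
    start_parent_hash: [u8; 32],
    end_header_hash: [u8; 32],
}

/// Merges two adjacent ranges, enforcing header-hash continuity at the seam.
fn combine(left: &RangeCommitment, right: &RangeCommitment) -> Option<RangeCommitment> {
    (left.end_header_hash == right.start_parent_hash).then(|| RangeCommitment {
        start_parent_hash: left.start_parent_hash,
        end_header_hash: right.end_header_hash,
    })
}

/// Folds each level of the tree into the next until a single root remains.
/// Every `combine` in a level is independent, so a real prover could run
/// them in parallel; the tree has O(log n) levels.
fn aggregate_tree(mut level: Vec<RangeCommitment>) -> Option<RangeCommitment> {
    while level.len() > 1 {
        let mut next = Vec::with_capacity((level.len() + 1) / 2);
        for pair in level.chunks(2) {
            match pair {
                [l, r] => next.push(combine(l, r)?),
                [odd] => next.push(odd.clone()), // odd node is carried up a level
                _ => unreachable!(),
            }
        }
        level = next;
    }
    level.into_iter().next()
}
```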
merge
We can split the range into left and right halves and recursively combine them, the base case being two adjacent blocks. When merging, we can ensure continuity in the header hash. This is parallelisable and scales logarithmically with the number of blocks.
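The same idea in recursive form, reusing the hypothetical `BlockProof`, `verify`, `RangeCommitment`, and `combine` from the sketches above. The two recursive calls are independent, so the halves can be proven in parallel, and the recursion depth is O(log n).

```rust
/// Recursively proves a range by splitting it into halves and merging the
/// sub-results; `combine` checks header-hash continuity at each merge.
/// Reuses the hypothetical types and helpers from the sketches above.
fn aggregate_merge(proofs: &[BlockProof]) -> Option<RangeCommitment> {
    match proofs {
        [] => None,
        // Single proofs bottom out here, so the first actual merge combines
        // two adjacent blocks, matching the base case described above.
        [p] => verify(p).then(|| RangeCommitment {
            start_parent_hash: p.parent_header_hash,
            end_header_hash: p.header_hash,
        }),
        _ => {
            let mid = proofs.len() / 2;
            // The two halves are independent and could be proven in parallel.
            let left = aggregate_merge(&proofs[..mid])?;
            let right = aggregate_merge(&proofs[mid..])?;
            combine(&left, &right)
        }
    }
}
```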
I think it's easiest to start with the iterative approach. If we want to optimize, we can switch to the merge approach later.