fix(blevm): support multiple proof aggregation #154

Open · wants to merge 4 commits into base: main
2 changes: 1 addition & 1 deletion provers/blevm/README.md
@@ -8,7 +8,7 @@ This workspace contains multiple crates:

- `blevm`: SP1 program that verifies an EVM block was included in a Celestia data square.
- `blevm-mock`: SP1 program that acts as a mock version of `blevm`. It should execute faster than `blevm` because it skips verifying any inputs or outputs.
- `blevm-aggregator`: SP1 program that takes as input the public values from two `blevm` proofs. It verifies the proofs and ensures they are for monotonically increasing EVM blocks.
- `blevm-aggregator`: SP1 program that takes as input the verification keys and public values from multiple `blevm` proofs. It verifies the proofs and ensures they are for monotonically increasing EVM blocks.
- `blevm-prover`: library that exposes a `BlockProver` which can generate proofs. The proofs can either be `blevm` proofs or `blevm-mock` proofs depending on the `elf_bytes` used (a hypothetical selection sketch follows this file's diff).
- `common`: library with common struct definitions
- `script`: binary that generates a blevm proof for an EVM roll-up block that was posted to Celestia mainnet.
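Since the mock/real switch is driven entirely by `elf_bytes`, here is a minimal host-side sketch of that selection. This is an illustration, not code from this PR: the `include_elf!` program names and the `blevm_prover` import path are assumptions.

```rust
use blevm_prover::ProverConfig; // assumed crate path for this workspace
use sp1_sdk::include_elf;

// Hypothetical ELF names; the real ones depend on each program crate's
// package name and build setup.
const BLEVM_ELF: &[u8] = include_elf!("blevm");
const BLEVM_MOCK_ELF: &[u8] = include_elf!("blevm-mock");

// Selecting the ELF selects real vs. mock proving; the rest of the
// BlockProver pipeline is identical in both cases.
fn prover_config(use_mock: bool) -> ProverConfig {
    ProverConfig {
        elf_bytes: if use_mock { BLEVM_MOCK_ELF } else { BLEVM_ELF },
    }
}
```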
84 changes: 49 additions & 35 deletions provers/blevm/blevm-aggregator/src/main.rs
@@ -1,54 +1,68 @@
//! An SP1 program that takes as input public values from two blevm mock proofs. It then verifies
//! those mock proofs. Lastly, it verifies that the second proof is for an EVM block immediately
//! following the EVM block in proof one. It commits to the EVM header hashes from those two blocks.
//! An SP1 program that takes as input N verification keys and N public values from N blevm proofs.
//! It verifies each of those proofs and checks that every proof is for the EVM block immediately
//! following the previous one. It commits to the EVM header hashes from the first and last
//! blocks.
#![no_main]
sp1_zkvm::entrypoint!(main);

mod buffer;
use buffer::Buffer;

use blevm_common::{BlevmAggOutput, BlevmOutput};
use buffer::Buffer;
use sha2::{Digest, Sha256};

// Verification key of blevm-mock (Dec 22 2024)
// 0x001a3232969a5caac2de9a566ceee00641853a058b1ce1004ab4869f75a8dc59

const BLEVM_MOCK_VERIFICATION_KEY: [u32; 8] = [
0x001a3232, 0x969a5caa, 0xc2de9a56, 0x6ceee006, 0x41853a05, 0x8b1ce100, 0x4ab4869f, 0x75a8dc59,
];

pub fn main() {
let public_values1: Vec<u8> = sp1_zkvm::io::read();
let public_values2: Vec<u8> = sp1_zkvm::io::read();
// Read the number of proofs
let n: usize = sp1_zkvm::io::read();

if n < 2 {
panic!("must provide at least 2 proofs");
}

let proof1_values_hash = Sha256::digest(&public_values1);
let proof2_values_hash = Sha256::digest(&public_values2);
// Read all verification keys first
let mut verification_keys: Vec<[u32; 8]> = Vec::with_capacity(n);
for _ in 0..n {
verification_keys.push(sp1_zkvm::io::read());
}

sp1_zkvm::lib::verify::verify_sp1_proof(
&BLEVM_MOCK_VERIFICATION_KEY,
&proof1_values_hash.into(),
);
sp1_zkvm::lib::verify::verify_sp1_proof(
&BLEVM_MOCK_VERIFICATION_KEY,
&proof2_values_hash.into(),
);
// Read all public values
let mut public_values: Vec<Vec<u8>> = Vec::with_capacity(n);
for _ in 0..n {
public_values.push(sp1_zkvm::io::read());
}

let mut buffer1 = Buffer::from(&public_values1);
let mut buffer2 = Buffer::from(&public_values2);
// Verify all proofs using their respective verification keys
for (values, vk) in public_values.iter().zip(verification_keys.iter()) {
let proof_values_hash = Sha256::digest(values);
sp1_zkvm::lib::verify::verify_sp1_proof(vk, &proof_values_hash.into());
}

let output1 = buffer1.read::<BlevmOutput>();
let output2 = buffer2.read::<BlevmOutput>();
// Parse all outputs
let mut outputs: Vec<BlevmOutput> = Vec::with_capacity(n);
for values in &public_values {
let mut buffer = Buffer::from(values);
outputs.push(buffer.read::<BlevmOutput>());
}

if output1.header_hash != output2.prev_header_hash {
panic!("header hash mismatch");
// Verify block sequence
for i in 1..n {
if outputs[i - 1].header_hash != outputs[i].prev_header_hash {
panic!("header hash mismatch at position {}", i);
}
}

// Collect all Celestia header hashes
let celestia_header_hashes: Vec<_> = outputs
.iter()
.map(|output| output.celestia_header_hash)
.collect();

// Create aggregate output using first and last blocks
let agg_output = BlevmAggOutput {
newest_header_hash: output2.header_hash,
oldest_header_hash: output1.header_hash,
celestia_header_hashes: vec![output1.celestia_header_hash, output2.celestia_header_hash],
newest_state_root: output2.state_root,
newest_height: output2.height,
newest_header_hash: outputs[n - 1].header_hash,
oldest_header_hash: outputs[0].header_hash,
celestia_header_hashes,
newest_state_root: outputs[n - 1].state_root,
newest_height: outputs[n - 1].height,
};

sp1_zkvm::io::commit(&agg_output);
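The program above fixes a strict stdin layout: the proof count, then every verification key as a `[u32; 8]`, then every public-values byte vector. On the host side, that `[u32; 8]` corresponds to the verification key hash (`vk.hash_u32()` from sp1-sdk's `HashableKey` trait), not the serialized `SP1VerifyingKey` struct. Below is a minimal sketch of a host that matches this read order; the helper name is hypothetical, and `aggregate_proofs` in the next file does the same thing in context.

```rust
use sp1_sdk::{HashableKey, SP1Stdin, SP1VerifyingKey};

// Hypothetical helper mirroring the guest's read order: count, then all
// vk hashes, then all public-values byte vectors.
fn build_aggregator_stdin(inputs: &[(Vec<u8>, SP1VerifyingKey)]) -> SP1Stdin {
    let mut stdin = SP1Stdin::new();
    stdin.write(&inputs.len());
    for (_, vk) in inputs {
        // The guest reads each key as [u32; 8]; hash_u32() produces that.
        stdin.write(&vk.hash_u32());
    }
    for (values, _) in inputs {
        stdin.write_vec(values.clone());
    }
    stdin
}
```

For the guest's `verify_sp1_proof` calls to succeed, the corresponding compressed proofs must also be passed to the prover via `stdin.write_proof`, as done in `aggregate_proofs` below.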
205 changes: 98 additions & 107 deletions provers/blevm/blevm-prover/src/lib.rs
@@ -1,23 +1,18 @@
mod proofs;

use celestia_rpc::{BlobClient, Client, HeaderClient};
use celestia_types::nmt::NamespacedHash;
use celestia_types::AppVersion;
use celestia_types::Blob;
use celestia_types::{
nmt::{Namespace, NamespaceProof, NamespacedHashExt},
nmt::{Namespace, NamespaceProof},
ExtendedHeader,
};
use core::cmp::max;
use nmt_rs::{
simple_merkle::{db::MemDb, proof::Proof, tree::MerkleTree},
TmSha2Hasher,
};
use rsp_client_executor::io::ClientExecutorInput;
use sp1_sdk::{ExecutionReport, ProverClient, SP1PublicValues, SP1Stdin};
use std::error::Error;
use tendermint_proto::{
v0_37::{types::BlockId as RawBlockId, version::Consensus as RawConsensusVersion},
Protobuf,
use sp1_sdk::{
ExecutionReport, HashableKey, ProverClient, SP1Proof, SP1ProofWithPublicValues, SP1PublicValues,
SP1Stdin, SP1VerifyingKey,
};
use std::error::Error;

/// Configuration for the Celestia client
pub struct CelestiaConfig {
@@ -30,7 +25,25 @@ pub struct ProverConfig {
pub elf_bytes: &'static [u8],
}

/// Configuration for the aggregator
pub struct AggregatorConfig {
pub elf_bytes: &'static [u8],
}

/// Input for proof aggregation
pub struct AggregationInput {
pub proof: SP1ProofWithPublicValues,
pub vk: SP1VerifyingKey,
}

/// Output from proof aggregation
pub struct AggregationOutput {
/// The aggregated proof
pub proof: SP1ProofWithPublicValues,
}

/// Input data for block proving
#[derive(Clone)]
pub struct BlockProverInput {
pub block_height: u64,
pub l2_block_data: Vec<u8>,
@@ -84,106 +97,23 @@ impl CelestiaClient {
}
}

/// generate_header_proofs takes an extended header and creates a Merkle tree from its fields. Then
/// it generates a Merkle proof for the DataHash in that extended header.
pub fn generate_header_proofs(
header: &ExtendedHeader,
) -> Result<(Vec<u8>, Proof<TmSha2Hasher>), Box<dyn Error>> {
let mut header_field_tree: MerkleTree<MemDb<[u8; 32]>, TmSha2Hasher> =
MerkleTree::with_hasher(TmSha2Hasher::new());

let field_bytes = prepare_header_fields(header);

for leaf in field_bytes {
header_field_tree.push_raw_leaf(&leaf);
}

// The data_hash is the leaf at index 6 in the tree.
let (data_hash_bytes, data_hash_proof) = header_field_tree.get_index_with_proof(6);

// Verify the computed root matches the header hash
assert_eq!(header.hash().as_ref(), header_field_tree.root());

Ok((data_hash_bytes, data_hash_proof))
}

/// prepare_header_fields returns a vector with all the fields in a Tendermint header.
/// See https://github.com/cometbft/cometbft/blob/972fa8038b57cc2152cb67144869ccd604526550/spec/core/data_structures.md?plain=1#L130-L143
pub fn prepare_header_fields(header: &ExtendedHeader) -> Vec<Vec<u8>> {
vec![
Protobuf::<RawConsensusVersion>::encode_vec(header.header.version),
header.header.chain_id.clone().encode_vec(),
header.header.height.encode_vec(),
header.header.time.encode_vec(),
Protobuf::<RawBlockId>::encode_vec(header.header.last_block_id.unwrap_or_default()),
header
.header
.last_commit_hash
.unwrap_or_default()
.encode_vec(),
header.header.data_hash.unwrap_or_default().encode_vec(),
header.header.validators_hash.encode_vec(),
header.header.next_validators_hash.encode_vec(),
header.header.consensus_hash.encode_vec(),
header.header.app_hash.clone().encode_vec(),
header
.header
.last_results_hash
.unwrap_or_default()
.encode_vec(),
header.header.evidence_hash.unwrap_or_default().encode_vec(),
header.header.proposer_address.encode_vec(),
]
}

pub fn generate_row_proofs(
header: &ExtendedHeader,
blob: &Blob,
blob_index: u64,
) -> Result<(Proof<TmSha2Hasher>, Vec<NamespacedHash>), Box<dyn Error>> {
let eds_row_roots = header.dah.row_roots();
let eds_column_roots = header.dah.column_roots();
let eds_size: u64 = eds_row_roots.len().try_into()?;
let ods_size = eds_size / 2;

let blob_size: u64 = max(1, blob.to_shares()?.len() as u64);
let first_row_index: u64 = blob_index.div_ceil(eds_size) - 1;
let ods_index = blob_index - (first_row_index * ods_size);
let last_row_index: u64 = (ods_index + blob_size).div_ceil(ods_size) - 1;

let mut row_root_tree: MerkleTree<MemDb<[u8; 32]>, TmSha2Hasher> =
MerkleTree::with_hasher(TmSha2Hasher {});

let leaves = eds_row_roots
.iter()
.chain(eds_column_roots.iter())
.map(|root| root.to_array())
.collect::<Vec<[u8; 90]>>();

for root in &leaves {
row_root_tree.push_raw_leaf(root);
}

let row_root_multiproof =
row_root_tree.build_range_proof(first_row_index as usize..(last_row_index + 1) as usize);

let selected_roots =
eds_row_roots[first_row_index as usize..(last_row_index + 1) as usize].to_vec();

Ok((row_root_multiproof, selected_roots))
}

/// Main prover service that coordinates the entire proving process
pub struct BlockProver {
celestia_client: CelestiaClient,
prover_config: ProverConfig,
aggregator_config: AggregatorConfig,
}

impl BlockProver {
pub fn new(celestia_client: CelestiaClient, prover_config: ProverConfig) -> Self {
pub fn new(
celestia_client: CelestiaClient,
prover_config: ProverConfig,
aggregator_config: AggregatorConfig,
) -> Self {
Self {
celestia_client,
prover_config,
aggregator_config,
}
}

@@ -200,10 +130,10 @@ impl BlockProver {
.await?;

// Generate all required proofs
let (data_hash_bytes, data_hash_proof) = generate_header_proofs(&header)?;
let (data_hash_bytes, data_hash_proof) = proofs::generate_header_proofs(&header)?;

let (row_root_multiproof, selected_roots) =
generate_row_proofs(&header, &blob_from_chain, blob_from_chain.index.unwrap())?;
proofs::generate_row_proofs(&header, &blob_from_chain, blob_from_chain.index.unwrap())?;

let nmt_multiproofs = self
.celestia_client
@@ -237,13 +167,74 @@ impl BlockProver {
Ok((public_values, execution_report))
}

pub async fn generate_proof(&self, input: BlockProverInput) -> Result<Vec<u8>, Box<dyn Error>> {
pub async fn generate_proof(
&self,
input: BlockProverInput,
) -> Result<(SP1ProofWithPublicValues, SP1VerifyingKey), Box<dyn Error>> {
// Generate and return the proof
let client: sp1_sdk::EnvProver = ProverClient::from_env();
let (pk, _) = client.setup(self.prover_config.elf_bytes);
let (pk, vk) = client.setup(self.prover_config.elf_bytes);
let stdin = self.get_stdin(input).await?;
// Recursive aggregation can only verify compressed proofs, so produce a
// compressed proof here; the aggregator wraps the final result in Groth16.
let proof = client.prove(&pk, &stdin).compressed().run()?;
Ok((proof, vk))
}

/// Aggregates multiple proofs into a single proof
pub async fn aggregate_proofs(
&self,
inputs: Vec<AggregationInput>,
) -> Result<AggregationOutput, Box<dyn Error>> {
if inputs.len() < 2 {
return Err("Must provide at least 2 proofs to aggregate".into());
}

// Create stdin for the aggregator
let mut stdin = SP1Stdin::new();

// Write number of proofs
stdin.write(&inputs.len());

// Write all verification keys first, as the [u32; 8] hash the guest reads
for input in &inputs {
stdin.write(&input.vk.hash_u32());
}

// Then write all public values
for input in &inputs {
stdin.write_vec(input.proof.public_values.to_vec());
}

// Finally, write each compressed proof for deferred verification by the
// guest program's verify_sp1_proof calls
for input in inputs {
let SP1Proof::Compressed(proof) = input.proof.proof else {
return Err("aggregation requires compressed proofs".into());
};
stdin.write_proof(*proof, input.vk.vk);
}

let client: sp1_sdk::EnvProver = ProverClient::from_env();

// Generate the aggregated proof
let (pk, _) = client.setup(self.aggregator_config.elf_bytes);
let proof = client.prove(&pk, &stdin).groth16().run()?;

Ok(AggregationOutput { proof })
}

/// Proves a range of blocks and aggregates their proofs
pub async fn prove_block_range(
&self,
inputs: Vec<BlockProverInput>,
) -> Result<AggregationOutput, Box<dyn Error>> {
if inputs.len() < 2 {
return Err("Must provide at least 2 proofs to aggregate".into());
}

// Generate proofs and collect verifying keys
let mut agg_inputs = Vec::with_capacity(inputs.len());

for input in inputs {
let (proof, vk) = self.generate_proof(input).await?;
agg_inputs.push(AggregationInput { proof, vk });
}

bincode::serialize(&proof).map_err(|e| e.into())
// Aggregate the proofs
self.aggregate_proofs(agg_inputs).await
}
}
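For completeness, here is a hypothetical end-to-end driver for the new range API. The construction of `CelestiaClient`, `BlockProver`, and the `BlockProverInput` fields are elided because this diff does not show them in full, and `save` is the standard sp1-sdk helper on `SP1ProofWithPublicValues`.

```rust
use blevm_prover::{AggregationOutput, BlockProver, BlockProverInput};
use std::error::Error;

// Hypothetical driver: prove a contiguous range of roll-up blocks and
// persist the single aggregated Groth16 proof.
async fn prove_range(
    prover: &BlockProver,
    inputs: Vec<BlockProverInput>, // one entry per block, ascending by height
) -> Result<(), Box<dyn Error>> {
    let AggregationOutput { proof } = prover.prove_block_range(inputs).await?;
    // The committed public values are the BlevmAggOutput: oldest/newest EVM
    // header hashes, all Celestia header hashes, newest state root and height.
    proof.save("range_proof.bin")?;
    Ok(())
}
```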