feat: larger batches for EigenDA #22

Merged
merged 3 commits into from
Aug 8, 2024

Conversation

epociask
Collaborator

@epociask epociask commented Aug 7, 2024

Changes

  • Support a configurable max batch size for EigenDA integrations, with a 2 MB upper limit
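
As a rough illustration (not the actual code from this PR), here is a minimal Go sketch of how a configurable EigenDA max batch size with a hard 2 MB ceiling might be validated at startup; the names EigenDABatchConfig, MaxBatchSize, and eigenDAMaxBatchSizeLimit are hypothetical.

```go
// Hypothetical sketch only: the struct, field, and constant names below are
// illustrative and do not necessarily match this PR's actual code.
package main

import (
	"errors"
	"fmt"
)

// 2 MB upper bound on EigenDA batch size (the SRS stored within the
// arbitrator is what ultimately predetermines this ceiling).
const eigenDAMaxBatchSizeLimit = 2 * 1024 * 1024

type EigenDABatchConfig struct {
	MaxBatchSize int // configured batch size ceiling, in bytes
}

// Validate rejects configurations that are non-positive or exceed the 2 MB cap.
func (c *EigenDABatchConfig) Validate() error {
	if c.MaxBatchSize <= 0 {
		return errors.New("eigenda max batch size must be positive")
	}
	if c.MaxBatchSize > eigenDAMaxBatchSizeLimit {
		return fmt.Errorf("eigenda max batch size %d exceeds the %d byte limit",
			c.MaxBatchSize, eigenDAMaxBatchSizeLimit)
	}
	return nil
}

func main() {
	cfg := EigenDABatchConfig{MaxBatchSize: 2 * 1024 * 1024}
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid config:", err)
		return
	}
	fmt.Println("configured max batch size:", cfg.MaxBatchSize, "bytes")
}
```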

Dependency PRs

Testing
Used testnode's spam feature to generate arbitrary L2 txs carrying 1,096 bytes of calldata across 3 addresses, run across 3 processes:
i.e., sequencer --l2_tx-> validator --l2_tx-> l2_owner --> EOA

Using a testnode policy where batches were only submitted every 15 minutes, the txs were sent serially across 20 threads per process. In the best case this equates to roughly 60 txs/second (i.e., 60 * 1,096 B ≈ 65.76 kB/sec), or about 59.18 MB of offered calldata per 15-minute batch window. Since these scripts rely on JavaScript's asynchronous concurrency and execute transactions serially, the real throughput during this experiment was likely somewhat lower. Regardless, batches of around 780,328 bytes made it through. The stateless block validator struggled heavily to stay synchronized, since it only executes 10-20 messages of a batch sequentially while the number of incoming messages increases drastically:

INFO [08-07|04:14:32.927] validated execution                      messageCount=6224 globalstate="{BlockHash:0x85499c64e0b9ab24d5d7c094108c5fc30c61389b222db771b50cee5ffccff876 SendRoot:0x0000000000000000000000000000000000000000000000000000000000000000 Batch:2 PosInBatch:3127}" WasmRoots=[0xe82561d483d1b87cf09343ec6a1b9bf31b98ff4ab2b26b444f7d4eadb353a314]
INFO [08-07|04:14:33.927] validated execution                      messageCount=6250 globalstate="{BlockHash:0x1639290819a088772592f0fcbd96e0a82dbf03071f8e359d89f12a1d0f50aae4 SendRoot:0x0000000000000000000000000000000000000000000000000000000000000000 Batch:3 PosInBatch:0}" WasmRoots=[0xe82561d483d1b87cf09343ec6a1b9bf31b98ff4ab2b26b444f7d4eadb353a314]

If left unremediated, this validation lag would likely add further delay to child --> parent chain withdrawal times.
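
For reference, a minimal Go sketch of the back-of-envelope offered-load arithmetic above, assuming the ideal case where every spam thread sustains one tx per second:

```go
package main

import "fmt"

func main() {
	const (
		processes       = 3    // sequencer, validator, and l2_owner spam processes
		threadsPerProc  = 20   // serially-executing threads per process
		calldataBytes   = 1096 // calldata bytes per spam tx
		batchWindowSecs = 15 * 60
	)

	txPerSec := processes * threadsPerProc          // 60 tx/s in the ideal case
	bytesPerSec := txPerSec * calldataBytes         // 65,760 B/s of offered calldata
	bytesPerWindow := bytesPerSec * batchWindowSecs // offered calldata per 15-minute batch window

	fmt.Printf("offered load: %d tx/s, %.2f kB/s, %.2f MB per batch window\n",
		txPerSec, float64(bytesPerSec)/1e3, float64(bytesPerWindow)/1e6)
}
```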

TODO(s)
The following items should be completed to further verify that our fork can support batches of this size:
[ ] kzg-bn254 proof generation tests using 2 MB blobs (a rough test skeleton is sketched after this list)
[ ] experiment using a 2 MB max batch size
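
A possible shape for the first TODO item, as a table-driven Go test skeleton; proveAndVerifyBlob is a hypothetical stand-in for the fork's actual kzg-bn254 commit/prove/verify path, not an existing API:

```go
// Hypothetical test skeleton only.
package eigenda_test

import (
	"crypto/rand"
	"testing"
)

// proveAndVerifyBlob is a placeholder for the fork's real kzg-bn254
// commit/prove/verify path; it is not a function that exists in this repo.
func proveAndVerifyBlob(blob []byte) error {
	// TODO: wire this up to the actual kzg-bn254 prover once the SRS question is settled.
	return nil
}

func TestKzgBn254ProofGeneration(t *testing.T) {
	sizes := []int{
		256 * 1024,      // smaller control blob
		780_328,         // largest batch observed during the spam experiment
		2 * 1024 * 1024, // proposed 2 MB upper limit
	}
	for _, size := range sizes {
		blob := make([]byte, size)
		if _, err := rand.Read(blob); err != nil {
			t.Fatalf("failed to generate %d-byte blob: %v", size, err)
		}
		if err := proveAndVerifyBlob(blob); err != nil {
			t.Errorf("proof generation/verification failed for %d-byte blob: %v", size, err)
		}
	}
}
```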

@epociask epociask changed the title feat: larger batches for EigenDA w/ max 2MB blobs feat: larger batches for EigenDA Aug 7, 2024
@@ -601,6 +602,17 @@ func mainImpl() int {
return 1
}
}

// NOTE: since the SRS is stored within the arbitrator and predetermines the max batch size
When needed, we can supply a larger SRS, up to 2^28.
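
As a rough illustration of the relationship being discussed (this assumes EigenDA's encoding of blob bytes into 32-byte bn254 field-element symbols, which is not spelled out in this PR), the sketch below estimates how many SRS points a given blob size requires:

```go
// Rough sketch under an assumption not stated in this PR: EigenDA encodes blob
// data as 32-byte bn254 field-element symbols, so an SRS with n points can
// commit to blobs of roughly n*32 bytes.
package main

import "fmt"

const bytesPerSymbol = 32

// srsPointsFor estimates the number of SRS points needed to cover blobSize
// bytes, rounded up to the next power of two as FFT-based KZG commitments
// typically require.
func srsPointsFor(blobSize int) int {
	symbols := (blobSize + bytesPerSymbol - 1) / bytesPerSymbol
	points := 1
	for points < symbols {
		points <<= 1
	}
	return points
}

func main() {
	fmt.Println("2 MiB blob needs ~", srsPointsFor(2*1024*1024), "SRS points")       // 65536 = 2^16
	fmt.Println("a 2^28-point SRS covers up to ~", int64(1<<28)*bytesPerSymbol, "B") // ~8.6e9 bytes
}
```

Under that assumption a 2 MB blob needs roughly 2^16 points, so a 2^28-point SRS would leave ample headroom.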

@hopeyen hopeyen left a comment

my first review! everything looks good, just left some dumb questions:)

cmd/nitro/nitro.go (review thread resolved)
arbnode/batch_poster.go (review thread resolved)
@epociask epociask merged commit 5b37117 into eigenda-v3.0.3 Aug 8, 2024
7 checks passed
@epociask epociask deleted the epociask--larger-batches branch September 12, 2024 06:44