Can't import/work with genesis.json larger than 4GB #16915
Comments
Is the genesis.json entirely pre-compressed, or only parts of the data inside it? Maybe snappy is used during the protobuf portion of the JSON decode and doesn't support > 4GB? Is there a particular part of your genesis that is larger than the others? Number of accounts? Contract state? Can you post the backtrace dump of the panic?
It's a genesis.json obtained via the "genesisd export" command, which is part of ethermint, so it is the entire state. We also expect to increase that size to 20GB while recording all obtained proteins (about 200k) entirely on-chain (this will add about 10GB of data to store ~50 years of molecular biology experiments for public view/research). Will post more info on the panic in a few hours. Thank you!
This is with cronosd and an imported genesis.json (same message with the just-exported ~10GB file and with the prettified ~14GB one):

```
goroutine 1 [running]:
```

This is with kava:

```
goroutine 1 [running]:
```
The genesis.json can be found here (prettified):
The issue is a bit further down the stack (not exactly cosmos-sdk). It looks like snappy directly affects goleveldb storage. Goleveldb can turn off compression if using more disk space is acceptable, and since you already have a fork you can patch that in. pebbledb will not work without breaking up batches (it throws "panic: pebble: batch too large: >= 4.0 G"), and that hard limit does NOT look adjustable. I did not look into the other storage backends since they are experimental/less used.
Here's something to adjust in your code to get around the snappy problem so you can try again; the disk usage may not be significantly different without compression (try it and share what you experience). Make this change at https://github.com/cometbft/cometbft-db/blob/main/goleveldb.go#L26

Separately, it appears there might also be a problem parsing the genesis.json contents (not a snappy problem): the genesis.json document is fully read by that point, and data/state.db is about 9.7G at the time of the panic (cronosd tag v1.0.9).
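For reference, the kind of change meant at that line looks roughly like the following (a sketch; exact line numbers and function bodies vary between cometbft-db versions, and `opt` is `github.com/syndtr/goleveldb/leveldb/opt`):

```diff
 // cometbft-db goleveldb.go (line numbers vary by version)
-func NewGoLevelDB(name string, dir string) (*GoLevelDB, error) {
-	return NewGoLevelDBWithOpts(name, dir, nil)
-}
+func NewGoLevelDB(name string, dir string) (*GoLevelDB, error) {
+	// Disable snappy compression to avoid the >4GB decode panic,
+	// at the cost of more disk usage.
+	return NewGoLevelDBWithOpts(name, dir, &opt.Options{
+		Compression: opt.NoCompression,
+	})
+}
```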
The issue occurs when using the genesis.json provided at
How do I properly apply this? Can you please provide a commit if you have time? Thank you very much!
Since you suggested turning off compression, I did the following: in versiondb/extsort/sort_test.go, changed `SnappyCompression: true` to `false`, rebuilt, and started with that genesis.json. Got a similar error, but only after 5m of waiting and ~90GB of RAM consumed (ran with 300GB of swap):

```
time cronosd start
goroutine 44 [running]:
real 5m26.681s
```
I have an open PR on cometbft that fixes this issue without needing to disable snappy compression: cometbft/cometbft#1017. It removes saving the entire genesis JSON string into the database under a single key, which is what causes snappy to panic. Disabling snappy compression should work too, with the cometbft-db modification above, since it avoids the step that causes it.
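The idea of avoiding a single oversized key can be sketched as follows (an illustration of the approach only, not the code from the PR; the 16-byte chunk size here is purely for demonstration, real chunks would be megabytes):

```go
package main

import "fmt"

// splitIntoChunks splits a large value into fixed-size chunks so that each
// database write stays well under snappy/goleveldb block limits. The chunks
// would then be stored under a sequence of keys and reassembled on read.
func splitIntoChunks(value []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for len(value) > 0 {
		n := chunkSize
		if len(value) < n {
			n = len(value)
		}
		chunks = append(chunks, value[:n])
		value = value[n:]
	}
	return chunks
}

func main() {
	data := make([]byte, 40) // stand-in for a multi-GB genesis document
	chunks := splitIntoChunks(data, 16)
	fmt.Println(len(chunks)) // 16 + 16 + 8 bytes -> 3 chunks
}
```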
Amazing, it's already addressed by Kava, thank you! Might we consider forking Kava instead of Cronos? What do you think?
@drklee3 is correct. Not saving the entire genesis is another way to do this (but it might break the /genesis API endpoint). @alpha-omega-labs Sounds like you might work with the Kava code, but if you need help making the suggested edit above to turn off compression, write back.
Yes please, it would be very helpful if you could show in code how to disable compression in the current cronosd or kava source. Here is a direct code fork of cronos, since we started experimenting with it after evmos went closed source: https://github.com/alpha-omega-labs/genesisL1
Yes, you are right! I was able to get your suggestion to work on both kava and cronos, and got the same error as you did.
Got the same error with a completely unmodified genesis.json. At the same time, the genesis.json seems visibly fine; where should I dig for the error?

```
initializing blockchain state from genesis.json
goroutine 1 [running]:
```
Searching for nulls in genesis.json returned:

```
Null value at app_state.ibc.client_genesis.clients.0.client_state.proof_specs.0.inner_spec.empty_child
Null value at app_state.params
```

Where do you think the failure most likely is?
Hello,
In GenesisL1 we have the largest state: the exported state is ~10GB. The snappy compression lib can't handle anything above ~4GB and throws
"panic: snappy: decoded block is too large" when importing genesis.json into a new chain during an upgrade.
A >4GB state may soon be common for Cosmos and other chains, so maybe it's a good idea to move to another compression lib.
Thank you