Prepare re-genesis #191

Closed
16 tasks done
jak-pan opened this issue Mar 30, 2021 · 5 comments · Fixed by #188
@jak-pan (Contributor) commented Mar 30, 2021

We decided on a short epoch time of 10 minutes at the beginning of the testnet, which wasn't a problem until now.

Multiple validators were concerned that this might lead to unnecessary kicking from the validator set and loss of nominations if a node is offline for 10 minutes. Also, if no blocks were produced for those 10 minutes, the chain could stall, as Kusama did before.

This could result in the need for a time-warp and a coordination effort of all of the validators. We were trying to decide on the best way forward, as it's unfortunately not possible to just change this setting after the start of the network.

The current best way forward is re-genesis.

We'll need to prepare a script to migrate all of the data to a fresh genesis, in the following order.

PHASE 1:

PHASE 2:

PHASE 3:

PHASE X (Anytime):

@andresilva (Contributor)

> Check if it's possible to do migration

A runtime migration alone would not be enough, unfortunately, as the client side also keeps track of epoch information and it needs to line up with what the runtime expects; this is made more difficult by the fact that epochs are announced in advance (i.e. when epoch N starts, the client already knows the details of epoch N+1). Also, any on-chain calculation that uses Babe::epoch_start(epoch_index) would start giving bad results, since it would assume that all epochs since genesis had the new length.
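
For illustration, here is a minimal sketch of the epoch-start arithmetic this refers to (the real logic lives in pallet_babe; the function and values below are simplified assumptions, not the actual pallet code):

```rust
// Sketch of the BABE epoch-start calculation, assuming the usual formula
//   epoch_start(i) = genesis_slot + i * epoch_duration
// The runtime only knows a single epoch_duration, so changing it mid-chain
// silently rewrites where every historical epoch is assumed to have started.
fn epoch_start(genesis_slot: u64, epoch_index: u64, epoch_duration: u64) -> u64 {
    genesis_slot + epoch_index * epoch_duration
}

fn main() {
    let genesis_slot = 0;
    let old_duration = 100;   // e.g. 10-minute epochs with 6-second slots
    let new_duration = 2_400; // e.g. 4-hour epochs after a hypothetical change

    // Epoch 50 actually started at slot 5_000 under the old duration...
    assert_eq!(epoch_start(genesis_slot, 50, old_duration), 5_000);
    // ...but a runtime configured only with the new duration would report 120_000.
    assert_eq!(epoch_start(genesis_slot, 50, new_duration), 120_000);
}
```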

> Check if it's possible to keep block data and change epoch retroactively during re-genesis

I also don't know any easy way to do this, unfortunately. I guess for trust purposes the genesis could somehow reference the hash (and state root hash?) of the best block of the current chain. But those blocks would need to be served by the existing client (they would be distinct networks).

@jak-pan (Contributor, Author) commented Mar 30, 2021

> Check if it's possible to keep block data and change epoch retroactively during re-genesis

> I also don't know any easy way to do this, unfortunately. I guess for trust purposes the genesis could somehow reference the hash (and state root hash?) of the best block of the current chain. But those blocks would need to be served by the existing client (they would be distinct networks).

Thanks 🙏🏻 We're thinking about upgrading the node so that it connects to the new chain; including the last best block and the original genesis hash would be OK, I guess, as well as killing the old chain with set_code() as described in paritytech/substrate#7458.

@andresilva (Contributor)

You might also want to use a new protocolId in your chainspec. We always did this when restarting networks to make sure the peers don't collide: they would detect the mismatching genesis hash and refuse to connect to one another, but it's still annoying to have two networks tangled up. This is the reason why Kusama's protocol id is ksmcc3.
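
As a rough illustration, the protocol id is just one argument when the chain spec is built. The fragment below assumes the ChainSpec::from_genesis constructor used by node templates around that time (argument order varies by Substrate version, and all names/values here are placeholders, not the actual HydraDX chain spec):

```rust
// Hypothetical chain_spec.rs fragment (not standalone): the only point being
// made is that the protocol id passed here should differ from the old network's.
pub fn gen2_config() -> ChainSpec {
    ChainSpec::from_genesis(
        "Snakenet Gen2",    // placeholder display name
        "snakenet_gen2",    // placeholder chain id
        ChainType::Live,
        gen2_genesis,       // closure building the migrated GenesisConfig
        vec![],             // boot nodes
        None,               // telemetry endpoints
        Some("hdx2"),       // NEW protocol id (the old network keeps its old one)
        None,               // properties
        Default::default(), // extensions
    )
}
```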

@apopiak (Contributor) commented Apr 6, 2021

About the chicken-and-egg problem: paritytech/substrate#7458 (comment)
I think you want to stop the chain and accept the blackout.

@jak-pan (Contributor, Author) commented Apr 6, 2021

> About the chicken-and-egg problem: paritytech/substrate#7458 (comment)
> I think you want to stop the chain and accept the blackout.

We're probably going to soft-kill the original chain by disallowing any action on it (basically a call filter with almost everything set to false), fork it, and wait for the upgrade of the extension; then we'll hard-kill it with setCode and switch to the new one.
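
A minimal sketch of what such a "soft kill" filter could look like, assuming a Substrate version where the base call filter implements Contains<Call> (older versions use the Filter trait instead); the pallet names left dispatchable are assumptions, not the actual HydraDX maintenance-mode code:

```rust
// Hypothetical restrictive base call filter: only the calls needed for the
// final setCode upgrade (sudo plus bare system calls) remain dispatchable.
use frame_support::traits::Contains;

pub struct SoftKillFilter;

impl Contains<Call> for SoftKillFilter {
    fn contains(call: &Call) -> bool {
        matches!(
            call,
            // keep sudo so root can still dispatch system.setCode
            Call::Sudo(_)
                // keep system calls (remark, set_code, ...)
                | Call::System(_)
        )
    }
}

// Wired into the runtime roughly like this:
// impl frame_system::Config for Runtime {
//     type BaseCallFilter = SoftKillFilter;
//     /* ... */
// }
```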
