feat(state-dumper): rewrite the state dumper logic #12492
Conversation
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##           master   #12492      +/-   ##
==========================================
+ Coverage   70.53%   70.57%   +0.03%
==========================================
  Files         847      847
  Lines      172855   173116     +261
  Branches   172855   173116     +261
==========================================
+ Hits       121925   122174     +249
- Misses      45829    45832       +3
- Partials     5101     5110       +9

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Looks good! Great refactoring!
nearcore/src/state_sync.rs
Outdated
let now = clock.now();
let mut check_head = Interval::new(now + iteration_delay, iteration_delay);
let mut check_stored_parts = Interval::new(now + Duration::seconds(20), Duration::seconds(20));
Move the hardcoded duration to a constant.
Ideally it should be a config. The config will allow us to tune the dumper in case there are too many conflicts when more than one dumper is active.
Just moved it to a constant for now. If we want to put it in the config we can always come back to it, but I would guess this is probably not something we'll need to tweak much.
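A minimal sketch of what that might look like (the constant name is an illustrative assumption, not necessarily the one used in the PR; Interval and Duration are the same types used in the snippet above):

// Hypothetical constant name: how often the dumper re-checks which parts are already stored externally.
const CHECK_STORED_PARTS_INTERVAL: Duration = Duration::seconds(20);

let mut check_stored_parts =
    Interval::new(now + CHECK_STORED_PARTS_INTERVAL, CHECK_STORED_PARTS_INTERVAL);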
nearcore/src/state_sync.rs
Outdated
dumper.init(iteration_delay).await?;

let now = clock.now();
let mut check_head = Interval::new(now + iteration_delay, iteration_delay);
Maybe try this to remove the mut from mut iteration_delay: Duration:

- let mut check_head = Interval::new(now + iteration_delay, iteration_delay);
+ let min_iteration_delay = Duration::milliseconds(1);
+ let mut check_head = Interval::new(now + iteration_delay, min_iteration_delay.max(iteration_delay));
yeah good idea
thanks for taking a look!
The current state dumper code is difficult to follow and doesn't make good use of the available cores to obtain and upload parts. It starts one thread per shard, and each thread dumps one part per iteration of a big loop that includes a good amount of unnecessary/redundant lookups and calculations. This PR rewrites the logic: instead of starting one thread per shard and looping over part IDs like that, we figure out which parts need to be dumped when we see a new epoch, and then spawn futures to obtain and upload those parts. The part upload speed is now limited by the number of allowed "obtain part" tasks (4) and by the speed of generating those parts.
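A minimal sketch of the general pattern described above (not the actual PR code: the function names, types, and the obtain/upload helpers are hypothetical stand-ins, and only the idea of bounding the number of concurrent "obtain part" tasks to a small fixed number like 4 comes from the description):

use futures::stream::{self, StreamExt};

// How many parts may be obtained/uploaded concurrently (the description mentions 4).
const MAX_OBTAIN_PART_TASKS: usize = 4;

// Hypothetical stand-in: in the real dumper this generates the state part from the trie.
async fn obtain_part(_part_id: u64) -> Vec<u8> {
    Vec::new()
}

// Hypothetical stand-in: in the real dumper this writes the part to external storage.
async fn upload_part(_part_id: u64, _data: Vec<u8>) {}

// When a new epoch is seen, figure out which parts still need to be dumped and
// process them with a bounded number of concurrent "obtain part" tasks.
async fn dump_missing_parts(missing_part_ids: Vec<u64>) {
    stream::iter(missing_part_ids)
        .for_each_concurrent(MAX_OBTAIN_PART_TASKS, |part_id| async move {
            let data = obtain_part(part_id).await;
            upload_part(part_id, data).await;
        })
        .await;
}

With this kind of bounded concurrency, the overall upload rate is limited by the task cap and by how fast each part can be generated, which matches the behavior the description outlines.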
This has the advantage of not needing any changes to work with dynamic resharding, and the part upload is much faster. On a forknet run with recent mainnet state, the old dumper takes around an hour and a half to dump all the parts, while this version takes around half an hour (which could maybe be improved further by tweaking, or making configurable, the number of tasks allowed to obtain parts at a time).
This could be refactored further, since there are still some leftover structures from the previous implementation that don't fit super cleanly, but that can be done in a future PR.