A natural extension of #324, this would greatly speed up IBD. Say we want to request headers from 1300000 to 1500000 and maxConnections is defined to be 5. We could split the 200000 blocks to be synced into 5 chunks and request one chunk from each peer. On the database side, all we need to do is assemble the blocks out of order and then, after sync is done, check the hashes of the four border blocks (the starting block of each chunk except the first one, which would be checked already).
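The chunking described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the project: the names `split_range` and `max_connections` are made up, and the real implementation would drive actual peer requests rather than return tuples.

```python
def split_range(start: int, end: int, max_connections: int):
    """Split the half-open block range [start, end) into roughly equal
    chunks, one per peer connection (illustrative sketch only)."""
    total = end - start
    base, rem = divmod(total, max_connections)
    chunks = []
    lo = start
    for i in range(max_connections):
        # Spread any remainder over the first `rem` chunks.
        size = base + (1 if i < rem else 0)
        chunks.append((lo, lo + size))
        lo += size
    return chunks

chunks = split_range(1_300_000, 1_500_000, 5)
# Border blocks whose hashes must be checked after sync: the first block
# of every chunk except the first (that one already links to a verified
# part of the chain).
borders = [lo for lo, _ in chunks[1:]]
```

With the numbers from the example this yields five 40000-block chunks and four border blocks (1340000, 1380000, 1420000, 1460000), matching the "four border blocks" above.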
Will do after the main PR on this is merged in order to prevent merge conflicts.