
Trigger creation of a new block if the transaction queue is above the configured threshold #183

Open
dforsten opened this issue Aug 5, 2019 · 3 comments

dforsten commented Aug 5, 2019

After the creation of a block we need to check if the transaction queue is larger than the configured threshold and trigger the creation of a new block if that is the case.

Otherwise validators will wait indefinitely until a new transaction is submitted, even if there are sufficient transactions in the queue to trigger a new block.

This feature needs to take the minimum block time setting into account.
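A minimal sketch of the proposed check, assuming hypothetical names (`should_trigger_block`, `threshold`, `min_block_time` are illustrative, not the actual crate API):

```rust
use std::time::Duration;

/// Hypothetical helper: decide, right after a block is sealed, whether to
/// start building the next block immediately instead of waiting for a new
/// transaction to arrive. `elapsed` is the time since the last block.
fn should_trigger_block(
    queue_len: usize,
    threshold: usize,
    elapsed: Duration,
    min_block_time: Duration,
) -> bool {
    // Trigger early only if the queue is over the configured threshold
    // *and* the minimum block time has already passed.
    queue_len > threshold && elapsed >= min_block_time
}

fn main() {
    // Queue over threshold and min block time elapsed: trigger a new block.
    assert!(should_trigger_block(
        120, 100,
        Duration::from_secs(2),
        Duration::from_secs(1)
    ));
    // Min block time not yet reached: wait, even with a full queue.
    assert!(!should_trigger_block(
        120, 100,
        Duration::from_millis(200),
        Duration::from_secs(1)
    ));
}
```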

@dforsten dforsten self-assigned this Aug 5, 2019
afck (Collaborator) commented Aug 6, 2019

This feature needs to take the minimum block time setting into account.

Does it? For maximizing throughput I'd rather say:

  • If the blocks are full, produce them as fast as possible.
  • If they aren't full, limit them (e.g. one per second).

That would mean producing a block if either:

  • the block time has timed out, or
  • we have enough transactions in the queue to make a contribution of the maximum size.

(So I guess it would rather be a "maximum block time".)
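The either/or rule above could be sketched like this (hypothetical names; `max_contribution` stands for the maximum number of transactions a validator contributes per block):

```rust
use std::time::Duration;

/// Propose a block if the maximum block time has expired, or if the queue
/// already holds enough transactions for a maximum-size contribution.
fn should_propose(
    elapsed: Duration,
    max_block_time: Duration,
    queue_len: usize,
    max_contribution: usize,
) -> bool {
    elapsed >= max_block_time || queue_len >= max_contribution
}

fn main() {
    // Full queue: propose immediately, regardless of the timer.
    assert!(should_propose(
        Duration::from_millis(10), Duration::from_secs(1), 500, 500
    ));
    // Timer expired: propose even with a near-empty queue.
    assert!(should_propose(
        Duration::from_secs(1), Duration::from_secs(1), 3, 500
    ));
    // Neither condition holds: wait.
    assert!(!should_propose(
        Duration::from_millis(10), Duration::from_secs(1), 3, 500
    ));
}
```

With this rule, full blocks are produced as fast as possible while a mostly empty chain still advances at one block per `max_block_time`.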

dforsten (Author) commented Aug 6, 2019

The main issue with block times below 1 s is that the block timestamp has a granularity of one second: it is not possible to create two blocks within the same second without either changing the timestamp granularity or artificially incrementing the next block's timestamp to be at least 1 greater than the parent's (this is in fact the strategy Parity follows internally).

But that is not a practical solution: under continued high load, the timestamp will drift so far into the future that other validity checks start failing.
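The drift can be illustrated with the "at least parent + 1" rule described above (a sketch of the strategy, not Parity's actual code):

```rust
/// Timestamp rule: strictly greater than the parent's timestamp, but never
/// behind the wall clock (timestamps are in whole seconds).
fn next_timestamp(now_secs: u64, parent_secs: u64) -> u64 {
    now_secs.max(parent_secs + 1)
}

fn main() {
    // Produce five blocks within the same wall-clock second (t = 100):
    let mut parent = 100;
    for _ in 0..5 {
        parent = next_timestamp(100, parent);
    }
    // The chain's timestamp is now 5 s ahead of the real clock, and the gap
    // keeps growing under sustained load -- this is the drift that eventually
    // trips other validity checks.
    assert_eq!(parent, 105);
}
```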

afck (Collaborator) commented Aug 6, 2019

Right, I forgot about the granularity issue!
So I guess the right way to optimize throughput is to make the maximum block size large enough that one block per second is close to the bandwidth limit anyway.
