After creating a block, we need to check whether the transaction queue is still larger than the configured threshold and, if so, trigger the creation of another block.
Otherwise validators will wait indefinitely for the next submitted transaction, even though the queue already holds enough transactions to justify a new block.
This feature needs to take the minimum block time setting into account.
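A minimal sketch of the check described above. All names (`BlockProductionConfig`, `should_trigger_next_block`, the fields) are illustrative, not taken from the actual codebase; the point is only that the post-block check gates on both the queue threshold and the minimum block time:

```rust
use std::time::{Duration, Instant};

/// Hypothetical configuration; field names are illustrative.
struct BlockProductionConfig {
    /// Produce a new block once the queue holds at least this many transactions.
    queue_threshold: usize,
    /// Never produce blocks closer together than this.
    min_block_time: Duration,
}

/// Decide, right after a block was created, whether the next one
/// should be triggered immediately instead of waiting for a new
/// transaction submission.
fn should_trigger_next_block(
    queued_txs: usize,
    last_block_at: Instant,
    now: Instant,
    cfg: &BlockProductionConfig,
) -> bool {
    // Without this check, validators idle until the next submitted
    // transaction even though the queue already justifies another block.
    queued_txs >= cfg.queue_threshold
        // Respect the configured minimum block time.
        && now.duration_since(last_block_at) >= cfg.min_block_time
}
```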
The main issue with block times <1s is that the block timestamp has a granularity of 1s. It is not possible to create two blocks within a second without either changing the granularity of the timestamp, or artificially incrementing the next block's timestamp so it exceeds the parent's by at least 1 (this is actually the strategy Parity follows internally).
But that is not a practical solution: under continued high load the timestamp shifts so far into the future that other validity checks start failing.
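The drift can be made concrete with a small sketch. `next_block_timestamp` and `wall_clock` are hypothetical names; the logic mirrors the strategy described above, where a child block's timestamp must exceed the parent's by at least 1 even when the wall clock has not advanced:

```rust
/// With 1s timestamp granularity, a child block must carry a timestamp
/// at least 1 greater than its parent's, even if produced within the
/// same wall-clock second.
fn next_block_timestamp(parent_timestamp: u64, wall_clock: u64) -> u64 {
    // Under sustained sub-second block production, wall_clock lags
    // behind parent_timestamp + 1, so block timestamps drift further
    // and further into the future.
    wall_clock.max(parent_timestamp + 1)
}
```

Producing many blocks within the same wall-clock second makes the drift accumulate by one second per block, which is exactly why other validity checks (e.g. rejecting far-future timestamps) eventually fail.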
Right, I forgot about the granularity issue!
So I guess the right way to optimize throughput is to make the maximum block size so large that one block per second is close to the bandwidth limit anyway.