Hello CKB DevRel folks!!
I was wondering, have you considered adding an internal cache for RPC calls?
Many calls, like get_tip_header, have no possibility of being cached, but many others are cache-able.
Let's take for example the Header of a block. The header is not returned by get_cells, yet it effectively becomes part of the cell's data, especially in NervosDAO and iCKB calculations. To get the header we either go with getHeaderByNumber (which is not reliable in case of a re-org), or we add another call per cell (get_transaction_proof) to get the hash of the containing block (which is also not reliable in case of a re-org) and only then call getHeaderByHash (which can be cached safely).
All this means that every DApp using a cell's header needs to jump through hoops to get this info, which doesn't seem reasonable.
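To make those hoops concrete, here is a rough sketch of what a DApp has to do today for each cell. The interfaces and method names below are illustrative placeholders, not the actual CCC API:

```ts
// Illustrative types only; the real CCC / RPC types and names differ.
interface Cell {
  outPoint: { txHash: string; index: number };
}

interface Header {
  hash: string;
  number: bigint;
  timestamp: number; // milliseconds since epoch
}

interface Rpc {
  // thin wrapper around get_transaction_proof, resolving the containing block hash
  getTransactionProof(txHashes: string[]): Promise<{ blockHash: string }>;
  getHeaderByHash(blockHash: string): Promise<Header>;
}

// One extra round-trip per cell just to learn which block to ask for;
// the answer is only re-org safe once that block is old enough to be final.
async function headerOfCell(rpc: Rpc, cell: Cell): Promise<Header> {
  const { blockHash } = await rpc.getTransactionProof([cell.outPoint.txHash]);
  return rpc.getHeaderByHash(blockHash);
}
```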
The solution I'd like to propose is simple: it tries to minimize the pain for developers handling these headers, while keeping the implementation cost reasonable.
Idea
Same as RGB++: after a certain amount of time, a block can be considered final and no longer subject to re-org.
Implementation
The method getHeaderByNumber provided by CCC could have the following behavior:
1. If the header is cached, return it.
2. If the header is not cached, fetch it from the RPC node.
3. If the header's block can be considered final by comparing the header timestamp against the current time, add it to the cache, keyed by both number and block hash (we could also cache non-final headers with a few seconds of life, after which they get evicted, but that would complicate the cache requirements).
4. Return the header.
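A minimal sketch of this behavior, assuming a hypothetical wrapper around the client (the type and method names are placeholders, not the real CCC API):

```ts
// Hypothetical types; the real CCC Header and client differ.
interface Header {
  hash: string;
  number: bigint;
  timestamp: number; // block timestamp in milliseconds
}

interface HeaderRpc {
  getHeaderByNumber(num: bigint): Promise<Header>;
}

const FINALITY_SLACK_MS = 60 * 60 * 1000; // 1 hour, as in the Example below

class CachedHeaderClient {
  private byNumber = new Map<bigint, Header>();
  private byHash = new Map<string, Header>();

  constructor(private rpc: HeaderRpc) {}

  async getHeaderByNumber(num: bigint): Promise<Header> {
    // 1. If the header is cached, return it.
    const cached = this.byNumber.get(num);
    if (cached !== undefined) return cached;

    // 2. Otherwise fetch it from the RPC node.
    const header = await this.rpc.getHeaderByNumber(num);

    // 3. Cache it by number and block hash only if the block is old enough
    //    to be considered final.
    if (Date.now() - header.timestamp >= FINALITY_SLACK_MS) {
      this.byNumber.set(num, header);
      this.byHash.set(header.hash, header);
    }

    // 4. Return the header.
    return header;
  }
}
```

A getHeaderByHash wrapper could consult the same byHash map, so headers fetched by number also serve lookups by hash.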
Example
For example, if we cache only blocks older than 1 hour, then only final blocks enter the cache:
Given the target of 10 seconds per block, that's 360 blocks per hour, and we can estimate re-org depth at roughly one tenth of that, so only final blocks enter the cache. Even if Nervos L1 slows down block production to, say, one block every 30 seconds, the rule is still robust.
If one hour of slack is not enough, we can double it until it is.
If one hour of slack is too much, we can shorten it until it's just long enough to stay robust while minimizing cache misses.
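For what it's worth, a back-of-the-envelope check of the numbers above (these figures just restate the estimates in this issue, they are not chain constants):

```ts
const TARGET_BLOCK_TIME_S = 10;
const BLOCKS_PER_HOUR = 3600 / TARGET_BLOCK_TIME_S;  // 360 blocks
const ESTIMATED_REORG_DEPTH = BLOCKS_PER_HOUR / 10;  // ~36 blocks
// Even at one block every 30 seconds, one hour still spans 120 blocks,
// well beyond the estimated re-org depth, so the 1 hour slack holds up.
const SLOW_BLOCKS_PER_HOUR = 3600 / 30;              // 120 blocks
console.log({ BLOCKS_PER_HOUR, ESTIMATED_REORG_DEPTH, SLOW_BLOCKS_PER_HOUR });
```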
Keep up the Great Work,
Phroi