I'm currently exploring using Garnet in my application. I'm loving the performance so far, but I'm curious: what happens when it hits the memory limit? Does Garnet have a specific strategy like LRU/LFU to decide which data to evict? I’d also appreciate any tips on how to configure Garnet to make sure it manages memory efficiently under heavy use. Thanks!
Here are some details on Garnet's caching policies:
The data is stored in a hybrid log, with the most recent 90% of the log marked as mutable (90% is the default and can be overridden using --mutable-percent).
When you add a new record R to the cache, it starts at the tail (in the mutable region). Updates to R in the mutable region happen "in place".
As other new records are added to the tail, the record R "travels" through the log, until it eventually reaches the immutable region in memory.
Updates in the immutable region will move the record back to the tail (simulating LRU with second chance, for writes).
Reads in the immutable region, however, will not by default move the record back to the tail, so a record that is only read behaves as if the cache were FIFO.
If you want LRU with second chance with respect to reads, you can set the flag --copy-reads-to-tail. This will cause reads in the immutable region to get copied to the tail, thereby retaining the read-hot records in the cache.
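To make the lifecycle concrete, here is a small conceptual sketch (not Garnet's actual implementation; the class, fields, and eviction-by-count are made up purely for illustration) of how a record moves through the mutable and immutable regions, gets copied back to the tail on writes (and optionally on reads), and eventually falls off the head of the log:

```csharp
using System;
using System.Collections.Generic;

// Conceptual sketch only -- NOT Garnet's actual code. It models the record
// lifecycle described above: new records enter at the tail, the newest
// _mutablePercent of the log is updated in place, records in the immutable
// region are copied back to the tail on writes (and optionally on reads),
// and records falling off the head are evicted.
class HybridLogSketch
{
    private readonly int _capacity;            // max records kept in memory
    private readonly double _mutablePercent;   // fraction of the log that is mutable
    private readonly bool _copyReadsToTail;    // models the --copy-reads-to-tail behavior
    private readonly List<(string Key, string Value)> _log = new(); // index 0 = head, last = tail

    public HybridLogSketch(int capacity, double mutablePercent = 0.9, bool copyReadsToTail = false)
    {
        _capacity = capacity;
        _mutablePercent = mutablePercent;
        _copyReadsToTail = copyReadsToTail;
    }

    // First index that is still inside the mutable region.
    private int MutableStart => (int)Math.Ceiling(_log.Count * (1 - _mutablePercent));

    public void Upsert(string key, string value)
    {
        int i = _log.FindLastIndex(r => r.Key == key);
        if (i != -1 && i >= MutableStart)
        {
            _log[i] = (key, value);            // mutable region: update in place
            return;
        }
        if (i != -1)
            _log.RemoveAt(i);                  // immutable region: record gets a "second chance"
        _log.Add((key, value));                // (re)insert at the tail
        if (_log.Count > _capacity)
            _log.RemoveAt(0);                  // over budget: evict from the head
    }

    public string Read(string key)
    {
        int i = _log.FindLastIndex(r => r.Key == key);
        if (i == -1)
            return null;                       // evicted or never written
        var record = _log[i];
        if (_copyReadsToTail && i < MutableStart)
        {
            _log.RemoveAt(i);                  // with copy-reads-to-tail, read-hot records
            _log.Add(record);                  // also get a second chance
        }
        return record.Value;
    }
}
```

In the real hybrid log the head of the in-memory region is handled according to your storage and memory settings rather than a simple record count; the sketch is only meant to show the copy-to-tail "second chance" behavior for writes and (optionally) reads.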
Experiments have shown that our strategies are very effective at retaining hot data in the cache, providing hit rates close to LRU without the overhead of maintaining true LRU ordering.
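As a starting point for tuning, an invocation along these lines caps the in-memory log and enables the read-copy behavior. Note that apart from --mutable-percent and --copy-reads-to-tail (mentioned above), the flag names and sizes here are assumptions from memory and may differ between Garnet versions, so please verify against `GarnetServer --help`:

```bash
# --memory: total size of the in-memory portion of the hybrid log (assumed flag)
# --index:  size of the hash index (assumed flag)
GarnetServer --port 6379 --memory 4g --index 512m \
             --mutable-percent 90 --copy-reads-to-tail
```

The two flags discussed above (--mutable-percent and --copy-reads-to-tail) are the main knobs for the caching behavior itself; the memory and index sizes determine how much data stays resident before records start reaching the head of the log.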