compact: add simple threshold-based tombstone compaction heuristic #3739
Conversation
Force-pushed from 9af4e45 to 5b9740e
Currently experimenting with a rough implementation (5b9740e) of option #1 from #3719. Here are the performance results from the benchmark added in de9b2e2:
I examined the LSM state after all background compactions were finished, and it looks like the new heuristic does target all of the tombstone-dense SSTables in the middle of the key range for compaction. After compaction, there are no tombstones left in L0/L5, which is where they were building up before, suggesting the heuristic is working as intended. A single tombstone swath is a pretty simple case though, so for next steps I'm planning to examine a delete-heavy KV workload, as well as look into adding a new Pebble benchmark for a queue-style workload.
Force-pushed from 5b9740e to 50197e5
This change adds a heuristic to compact point tombstones based on their density across the LSM. We add a new table property called `NumTombstoneDenseBlocks` and a corresponding field in `TableStats` that tracks the number of data blocks in each table that are considered tombstone-dense. This value is calculated on the fly while tables are being written, so no extra I/O is required later on to compute it.

A data block is considered tombstone-dense if it fulfills either of the following criteria (see the sketch after this list):

1. The block contains at least `options.Experimental.NumDeletionsThreshold` point tombstones. The default value is `100`.
2. The ratio of the uncompressed size of point tombstones to the uncompressed size of the block is at least `options.Experimental.DeletionSizeRatioThreshold`. For example, with the default value of `0.5`, a data block of size 4KB would be considered tombstone-dense if it contains at least 2KB of point tombstones.

The intuition here is that, as described [here](cockroachdb#918 (comment)), dense tombstone clusters are bad because they a) waste CPU when skipping over tombstones, and b) waste I/O because we end up loading more blocks per live key. The two criteria above are meant to tackle these two issues respectively: the count-based threshold prevents CPU waste, and the size-based threshold prevents I/O waste.

A table is considered eligible for the new tombstone compaction type if it contains at least `options.Experimental.MinTombstoneDenseBlocks` tombstone-dense data blocks. The default value is `20`. We use an Annotator, similarly to elision-only compactions, to prioritize compacting the table with the most tombstone-dense blocks when there are multiple eligible tables. The default here was chosen through experimentation on CockroachDB KV workloads; with a lower value we were compacting too aggressively, leading to very high write amplification, while higher values led to very few noticeable performance improvements.
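For concreteness, here is a minimal Go sketch of the two block-level criteria and the table-level eligibility check described above. This is not Pebble's actual implementation: the names (`blockStats`, `isTombstoneDense`, `tableIsCompactionCandidate`) and the assumption that per-block tombstone counters are maintained while a block is built are hypothetical; only the threshold defaults (`100`, `0.5`, `20`) come from the description above.

```go
package main

import "fmt"

// Assumed defaults, taken from the option descriptions above.
const (
	numDeletionsThreshold      = 100 // criterion 1: count-based threshold
	deletionSizeRatioThreshold = 0.5 // criterion 2: size-ratio threshold
	minTombstoneDenseBlocks    = 20  // table-level eligibility threshold
)

// blockStats holds hypothetical per-block counters that a table writer
// could maintain on the fly, so no extra I/O is needed later.
type blockStats struct {
	numPointTombstones    int
	tombstoneSizeBytes    int
	uncompressedSizeBytes int
}

// isTombstoneDense reports whether a block meets either criterion.
func isTombstoneDense(b blockStats) bool {
	// Criterion 1: too many point tombstones (wastes CPU skipping them).
	if b.numPointTombstones >= numDeletionsThreshold {
		return true
	}
	// Criterion 2: tombstones dominate the block's uncompressed size
	// (wastes I/O, since we load more blocks per live key).
	ratio := float64(b.tombstoneSizeBytes) / float64(b.uncompressedSizeBytes)
	return ratio >= deletionSizeRatioThreshold
}

// tableIsCompactionCandidate applies the table-level threshold to the
// per-block stats accumulated while the table was written.
func tableIsCompactionCandidate(blocks []blockStats) bool {
	dense := 0
	for _, b := range blocks {
		if isTombstoneDense(b) {
			dense++
		}
	}
	return dense >= minTombstoneDenseBlocks
}

func main() {
	// A 4KB block holding 2KB of point tombstones is dense under criterion 2.
	b := blockStats{numPointTombstones: 10, tombstoneSizeBytes: 2048, uncompressedSizeBytes: 4096}
	fmt.Println(isTombstoneDense(b)) // true
}
```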
Force-pushed from 50197e5 to 9772206
Closing because this simple heuristic did not improve performance with the queue benchmark here: #3744 (comment). An improved heuristic is at #3790.