Read performance degradation during compaction workload #4109
Gentle ping @jbowens
One thing I'd like to clarify is that this configures the number of concurrent compactions, not the concurrency used within a single compaction. Pebble may run multiple compactions concurrently in different parts of the LSM (i.e., different levels or non-overlapping keyspaces of the same levels). Each one of these compactions by default can make use of up to 2 threads. One thread reads and performs most of the CPU work of the compaction, and the other thread performs the write syscalls to write output sstables. When you're observing this interference, is there a single compaction running or are there multiple concurrent compactions? How does CPU utilization look under normal circumstances versus during one of these compactions?

I agree that compactions' low read latency is probably explained by the sequential read pattern and the use of FADV_SEQUENTIAL.

I don't have an explanation for why foreground workload block read latency would double. We sometimes see higher CPU utilization causing an increase in Go scheduling latency, and that Go scheduling latency is significant enough to show up in block read latencies. But that should uniformly affect compaction reads and iterator reads. Maybe the foreground workload's block reads are being queued behind the compaction's readahead reads (i.e., compaction reads issued with FADV_SEQUENTIAL/readahead)? Digging into Linux I/O metrics might shed some light, like looking at block device I/O latency.

Some form of compaction pacing (#687) would help. If you're scheduling many concurrent compactions, reducing the max compaction concurrency may help as well, reducing the spikiness of the compaction workload.
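For reference, a minimal sketch of capping compaction concurrency, assuming the Pebble v1.x `Options` API where `MaxConcurrentCompactions` is a `func() int` (the database path `"demo-db"` is a placeholder):

```go
// Sketch only (assumed v1.x API): cap Pebble to a single concurrent
// compaction to reduce the burstiness of background I/O.
package main

import (
	"log"

	"github.com/cockroachdb/pebble"
)

func main() {
	db, err := pebble.Open("demo-db", &pebble.Options{
		// One compaction at a time; each compaction may still use up to two
		// threads internally (one for reading/CPU work, one for writing
		// output sstables).
		MaxConcurrentCompactions: func() int { return 1 },
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```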
This phenomenon can be observed in both single-compaction and concurrent-compaction modes. In both cases, the CPU utilization is pretty low.
We (go-ethereum) are experiencing a significant degradation in database read performance
whenever a compaction process is initiated.
Version:
github.com/cockroachdb/pebble v1.1.2
Hardware: 32GB memory, Samsung 980Pro 2TB SSD, 28 Core i7-14700K
The database configuration is shown below:
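Roughly along these lines (illustrative sketch only: the field names are from Pebble v1.x, but the values and the `"chaindata"` path are assumptions, not the exact configuration used):

```go
// Illustrative configuration sketch; values are assumptions.
package main

import (
	"log"

	"github.com/cockroachdb/pebble"
	"github.com/cockroachdb/pebble/bloom"
)

func main() {
	opts := &pebble.Options{
		Cache:        pebble.NewCache(512 << 20), // 512 MiB block cache (assumed)
		MemTableSize: 256 << 20,                  // 256 MiB memtables (assumed)
		MaxOpenFiles: 1024,                       // assumed
		Levels: []pebble.LevelOptions{
			// ~4 KiB data blocks with a bloom filter, matching the block size
			// discussed below.
			{BlockSize: 4 << 10, FilterPolicy: bloom.FilterPolicy(10)},
		},
	}
	db, err := pebble.Open("chaindata", opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```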
The read performance without the compaction workload is stable. The average time to
load a single data block (~4KB) from disk (not in cache) during normal read operations
is 40µs. (This data was obtained by injecting debug code into Pebble.)
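The linked debug branch is not reproduced here; a minimal sketch of the kind of timing instrumentation that could produce such numbers (wrapping a raw 4 KiB read with a timer; the sstable path is hypothetical) might look like:

```go
// Hypothetical instrumentation sketch, not the actual debug patch: time a raw
// 4 KiB read from a file to approximate block-load latency.
package main

import (
	"log"
	"os"
	"time"
)

// timedRead reads one 4 KiB block at the given offset and returns how long
// the read syscall took.
func timedRead(f *os.File, off int64) (time.Duration, error) {
	buf := make([]byte, 4096)
	start := time.Now()
	_, err := f.ReadAt(buf, off)
	return time.Since(start), err
}

func main() {
	f, err := os.Open("000123.sst") // hypothetical sstable path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	d, err := timedRead(f, 0)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("block read took %v", d)
}
```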
However, when the compaction process starts, the average time to load a single data
block (~4KB) from disk (not in cache) increases to 80µs, roughly 2x slower.
Meanwhile, the average time to load a single data block (~4KB) during compaction is
significantly faster, around 8µs. I suspect this discrepancy may be related to the following
factors:
- Compaction reads are sequential and are issued with the FADV_SEQUENTIAL flag, which optimizes the OS's read-ahead mechanism (see the sketch after this list).
- The blocks that compaction reads are more likely to already be in the page cache, whereas normal reads often target data in the bottom-most level, where blocks are less likely to be cached. (Although I have no evidence to prove this.)
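For context on the first factor, here is a minimal, Linux-specific sketch of issuing FADV_SEQUENTIAL on a file descriptor via golang.org/x/sys/unix. This is not Pebble's actual code path, and the sstable path is hypothetical; it only illustrates the hint that enables more aggressive kernel readahead:

```go
// Sketch only (Linux): advise the kernel that the file will be read
// sequentially so it can use more aggressive readahead.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	f, err := os.Open("000123.sst") // hypothetical sstable path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Advise the whole file (offset 0, length 0 means "to end of file").
	if err := unix.Fadvise(int(f.Fd()), 0, 0, unix.FADV_SEQUENTIAL); err != nil {
		log.Fatal(err)
	}
}
```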
What I don't really understand is: why is loading a data block from disk roughly
2x slower while a compaction is actively running?
At first I suspected that with too many concurrent reads (compaction is concurrent,
so there may be many concurrent disk reads in the system), file reading efficiency
would decrease. However, only the data block loads in normal Get operations slowed
down, not the compaction reads. And after I changed all concurrency to single-threaded
sequential reading, the same phenomenon still occurred.
Do you have any insight into this weird phenomenon, and any suggestions for how to
address it?
The branch I used for debugging: https://github.com/rjl493456442/pebble/commits/gary-debug/
Jira issue: PEBBLE-286