Demonstrations of writeback, the Linux bpftrace/eBPF version.

This tool traces when the kernel writeback procedure is writing dirtied pages
to disk, and shows details such as the time, device numbers (major:minor),
the reason for the writeback, and its duration. For example:

# writeback.bt
Attaching 4 probes...
Tracing writeback... Hit Ctrl-C to end.
TIME      DEVICE   PAGES    REASON           ms
23:28:47  259:1    15791    periodic         0.005
23:28:48  259:0    15792    periodic         0.004
23:28:52  259:1    15784    periodic         0.003
23:28:53  259:0    18682    periodic         0.003
23:28:55  259:0    41970    background       326.663
23:28:56  259:0    18418    background       332.689
23:28:56  259:0    60402    background       362.446
23:28:57  259:1    18230    periodic         0.005
23:28:57  259:1    65492    background       3.343
23:28:57  259:1    65492    background       0.002
23:28:58  259:0    36850    background       0.000
23:28:58  259:0    13298    background       597.198
23:28:58  259:0    55282    background       322.050
23:28:59  259:0    31730    background       336.031
23:28:59  259:0    8178     background       357.119
23:29:01  259:0    50162    background       1803.146
23:29:02  259:0    27634    background       1311.876
23:29:03  259:0    6130     background       331.599
23:29:03  259:0    50162    background       293.968
23:29:03  259:0    28658    background       284.946
23:29:03  259:0    7154     background       286.572
[...]

By looking at the timestamps and latencies, it can be seen that the system
was not spending much time in writeback until 23:28:55, when "background"
writeback began, taking over 300 milliseconds per flush.
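
The per-flush latency in the last column is the interval between a writeback
start event and its completion. The following is a minimal sketch of that
approach, not the shipped writeback.bt source: it assumes the
writeback:writeback_start and writeback:writeback_written tracepoints and
their name, sb_dev, nr_pages, and reason fields, which can be verified under
/sys/kernel/debug/tracing/events/writeback/:

#!/usr/bin/env bpftrace

BEGIN
{
	printf("%-9s %-8s %-8s %-8s %s\n", "TIME", "DEVICE", "PAGES",
	    "REASON", "ms");
}

tracepoint:writeback:writeback_start
{
	// a flush begins: record start time and page count, keyed on device
	@start[args->sb_dev] = nsecs;
	@pages[args->sb_dev] = args->nr_pages;
}

tracepoint:writeback:writeback_written
{
	// the flush completed: compute its duration in microseconds
	$s = @start[args->sb_dev];
	delete(@start[args->sb_dev]);
	$us = $s ? (nsecs - $s) / 1000 : 0;

	time("%H:%M:%S  ");
	// args->name is the bdi name ("major:minor" for block devices);
	// the reason is printed here as its raw enum value
	printf("%-8s %-8d %-8d %d.%03d\n", args->name,
	    @pages[args->sb_dev], args->reason, $us / 1000, $us % 1000);
	delete(@pages[args->sb_dev]);
}

END
{
	clear(@start);
	clear(@pages);
}

This sketch prints the reason as a raw number; the output above comes from a
tool that translates it to a string ("periodic", "background", etc.) using
the enum documented in the tracepoint format files.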

If the timestamps of heavy writeback coincide with times when applications
suffered performance issues, that is a clue that the two are correlated and
that there is contention for the disk devices. Writeback behavior can be
tuned via various kernel parameters, e.g., vm.dirty_writeback_centisecs.
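
For example, the interval of periodic writeback can be inspected and adjusted
with sysctl; 500 centiseconds (5 seconds) is the usual default:

# sysctl vm.dirty_writeback_centisecs
vm.dirty_writeback_centisecs = 500

The threshold at which "background" writeback kicks in is controlled
separately, by vm.dirty_background_ratio (or vm.dirty_background_bytes).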