Linux perf: Why is the sampling frequency set to 99Hz instead of 100Hz?


When we use perf to sample, we often set a frequency of 99 instead of 100, such as:

```
# sudo perf record -F 99 -a -g -- sleep 20
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.560 MB (~24472 samples) ]
```

Options are:

  • -F 99: sample at 99 Hertz (samples per second). I'll sometimes sample faster than this (up to 999 Hertz), but that also costs overhead. 99 Hertz should be negligible. Also, the value '99' and not '100' is to avoid lockstep sampling, which can produce skewed results.

The explanation above, then, is that sampling at 100 Hz can cause lockstep sampling.

So, what is lockstep sampling? One answer explains it this way:

Lockstep sampling is when the profiling samples occur at the same frequency as a loop in the application. The result of this would be that the sample often occurs at the same place in the loop, so it will think that that operation is the most common operation, and a likely bottleneck.

An analogy would be if you were trying to determine whether a road experiences congestion, and you sample it every 24 hours. That sample is likely to be in lock-step with traffic variation; if it's at 8am or 5pm, it will coincide with rush hour and conclude that the road is extremely busy; if it's at 3am it will conclude that there's practically no traffic at all.

For sampling to be accurate, it needs to avoid this. Ideally, the samples should be much more frequent than any cycles in the application, or at random intervals, so that the chance it occurs in any particular operation is proportional to the amount of time that operation takes. But this is often not feasible, so the next best thing is to use a sampling rate that doesn't coincide with the likely frequency of program cycles. If there are enough cycles in the program, this should ensure that the samples take place at many different offsets from the beginning of each cycle.
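The drift effect described above is easy to demonstrate with a toy simulation (a sketch with made-up numbers, not perf itself): a loop that spends 90% of each iteration in one operation and 10% in another, sampled at a lockstep rate versus a slightly offset one.

```python
# Sketch: simulate lockstep vs. non-lockstep sampling of a periodic program.
# The loop takes 10 ms per iteration: 9 ms in "work", then 1 ms in "log".
# All names and timings here are illustrative, not taken from perf.
from collections import Counter

LOOP_MS = 10.0  # one loop iteration lasts 10 ms (a 100 Hz loop)

def phase_at(t_ms):
    """Return which operation is running at time t_ms."""
    return "work" if t_ms % LOOP_MS < 9.0 else "log"

def sample(period_ms, n_samples=10_000):
    """Sample the program every period_ms; return the fraction per operation."""
    counts = Counter(phase_at(i * period_ms) for i in range(n_samples))
    return {op: counts[op] / n_samples for op in ("work", "log")}

# Lockstep: 100 Hz sampling of a 100 Hz loop. Every sample lands at the
# same offset in the loop, so one operation absorbs 100% of the samples.
print(sample(period_ms=10.0))       # {'work': 1.0, 'log': 0.0}

# Non-lockstep: 99 Hz sampling drifts through the loop on every iteration,
# so the sample counts approach the true 90% / 10% split of time spent.
print(sample(period_ms=1000 / 99))
```

At 99 Hz each sample lands about 0.1 ms later in the loop than the previous one, so over 99 samples the profiler visits every offset in the iteration, and the profile converges to the real time distribution.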

To relate this to the above analogy, sampling every 23 hours or at random times each day will cause the samples to eventually encounter all times of the day; every 23-day cycle of samples will include all hours of the day. This produces a more complete picture of the traffic levels. And sampling every hour would provide a complete picture in just a few weeks.
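The analogy can be checked with simple modular arithmetic (a quick sketch): sampling every 24 hours always lands on the same hour, while sampling every 23 hours drifts by one hour per day and eventually visits all 24, because 23 and 24 share no common factor.

```python
# Sketch: which hours of the day does a fixed sampling interval visit?
def hours_covered(interval_h, n_samples=48):
    """Hours of the day hit when sampling every interval_h hours."""
    return sorted({(i * interval_h) % 24 for i in range(n_samples)})

print(hours_covered(24))  # [0] -- always the same hour (lockstep)
print(hours_covered(23))  # [0, 1, ..., 23] -- drifts through every hour
```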

I'm not sure why odd-numbered frequencies are likely to ensure this. It seems to be based on an assumption that there are natural frequencies for program operations, and these are even.
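One way to read this (my own illustration, not from the quoted answer): the useful property of 99 is not oddness as such, but that it shares no large common factor with the periodic activity already on the system, such as the kernel timer tick, which commonly runs at a CONFIG_HZ of 100, 250, 300, or 1000.

```python
# Sketch: compare how 99 Hz and 100 Hz sampling line up with common
# kernel timer tick rates (typical CONFIG_HZ values).
from math import gcd

for rate_hz in (100, 99):
    # gcd > 1 means the sample clock can resonate with that tick rate
    shared = {tick: gcd(rate_hz, tick) for tick in (100, 250, 300, 1000)}
    print(rate_hz, shared)

# 100 Hz shares a large factor with every common tick rate (it even
# divides 1000 evenly), so samples can repeatedly land on timer-driven
# activity. 99 Hz shares at most a factor of 3 (with 300), so samples drift.
```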

I believe this concept is very helpful when doing time-based sampling for performance analysis.
