perf: Reimplement frequency driven sampling
There was a bug in the old period code that caused intel_pmu_enable_all() or native_write_msr_safe() to show up quite high in the profiles.

Staring at that code made my head hurt, so I rewrote it in a hopefully simpler fashion. It's now fully symmetric between tick and overflow driven adjustments, and uses less data to boot.

The only complication is that it basically wants to do a u128 division. The code approximates that in a rather simple "truncate until it fits" fashion, taking care to balance the terms while truncating.

This version does not generate that sampling artefact.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
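To make the "truncate until it fits" idea concrete, here is a minimal C sketch of the approximation the message describes. The names (calc_period_sketch, fls64_sketch) are hypothetical; this illustrates the technique, it is not the kernel's actual perf code. The target is period = (count * NSEC_PER_SEC) / (nsec * sample_freq), where both products can overflow a u64, so one term on each side is halved per round; dividing numerator and denominator by the same factor preserves the quotient, and always picking the larger term on a side keeps the precision loss balanced:

	#include <stdint.h>

	typedef uint64_t u64;

	#define NSEC_PER_SEC	1000000000ULL

	/* Position of the highest set bit, 1-based; 0 when x == 0. */
	static int fls64_sketch(u64 x)
	{
		int bits = 0;

		while (x) {
			bits++;
			x >>= 1;
		}
		return bits;
	}

	static u64 calc_period_sketch(u64 count, u64 nsec, u64 sample_freq)
	{
		u64 sec = NSEC_PER_SEC;

		/*
		 * fls(a) + fls(b) <= 64 guarantees a * b fits in a u64,
		 * so keep truncating until both products are safe.
		 */
		while (fls64_sketch(count) + fls64_sketch(sec) > 64 ||
		       fls64_sketch(nsec) + fls64_sketch(sample_freq) > 64) {
			/* Halve the dominant term of the dividend ... */
			if (count >= sec)
				count >>= 1;
			else
				sec >>= 1;

			/* ... and of the divisor, keeping the ratio intact. */
			if (nsec >= sample_freq)
				nsec >>= 1;
			else
				sample_freq >>= 1;
		}

		if (!nsec || !sample_freq)
			return 0;

		return (count * sec) / (nsec * sample_freq);
	}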
commit abd5071394 (parent ef12a14130), committed by Ingo Molnar
@@ -498,9 +498,8 @@ struct hw_perf_event {
 	atomic64_t			period_left;
 	u64				interrupts;
 
-	u64				freq_count;
-	u64				freq_interrupts;
-	u64				freq_stamp;
+	u64				freq_time_stamp;
+	u64				freq_count_stamp;
 #endif
 };
 
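The diff replaces three per-event frequency counters with two stamps. One plausible shape for how such stamps support the symmetric tick/overflow adjustment the message describes is sketched below; the struct and function are hypothetical stand-ins (not the kernel's hw_perf_event or its real call paths), reusing calc_period_sketch() from the sketch above. Both paths take deltas against the same pair of stamps and feed them to the same period calculation:

	/* Hypothetical stand-in for the relevant hw_perf_event fields. */
	struct hw_sketch {
		u64	sample_freq;
		u64	sample_period;
		u64	freq_time_stamp;	/* time at last adjustment  */
		u64	freq_count_stamp;	/* count at last adjustment */
	};

	/*
	 * Called with the current time and event count from either the
	 * tick path or the overflow path, so both run the exact same
	 * delta-based recalculation.
	 */
	static void adjust_period_sketch(struct hw_sketch *hwc,
					 u64 now_ns, u64 now_count)
	{
		u64 nsec  = now_ns - hwc->freq_time_stamp;
		u64 count = now_count - hwc->freq_count_stamp;

		hwc->freq_time_stamp  = now_ns;
		hwc->freq_count_stamp = now_count;

		if (nsec && count)
			hwc->sample_period = calc_period_sketch(count, nsec,
								hwc->sample_freq);
	}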