On 08/12/2016 12:25 PM, Binoy Jayan wrote:
From: Daniel Wagner daniel.wagner@bmw-carit.de
Finally, we place a few tracepoints at the end of the critical sections. With the hist trigger in place we can generate the plots.
There are a few drawbacks compared to the latency_hist.patch [1]:
The latency plots contain the values from all CPUs. In theory you can also filter with something like
'hist:key=latency.bucket:val=hitcount:sort=latency if cpu==0'
but I haven't got this working, and I didn't spend much time figuring out why. Even if the above were working, you still wouldn't get the per-CPU breakdown of the events. I don't know if that is a must-have feature.
Another point is the need to place at least one tracepoint so that the hist code has something to work with. I also think it would be a good idea to reuse the time diff from the tracer instead; some refactoring would be necessary for that. For simplicity I just added a hack for getting the time diff. And I am not really sure if it is okay to use *_enabled() in this way, splitting the timestamping and the tracepoint emission into two sections: if the event gets enabled or disabled between start and stop, the stop side could compute a diff against a stale or never-written timestamp.
Steven was tossing around the idea of introducing a 'variable' for tracepoints which could be used for timestamping etc. I'd like to avoid placing two tracepoints and computing the time diff afterwards; that sounds like too much overhead.
Not for inclusion!
Not-Signed-off-by: Daniel Wagner daniel.wagner@bmw-carit.de
[1] https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit/?...
Other changes:
- Added the field 'cpu' to the trace event entry struct so as to
capture per-cpu breakdown of events.
- Triggers for a CPU-specific breakdown of events (a matching event
  definition is sketched after this list):
  'hist:key=cpu,latency:val=hitcount:sort=latency'
  'hist:key=cpu,latency:val=hitcount:sort=latency if cpu==1'
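For reference, a trace event whose fields match those trigger keys could look roughly like the sketch below. The event name is taken from the hunks further down and the cpu/latency fields from the triggers; the field types, the TP_printk format, and the header location are assumptions, not necessarily what the patch defines.

	/* Sketch only; such a definition would live in a trace header,
	 * e.g. include/trace/events/latency.h (location assumed).
	 */
	TRACE_EVENT(latency_critical_timings,

		TP_PROTO(int cpu, u64 latency),

		TP_ARGS(cpu, latency),

		TP_STRUCT__entry(
			__field(int, cpu)
			__field(u64, latency)
		),

		TP_fast_assign(
			__entry->cpu = cpu;
			__entry->latency = latency;
		),

		/* 'latency' is the field the hist triggers key and sort on */
		TP_printk("cpu=%d latency=%llu",
			  __entry->cpu, __entry->latency)
	);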
Not-Signed-off-by: Binoy Jayan binoy.jayan@linaro.org
I think you need to update the commit message where I am babbling about what is not working. I don't mind if you completely take over this patch (becoming the author); just mention in the commit message that it is based on my work. That should do the trick.
@@ -422,6 +429,11 @@ stop_critical_timing(unsigned long ip, unsigned long parent_ip)
 /* start and stop critical timings used to for stoppage (in idle) */
 void start_critical_timings(void)
 {
+	if (trace_latency_critical_timings_enabled()) {
+		int cpu = raw_smp_processor_id();
+		per_cpu(ts_critical_timings, cpu) = ftrace_now(cpu);
+	}
+
 	if (preempt_trace() || irq_trace())
 		start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
 }
@@ -431,6 +443,14 @@ void stop_critical_timings(void)
 {
 	if (preempt_trace() || irq_trace())
 		stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
+
+	if (trace_latency_critical_timings_enabled()) {
+		int cpu = raw_smp_processor_id();
+
+		trace_latency_critical_timings(cpu,
+			ftrace_now(cpu) -
+			per_cpu(ts_critical_timings, cpu));
+	}
 }
 EXPORT_SYMBOL_GPL(stop_critical_timings);
Hmm, maybe this is just my bikeshedding speaking here, but I suggest you copy the style of start_critical_timing() and stop_critical_timing() and move the implementation into a small inline function; as it stands it clutters the reading flow too much. The same is true for the following hunks.
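A sketch of what I mean, with made-up helper names:

	static inline void latency_critical_start(void)
	{
		if (trace_latency_critical_timings_enabled()) {
			int cpu = raw_smp_processor_id();

			per_cpu(ts_critical_timings, cpu) = ftrace_now(cpu);
		}
	}

	static inline void latency_critical_stop(void)
	{
		if (trace_latency_critical_timings_enabled()) {
			int cpu = raw_smp_processor_id();

			trace_latency_critical_timings(cpu,
				ftrace_now(cpu) -
				per_cpu(ts_critical_timings, cpu));
		}
	}

	/* The callers then stay short and easy to read: */
	void start_critical_timings(void)
	{
		latency_critical_start();

		if (preempt_trace() || irq_trace())
			start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
	}

	void stop_critical_timings(void)
	{
		if (preempt_trace() || irq_trace())
			stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);

		latency_critical_stop();
	}
	EXPORT_SYMBOL_GPL(stop_critical_timings);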
cheers, daniel