Diffstat (limited to 'Documentation/trace')
-rw-r--r--  Documentation/trace/coresight-cpu-debug.txt |   4
-rw-r--r--  Documentation/trace/coresight.txt            | 103
-rw-r--r--  Documentation/trace/events.rst               |   8
-rw-r--r--  Documentation/trace/ftrace-uses.rst          |   6
-rw-r--r--  Documentation/trace/ftrace.rst               |  12
-rw-r--r--  Documentation/trace/histogram.txt            | 570
-rw-r--r--  Documentation/trace/intel_th.rst             |   2
-rw-r--r--  Documentation/trace/tracepoint-analysis.rst  |   8
8 files changed, 659 insertions(+), 54 deletions(-)
diff --git a/Documentation/trace/coresight-cpu-debug.txt b/Documentation/trace/coresight-cpu-debug.txt
index 2b9b51cd501e..89ab09e78e8d 100644
--- a/Documentation/trace/coresight-cpu-debug.txt
+++ b/Documentation/trace/coresight-cpu-debug.txt
@@ -177,11 +177,11 @@ Here is an example of the debugging output format:
ARM external debug module:
coresight-cpu-debug 850000.debug: CPU[0]:
coresight-cpu-debug 850000.debug: EDPRSR: 00000001 (Power:On DLK:Unlock)
-coresight-cpu-debug 850000.debug: EDPCSR: [<ffff00000808e9bc>] handle_IPI+0x174/0x1d8
+coresight-cpu-debug 850000.debug: EDPCSR: handle_IPI+0x174/0x1d8
coresight-cpu-debug 850000.debug: EDCIDSR: 00000000
coresight-cpu-debug 850000.debug: EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0)
coresight-cpu-debug 852000.debug: CPU[1]:
coresight-cpu-debug 852000.debug: EDPRSR: 00000001 (Power:On DLK:Unlock)
-coresight-cpu-debug 852000.debug: EDPCSR: [<ffff0000087fab34>] debug_notifier_call+0x23c/0x358
+coresight-cpu-debug 852000.debug: EDPCSR: debug_notifier_call+0x23c/0x358
coresight-cpu-debug 852000.debug: EDCIDSR: 00000000
coresight-cpu-debug 852000.debug: EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0)
diff --git a/Documentation/trace/coresight.txt b/Documentation/trace/coresight.txt
index 6f0120c3a4f1..efbc832146e7 100644
--- a/Documentation/trace/coresight.txt
+++ b/Documentation/trace/coresight.txt
@@ -187,13 +187,19 @@ that can be performed on them (see "struct coresight_ops"). The
specific to that component only. "Implementation defined" customisations are
expected to be accessed and controlled using those entries.
-Last but not least, "struct module *owner" is expected to be set to reflect
-the information carried in "THIS_MODULE".
How to use the tracer modules
-----------------------------
-Before trace collection can start, a coresight sink needs to be identify.
+There are two ways to use the Coresight framework: 1) using the perf cmd line
+tools and 2) interacting directly with the Coresight devices using the sysFS
+interface. Preference is given to the former as using the sysFS interface
+requires a deep understanding of the Coresight HW. The following sections
+provide details on using both methods.
+
+1) Using the sysFS interface:
+
+Before trace collection can start, a coresight sink needs to be identified.
There is no limit on the number of sinks (nor sources) that can be enabled at
any given moment. As a generic operation, all devices pertaining to the sink
class will have an "active" entry in sysfs:
@@ -298,42 +304,48 @@ Instruction 13570831 0x8026B584 E28DD00C false ADD
Instruction 0 0x8026B588 E8BD8000 true LDM sp!,{pc}
Timestamp Timestamp: 17107041535
-How to use the STM module
--------------------------
+2) Using the perf framework:
-Using the System Trace Macrocell module is the same as the tracers - the only
-difference is that clients are driving the trace capture rather
-than the program flow through the code.
+Coresight tracers are represented using the Perf framework's Performance
+Monitoring Unit (PMU) abstraction. As such the perf framework takes charge of
+controlling when tracing gets enabled based on when the process of interest is
+scheduled. When configured in a system, Coresight PMUs will be listed when
+queried by the perf command line tool:
-As with any other CoreSight component, specifics about the STM tracer can be
-found in sysfs with more information on each entry being found in [1]:
+ linaro@linaro-nano:~$ ./perf list pmu
-root@genericarmv8:~# ls /sys/bus/coresight/devices/20100000.stm
-enable_source hwevent_select port_enable subsystem uevent
-hwevent_enable mgmt port_select traceid
-root@genericarmv8:~#
+ List of pre-defined events (to be used in -e):
-Like any other source a sink needs to be identified and the STM enabled before
-being used:
+ cs_etm// [Kernel PMU event]
-root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20010000.etf/enable_sink
-root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20100000.stm/enable_source
+ linaro@linaro-nano:~$
-From there user space applications can request and use channels using the devfs
-interface provided for that purpose by the generic STM API:
+Regardless of the number of tracers available in a system (usually equal to the
+number of processor cores), the "cs_etm" PMU will be listed only once.
-root@genericarmv8:~# ls -l /dev/20100000.stm
-crw------- 1 root root 10, 61 Jan 3 18:11 /dev/20100000.stm
-root@genericarmv8:~#
+A Coresight PMU works the same way as any other PMU, i.e. the name of the PMU is
+listed along with configuration options within forward slashes '/'. Since a
+Coresight system will typically have more than one sink, the name of the sink to
+work with needs to be specified as an event option. Names of sinks to choose
+from are listed in sysFS under ($SYSFS)/bus/coresight/devices:
-Details on how to use the generic STM API can be found here [2].
+ root@linaro-nano:~# ls /sys/bus/coresight/devices/
+ 20010000.etf 20040000.funnel 20100000.stm 22040000.etm
+ 22140000.etm 230c0000.funnel 23240000.etm 20030000.tpiu
+ 20070000.etr 20120000.replicator 220c0000.funnel
+ 23040000.etm 23140000.etm 23340000.etm
-[1]. Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
-[2]. Documentation/trace/stm.txt
+ root@linaro-nano:~# perf record -e cs_etm/@20070000.etr/u --per-thread program
+The syntax within the forward slashes '/' is important. The '@' character
+tells the parser that a sink is about to be specified and that this is the sink
+to use for the trace session.
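+
+As an illustration (the sink name and the 'ls' workload are arbitrary; pick
+a sink from the sysFS listing above), a minimal capture-and-report session
+could look like:
+
+	root@linaro-nano:~# perf record -e cs_etm/@20070000.etr/u --per-thread ls
+	root@linaro-nano:~# perf report --stdio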
-Using perf tools
-----------------
+More information on the above and other examples on how to use Coresight with
+the perf tools can be found in the "HOWTO.md" file of the openCSD GitHub
+repository [3].
+
+2.1) AutoFDO analysis using the perf tools:
perf can be used to record and analyze trace of programs.
@@ -381,3 +393,38 @@ sort example is from the AutoFDO tutorial (https://gcc.gnu.org/wiki/AutoFDO/Tuto
$ taskset -c 2 ./sort_autofdo
Bubble sorting array of 30000 elements
5806 ms
+
+
+How to use the STM module
+-------------------------
+
+Using the System Trace Macrocell module is the same as using the tracers - the
+only difference is that the trace capture is driven by clients rather than
+by the program flow through the code.
+
+As with any other CoreSight component, specifics about the STM tracer can be
+found in sysfs with more information on each entry being found in [1]:
+
+root@genericarmv8:~# ls /sys/bus/coresight/devices/20100000.stm
+enable_source hwevent_select port_enable subsystem uevent
+hwevent_enable mgmt port_select traceid
+root@genericarmv8:~#
+
+Like any other source, a sink needs to be identified and the STM enabled before
+being used:
+
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20010000.etf/enable_sink
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20100000.stm/enable_source
+
+From there user space applications can request and use channels using the devfs
+interface provided for that purpose by the generic STM API:
+
+root@genericarmv8:~# ls -l /dev/20100000.stm
+crw------- 1 root root 10, 61 Jan 3 18:11 /dev/20100000.stm
+root@genericarmv8:~#
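+
+For instance, assuming a suitable stp-policy has been configured (see [2]
+for details on channel allocation), a trace message can be emitted from the
+shell with a plain write - a minimal sketch:
+
+root@genericarmv8:~# echo "hello" > /dev/20100000.stm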
+
+Details on how to use the generic STM API can be found here [2].
+
+[1]. Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
+[2]. Documentation/trace/stm.rst
+[3]. https://github.com/Linaro/perf-opencsd
diff --git a/Documentation/trace/events.rst b/Documentation/trace/events.rst
index a5ea2cb0082b..696dc69b8158 100644
--- a/Documentation/trace/events.rst
+++ b/Documentation/trace/events.rst
@@ -8,7 +8,7 @@ Event Tracing
1. Introduction
===============
-Tracepoints (see Documentation/trace/tracepoints.txt) can be used
+Tracepoints (see Documentation/trace/tracepoints.rst) can be used
without creating custom kernel modules to register probe functions
using the event tracing infrastructure.
@@ -338,10 +338,14 @@ used for conditionally invoking triggers.
The syntax for event triggers is roughly based on the syntax for
set_ftrace_filter 'ftrace filter commands' (see the 'Filter commands'
-section of Documentation/trace/ftrace.txt), but there are major
+section of Documentation/trace/ftrace.rst), but there are major
differences and the implementation isn't currently tied to it in any
way, so beware about making generalizations between the two.
+Note: Writing into trace_marker (see Documentation/trace/ftrace.rst)
+ can also enable triggers that are written into
+ /sys/kernel/tracing/events/ftrace/print/trigger.
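+
+ For example, a minimal sketch (any event trigger works here; the
+ 'traceoff' trigger stops tracing when the marker is written):
+
+	# echo 'traceoff' > /sys/kernel/tracing/events/ftrace/print/trigger
+	# echo 'hello' > /sys/kernel/tracing/trace_marker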
+
6.1 Expression syntax
---------------------
diff --git a/Documentation/trace/ftrace-uses.rst b/Documentation/trace/ftrace-uses.rst
index 998a60a93015..1fbc69894eed 100644
--- a/Documentation/trace/ftrace-uses.rst
+++ b/Documentation/trace/ftrace-uses.rst
@@ -12,7 +12,7 @@ Written for: 4.14
Introduction
============
-The ftrace infrastructure was originially created to attach callbacks to the
+The ftrace infrastructure was originally created to attach callbacks to the
beginning of functions in order to record and trace the flow of the kernel.
But callbacks to the start of a function can have other use cases. Either
for live kernel patching, or for security monitoring. This document describes
@@ -30,7 +30,7 @@ The ftrace context
This requires extra care to what can be done inside a callback. A callback
can be called outside the protective scope of RCU.
-The ftrace infrastructure has some protections agains recursions and RCU
+The ftrace infrastructure has some protections against recursions and RCU
but one must still be very careful how they use the callbacks.
@@ -199,7 +199,7 @@ If @buf is NULL and reset is set, all functions will be enabled for tracing.
The @buf can also be a glob expression to enable all functions that
match a specific pattern.
-See Filter Commands in :file:`Documentation/trace/ftrace.txt`.
+See Filter Commands in :file:`Documentation/trace/ftrace.rst`.
To just trace the schedule function:
diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 67d9c38e95eb..a20d34955333 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -224,6 +224,8 @@ of ftrace. Here is a list of some of the key files:
has a side effect of enabling or disabling specific functions
to be traced. Echoing names of functions into this file
will limit the trace to only those functions.
+ This influences the tracers "function" and "function_graph"
+ and thus also function profiling (see "function_profile_enabled").
The functions listed in "available_filter_functions" are what
can be written into this file.
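
  A minimal sketch of limiting the function tracer this way (the function
  name is just an example):

    # echo schedule > set_ftrace_filter
    # echo function > current_tracer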
@@ -265,6 +267,8 @@ of ftrace. Here is a list of some of the key files:
Functions listed in this file will cause the function graph
tracer to only trace these functions and the functions that
they call. (See the section "dynamic ftrace" for more details).
+ Note, set_ftrace_filter and set_ftrace_notrace still affect
+ what functions are being traced.
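+
+	For example, a minimal sketch (again, the function name is just an
+	example):
+
+	  # echo schedule > set_graph_function
+	  # echo function_graph > current_tracer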
set_graph_notrace:
@@ -277,7 +281,8 @@ of ftrace. Here is a list of some of the key files:
This lists the functions that ftrace has processed and can trace.
These are the function names that you can pass to
- "set_ftrace_filter" or "set_ftrace_notrace".
+ "set_ftrace_filter", "set_ftrace_notrace",
+ "set_graph_function", or "set_graph_notrace".
(See the section "dynamic ftrace" below for more details.)
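
  For example, one can check whether a given function can be traced:

    # grep '^schedule$' available_filter_functions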
dyn_ftrace_total_info:
@@ -507,6 +512,11 @@ of ftrace. Here is a list of some of the key files:
trace_fd = open("trace_marker", WR_ONLY);
+ Note: Writing into the trace_marker file can also initiate triggers
+ that are written into /sys/kernel/tracing/events/ftrace/print/trigger
+ See "Event triggers" in Documentation/trace/events.rst and an
+ example in Documentation/trace/histogram.rst (Section 3).
+
trace_marker_raw:
This is similar to trace_marker above, but is meant for binary data
diff --git a/Documentation/trace/histogram.txt b/Documentation/trace/histogram.txt
index 6e05510afc28..7ffea6aa22e3 100644
--- a/Documentation/trace/histogram.txt
+++ b/Documentation/trace/histogram.txt
@@ -7,7 +7,7 @@
Histogram triggers are special event triggers that can be used to
aggregate trace event data into histograms. For information on
- trace events and event triggers, see Documentation/trace/events.txt.
+ trace events and event triggers, see Documentation/trace/events.rst.
2. Histogram Trigger Command
@@ -1604,7 +1604,6 @@
Entries: 7
Dropped: 0
-
2.2 Inter-event hist triggers
-----------------------------
@@ -1730,35 +1729,35 @@ If a variable isn't a key variable or prefixed with 'vals=', the
associated event field will be saved in a variable but won't be summed
as a value:
- # echo 'hist:keys=next_pid:ts1=common_timestamp ... >> event/trigger
+ # echo 'hist:keys=next_pid:ts1=common_timestamp ...' >> event/trigger
Multiple variables can be assigned at the same time. The below would
result in both ts0 and b being created as variables, with both
common_timestamp and field1 additionally being summed as values:
- # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ... >> \
+ # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ...' >> \
event/trigger
Note that variable assignments can appear either preceding or
following their use. The command below behaves identically to the
command above:
- # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ... >> \
+ # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ...' >> \
event/trigger
Any number of variables not bound to a 'vals=' prefix can also be
assigned by simply separating them with colons. Below is the same
thing but without the values being summed in the histogram:
- # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ... >> event/trigger
+ # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ...' >> event/trigger
Variables set as above can be referenced and used in expressions on
another event.
For example, here's how a latency can be calculated:
- # echo 'hist:keys=pid,prio:ts0=common_timestamp ... >> event1/trigger
- # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ... >> event2/trigger
+ # echo 'hist:keys=pid,prio:ts0=common_timestamp ...' >> event1/trigger
+ # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ...' >> event2/trigger
In the first line above, the event's timestamp is saved into the
variable ts0. In the next line, ts0 is subtracted from the second
@@ -1767,7 +1766,7 @@ yet another variable, 'wakeup_lat'. The hist trigger below in turn
makes use of the wakeup_lat variable to compute a combined latency
using the same key and variable from yet another event:
- # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ... >> event3/trigger
+ # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger
2.2.2 Synthetic Events
----------------------
@@ -1808,10 +1807,11 @@ the command that defined it with a '!':
At this point, there isn't yet an actual 'wakeup_latency' event
instantiated in the event subsystem - for this to happen, a 'hist
trigger action' needs to be instantiated and bound to actual fields
-and variables defined on other events (see Section 6.3.3 below).
+and variables defined on other events (see Section 2.2.3 below on
+how that is done using the hist trigger 'onmatch' action). Once that is
+done, the 'wakeup_latency' synthetic event instance is created.
-Once that is done, an event instance is created, and a histogram can
-be defined using it:
+A histogram can now be defined for the new synthetic event:
# echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \
/sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
@@ -1961,7 +1961,7 @@ hist trigger specification.
back to that pid, the timestamp difference is calculated. If the
resulting latency, stored in wakeup_lat, exceeds the current
maximum latency, the values specified in the save() fields are
- recoreded:
+ recorded:
# echo 'hist:keys=pid:ts0=common_timestamp.usecs \
if comm=="cyclictest"' >> \
@@ -1993,3 +1993,547 @@ hist trigger specification.
Hits: 12970
Entries: 2
Dropped: 0
+
+3. User space creating a trigger
+--------------------------------
+
+Writing into /sys/kernel/tracing/trace_marker writes into the ftrace
+ring buffer. Such a write can also act like an event, by writing a trigger
+into the file located in /sys/kernel/tracing/events/ftrace/print/trigger
+
+Modify cyclictest to write into the trace_marker file before it sleeps
+and after it wakes up, with something like this:
+
+static void traceputs(char *str)
+{
+ /* tracemark_fd is the trace_marker file descriptor */
+ if (tracemark_fd < 0)
+ return;
+ /* write the tracemark message */
+ write(tracemark_fd, str, strlen(str));
+}
+
+And later add something like:
+
+ traceputs("start");
+ clock_nanosleep(...);
+ traceputs("end");
+
+We can make a histogram from this:
+
+ # cd /sys/kernel/tracing
+ # echo 'latency u64 lat' > synthetic_events
+ # echo 'hist:keys=common_pid:ts0=common_timestamp.usecs if buf == "start"' > events/ftrace/print/trigger
+ # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(ftrace.print).latency($lat) if buf == "end"' >> events/ftrace/print/trigger
+ # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger
+
+The above created a synthetic event called "latency" and two histograms
+against the trace_marker: one gets triggered when "start" is written into the
+trace_marker file and the other when "end" is written. If the pids match, then
+it will call the "latency" synthetic event with the calculated latency as its
+parameter. Finally, a histogram is added to the latency synthetic event to
+record the calculated latency along with the pid.
+
+Now running cyclictest with:
+
+ # ./cyclictest -p80 -d0 -i250 -n -a -t --tracemark -b 1000
+
+ -p80 : run threads at priority 80
+ -d0 : have all threads run at the same interval
+ -i250 : start the interval at 250 microseconds (all threads will do this)
+ -n : sleep with nanosleep
+ -a : affine all threads to a separate CPU
+ -t : one thread per available CPU
+ --tracemark : enable trace mark writing
+ -b 1000 : stop if any latency is greater than 1000 microseconds
+
+Note, the -b 1000 is used just to make --tracemark available.
+
+Then we can see the histogram created by this with:
+
+ # cat events/synthetic/latency/hist
+# event histogram
+#
+# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
+#
+
+{ lat: 107, common_pid: 2039 } hitcount: 1
+{ lat: 122, common_pid: 2041 } hitcount: 1
+{ lat: 166, common_pid: 2039 } hitcount: 1
+{ lat: 174, common_pid: 2039 } hitcount: 1
+{ lat: 194, common_pid: 2041 } hitcount: 1
+{ lat: 196, common_pid: 2036 } hitcount: 1
+{ lat: 197, common_pid: 2038 } hitcount: 1
+{ lat: 198, common_pid: 2039 } hitcount: 1
+{ lat: 199, common_pid: 2039 } hitcount: 1
+{ lat: 200, common_pid: 2041 } hitcount: 1
+{ lat: 201, common_pid: 2039 } hitcount: 2
+{ lat: 202, common_pid: 2038 } hitcount: 1
+{ lat: 202, common_pid: 2043 } hitcount: 1
+{ lat: 203, common_pid: 2039 } hitcount: 1
+{ lat: 203, common_pid: 2036 } hitcount: 1
+{ lat: 203, common_pid: 2041 } hitcount: 1
+{ lat: 206, common_pid: 2038 } hitcount: 2
+{ lat: 207, common_pid: 2039 } hitcount: 1
+{ lat: 207, common_pid: 2036 } hitcount: 1
+{ lat: 208, common_pid: 2040 } hitcount: 1
+{ lat: 209, common_pid: 2043 } hitcount: 1
+{ lat: 210, common_pid: 2039 } hitcount: 1
+{ lat: 211, common_pid: 2039 } hitcount: 4
+{ lat: 212, common_pid: 2043 } hitcount: 1
+{ lat: 212, common_pid: 2039 } hitcount: 2
+{ lat: 213, common_pid: 2039 } hitcount: 1
+{ lat: 214, common_pid: 2038 } hitcount: 1
+{ lat: 214, common_pid: 2039 } hitcount: 2
+{ lat: 214, common_pid: 2042 } hitcount: 1
+{ lat: 215, common_pid: 2039 } hitcount: 1
+{ lat: 217, common_pid: 2036 } hitcount: 1
+{ lat: 217, common_pid: 2040 } hitcount: 1
+{ lat: 217, common_pid: 2039 } hitcount: 1
+{ lat: 218, common_pid: 2039 } hitcount: 6
+{ lat: 219, common_pid: 2039 } hitcount: 9
+{ lat: 220, common_pid: 2039 } hitcount: 11
+{ lat: 221, common_pid: 2039 } hitcount: 5
+{ lat: 221, common_pid: 2042 } hitcount: 1
+{ lat: 222, common_pid: 2039 } hitcount: 7
+{ lat: 223, common_pid: 2036 } hitcount: 1
+{ lat: 223, common_pid: 2039 } hitcount: 3
+{ lat: 224, common_pid: 2039 } hitcount: 4
+{ lat: 224, common_pid: 2037 } hitcount: 1
+{ lat: 224, common_pid: 2036 } hitcount: 2
+{ lat: 225, common_pid: 2039 } hitcount: 5
+{ lat: 225, common_pid: 2042 } hitcount: 1
+{ lat: 226, common_pid: 2039 } hitcount: 7
+{ lat: 226, common_pid: 2036 } hitcount: 4
+{ lat: 227, common_pid: 2039 } hitcount: 6
+{ lat: 227, common_pid: 2036 } hitcount: 12
+{ lat: 227, common_pid: 2043 } hitcount: 1
+{ lat: 228, common_pid: 2039 } hitcount: 7
+{ lat: 228, common_pid: 2036 } hitcount: 14
+{ lat: 229, common_pid: 2039 } hitcount: 9
+{ lat: 229, common_pid: 2036 } hitcount: 8
+{ lat: 229, common_pid: 2038 } hitcount: 1
+{ lat: 230, common_pid: 2039 } hitcount: 11
+{ lat: 230, common_pid: 2036 } hitcount: 6
+{ lat: 230, common_pid: 2043 } hitcount: 1
+{ lat: 230, common_pid: 2042 } hitcount: 2
+{ lat: 231, common_pid: 2041 } hitcount: 1
+{ lat: 231, common_pid: 2036 } hitcount: 6
+{ lat: 231, common_pid: 2043 } hitcount: 1
+{ lat: 231, common_pid: 2039 } hitcount: 8
+{ lat: 232, common_pid: 2037 } hitcount: 1
+{ lat: 232, common_pid: 2039 } hitcount: 6
+{ lat: 232, common_pid: 2040 } hitcount: 2
+{ lat: 232, common_pid: 2036 } hitcount: 5
+{ lat: 232, common_pid: 2043 } hitcount: 1
+{ lat: 233, common_pid: 2036 } hitcount: 5
+{ lat: 233, common_pid: 2039 } hitcount: 11
+{ lat: 234, common_pid: 2039 } hitcount: 4
+{ lat: 234, common_pid: 2038 } hitcount: 2
+{ lat: 234, common_pid: 2043 } hitcount: 2
+{ lat: 234, common_pid: 2036 } hitcount: 11
+{ lat: 234, common_pid: 2040 } hitcount: 1
+{ lat: 235, common_pid: 2037 } hitcount: 2
+{ lat: 235, common_pid: 2036 } hitcount: 8
+{ lat: 235, common_pid: 2043 } hitcount: 2
+{ lat: 235, common_pid: 2039 } hitcount: 5
+{ lat: 235, common_pid: 2042 } hitcount: 2
+{ lat: 235, common_pid: 2040 } hitcount: 4
+{ lat: 235, common_pid: 2041 } hitcount: 1
+{ lat: 236, common_pid: 2036 } hitcount: 7
+{ lat: 236, common_pid: 2037 } hitcount: 1
+{ lat: 236, common_pid: 2041 } hitcount: 5
+{ lat: 236, common_pid: 2039 } hitcount: 3
+{ lat: 236, common_pid: 2043 } hitcount: 9
+{ lat: 236, common_pid: 2040 } hitcount: 7
+{ lat: 237, common_pid: 2037 } hitcount: 1
+{ lat: 237, common_pid: 2040 } hitcount: 1
+{ lat: 237, common_pid: 2036 } hitcount: 9
+{ lat: 237, common_pid: 2039 } hitcount: 3
+{ lat: 237, common_pid: 2043 } hitcount: 8
+{ lat: 237, common_pid: 2042 } hitcount: 2
+{ lat: 237, common_pid: 2041 } hitcount: 2
+{ lat: 238, common_pid: 2043 } hitcount: 10
+{ lat: 238, common_pid: 2040 } hitcount: 1
+{ lat: 238, common_pid: 2037 } hitcount: 9
+{ lat: 238, common_pid: 2038 } hitcount: 1
+{ lat: 238, common_pid: 2039 } hitcount: 1
+{ lat: 238, common_pid: 2042 } hitcount: 3
+{ lat: 238, common_pid: 2036 } hitcount: 7
+{ lat: 239, common_pid: 2041 } hitcount: 1
+{ lat: 239, common_pid: 2043 } hitcount: 11
+{ lat: 239, common_pid: 2037 } hitcount: 11
+{ lat: 239, common_pid: 2038 } hitcount: 6
+{ lat: 239, common_pid: 2036 } hitcount: 7
+{ lat: 239, common_pid: 2040 } hitcount: 1
+{ lat: 239, common_pid: 2042 } hitcount: 9
+{ lat: 240, common_pid: 2037 } hitcount: 29
+{ lat: 240, common_pid: 2043 } hitcount: 15
+{ lat: 240, common_pid: 2040 } hitcount: 44
+{ lat: 240, common_pid: 2039 } hitcount: 1
+{ lat: 240, common_pid: 2041 } hitcount: 2
+{ lat: 240, common_pid: 2038 } hitcount: 1
+{ lat: 240, common_pid: 2036 } hitcount: 10
+{ lat: 240, common_pid: 2042 } hitcount: 13
+{ lat: 241, common_pid: 2036 } hitcount: 21
+{ lat: 241, common_pid: 2041 } hitcount: 36
+{ lat: 241, common_pid: 2037 } hitcount: 34
+{ lat: 241, common_pid: 2042 } hitcount: 14
+{ lat: 241, common_pid: 2040 } hitcount: 94
+{ lat: 241, common_pid: 2039 } hitcount: 12
+{ lat: 241, common_pid: 2038 } hitcount: 2
+{ lat: 241, common_pid: 2043 } hitcount: 28
+{ lat: 242, common_pid: 2040 } hitcount: 109
+{ lat: 242, common_pid: 2041 } hitcount: 506
+{ lat: 242, common_pid: 2039 } hitcount: 155
+{ lat: 242, common_pid: 2042 } hitcount: 21
+{ lat: 242, common_pid: 2037 } hitcount: 52
+{ lat: 242, common_pid: 2043 } hitcount: 21
+{ lat: 242, common_pid: 2036 } hitcount: 16
+{ lat: 242, common_pid: 2038 } hitcount: 156
+{ lat: 243, common_pid: 2037 } hitcount: 46
+{ lat: 243, common_pid: 2039 } hitcount: 40
+{ lat: 243, common_pid: 2042 } hitcount: 119
+{ lat: 243, common_pid: 2041 } hitcount: 611
+{ lat: 243, common_pid: 2036 } hitcount: 69
+{ lat: 243, common_pid: 2038 } hitcount: 784
+{ lat: 243, common_pid: 2040 } hitcount: 323
+{ lat: 243, common_pid: 2043 } hitcount: 14
+{ lat: 244, common_pid: 2043 } hitcount: 35
+{ lat: 244, common_pid: 2042 } hitcount: 305
+{ lat: 244, common_pid: 2039 } hitcount: 8
+{ lat: 244, common_pid: 2040 } hitcount: 4515
+{ lat: 244, common_pid: 2038 } hitcount: 371
+{ lat: 244, common_pid: 2037 } hitcount: 31
+{ lat: 244, common_pid: 2036 } hitcount: 114
+{ lat: 244, common_pid: 2041 } hitcount: 3396
+{ lat: 245, common_pid: 2036 } hitcount: 700
+{ lat: 245, common_pid: 2041 } hitcount: 2772
+{ lat: 245, common_pid: 2037 } hitcount: 268
+{ lat: 245, common_pid: 2039 } hitcount: 472
+{ lat: 245, common_pid: 2038 } hitcount: 2758
+{ lat: 245, common_pid: 2042 } hitcount: 3833
+{ lat: 245, common_pid: 2040 } hitcount: 3105
+{ lat: 245, common_pid: 2043 } hitcount: 645
+{ lat: 246, common_pid: 2038 } hitcount: 3451
+{ lat: 246, common_pid: 2041 } hitcount: 142
+{ lat: 246, common_pid: 2037 } hitcount: 5101
+{ lat: 246, common_pid: 2040 } hitcount: 68
+{ lat: 246, common_pid: 2043 } hitcount: 5099
+{ lat: 246, common_pid: 2039 } hitcount: 5608
+{ lat: 246, common_pid: 2042 } hitcount: 3723
+{ lat: 246, common_pid: 2036 } hitcount: 4738
+{ lat: 247, common_pid: 2042 } hitcount: 312
+{ lat: 247, common_pid: 2043 } hitcount: 2385
+{ lat: 247, common_pid: 2041 } hitcount: 452
+{ lat: 247, common_pid: 2038 } hitcount: 792
+{ lat: 247, common_pid: 2040 } hitcount: 78
+{ lat: 247, common_pid: 2036 } hitcount: 2375
+{ lat: 247, common_pid: 2039 } hitcount: 1834
+{ lat: 247, common_pid: 2037 } hitcount: 2655
+{ lat: 248, common_pid: 2037 } hitcount: 36
+{ lat: 248, common_pid: 2042 } hitcount: 11
+{ lat: 248, common_pid: 2038 } hitcount: 122
+{ lat: 248, common_pid: 2036 } hitcount: 135
+{ lat: 248, common_pid: 2039 } hitcount: 26
+{ lat: 248, common_pid: 2041 } hitcount: 503
+{ lat: 248, common_pid: 2043 } hitcount: 66
+{ lat: 248, common_pid: 2040 } hitcount: 46
+{ lat: 249, common_pid: 2037 } hitcount: 29
+{ lat: 249, common_pid: 2038 } hitcount: 1
+{ lat: 249, common_pid: 2043 } hitcount: 29
+{ lat: 249, common_pid: 2039 } hitcount: 8
+{ lat: 249, common_pid: 2042 } hitcount: 56
+{ lat: 249, common_pid: 2040 } hitcount: 27
+{ lat: 249, common_pid: 2041 } hitcount: 11
+{ lat: 249, common_pid: 2036 } hitcount: 27
+{ lat: 250, common_pid: 2038 } hitcount: 1
+{ lat: 250, common_pid: 2036 } hitcount: 30
+{ lat: 250, common_pid: 2040 } hitcount: 19
+{ lat: 250, common_pid: 2043 } hitcount: 22
+{ lat: 250, common_pid: 2042 } hitcount: 20
+{ lat: 250, common_pid: 2041 } hitcount: 1
+{ lat: 250, common_pid: 2039 } hitcount: 6
+{ lat: 250, common_pid: 2037 } hitcount: 48
+{ lat: 251, common_pid: 2037 } hitcount: 43
+{ lat: 251, common_pid: 2039 } hitcount: 1
+{ lat: 251, common_pid: 2036 } hitcount: 12
+{ lat: 251, common_pid: 2042 } hitcount: 2
+{ lat: 251, common_pid: 2041 } hitcount: 1
+{ lat: 251, common_pid: 2043 } hitcount: 15
+{ lat: 251, common_pid: 2040 } hitcount: 3
+{ lat: 252, common_pid: 2040 } hitcount: 1
+{ lat: 252, common_pid: 2036 } hitcount: 12
+{ lat: 252, common_pid: 2037 } hitcount: 21
+{ lat: 252, common_pid: 2043 } hitcount: 14
+{ lat: 253, common_pid: 2037 } hitcount: 21
+{ lat: 253, common_pid: 2039 } hitcount: 2
+{ lat: 253, common_pid: 2036 } hitcount: 9
+{ lat: 253, common_pid: 2043 } hitcount: 6
+{ lat: 253, common_pid: 2040 } hitcount: 1
+{ lat: 254, common_pid: 2036 } hitcount: 8
+{ lat: 254, common_pid: 2043 } hitcount: 3
+{ lat: 254, common_pid: 2041 } hitcount: 1
+{ lat: 254, common_pid: 2042 } hitcount: 1
+{ lat: 254, common_pid: 2039 } hitcount: 1
+{ lat: 254, common_pid: 2037 } hitcount: 12
+{ lat: 255, common_pid: 2043 } hitcount: 1
+{ lat: 255, common_pid: 2037 } hitcount: 2
+{ lat: 255, common_pid: 2036 } hitcount: 2
+{ lat: 255, common_pid: 2039 } hitcount: 8
+{ lat: 256, common_pid: 2043 } hitcount: 1
+{ lat: 256, common_pid: 2036 } hitcount: 4
+{ lat: 256, common_pid: 2039 } hitcount: 6
+{ lat: 257, common_pid: 2039 } hitcount: 5
+{ lat: 257, common_pid: 2036 } hitcount: 4
+{ lat: 258, common_pid: 2039 } hitcount: 5
+{ lat: 258, common_pid: 2036 } hitcount: 2
+{ lat: 259, common_pid: 2036 } hitcount: 7
+{ lat: 259, common_pid: 2039 } hitcount: 7
+{ lat: 260, common_pid: 2036 } hitcount: 8
+{ lat: 260, common_pid: 2039 } hitcount: 6
+{ lat: 261, common_pid: 2036 } hitcount: 5
+{ lat: 261, common_pid: 2039 } hitcount: 7
+{ lat: 262, common_pid: 2039 } hitcount: 5
+{ lat: 262, common_pid: 2036 } hitcount: 5
+{ lat: 263, common_pid: 2039 } hitcount: 7
+{ lat: 263, common_pid: 2036 } hitcount: 7
+{ lat: 264, common_pid: 2039 } hitcount: 9
+{ lat: 264, common_pid: 2036 } hitcount: 9
+{ lat: 265, common_pid: 2036 } hitcount: 5
+{ lat: 265, common_pid: 2039 } hitcount: 1
+{ lat: 266, common_pid: 2036 } hitcount: 1
+{ lat: 266, common_pid: 2039 } hitcount: 3
+{ lat: 267, common_pid: 2036 } hitcount: 1
+{ lat: 267, common_pid: 2039 } hitcount: 3
+{ lat: 268, common_pid: 2036 } hitcount: 1
+{ lat: 268, common_pid: 2039 } hitcount: 6
+{ lat: 269, common_pid: 2036 } hitcount: 1
+{ lat: 269, common_pid: 2043 } hitcount: 1
+{ lat: 269, common_pid: 2039 } hitcount: 2
+{ lat: 270, common_pid: 2040 } hitcount: 1
+{ lat: 270, common_pid: 2039 } hitcount: 6
+{ lat: 271, common_pid: 2041 } hitcount: 1
+{ lat: 271, common_pid: 2039 } hitcount: 5
+{ lat: 272, common_pid: 2039 } hitcount: 10
+{ lat: 273, common_pid: 2039 } hitcount: 8
+{ lat: 274, common_pid: 2039 } hitcount: 2
+{ lat: 275, common_pid: 2039 } hitcount: 1
+{ lat: 276, common_pid: 2039 } hitcount: 2
+{ lat: 276, common_pid: 2037 } hitcount: 1
+{ lat: 276, common_pid: 2038 } hitcount: 1
+{ lat: 277, common_pid: 2039 } hitcount: 1
+{ lat: 277, common_pid: 2042 } hitcount: 1
+{ lat: 278, common_pid: 2039 } hitcount: 1
+{ lat: 279, common_pid: 2039 } hitcount: 4
+{ lat: 279, common_pid: 2043 } hitcount: 1
+{ lat: 280, common_pid: 2039 } hitcount: 3
+{ lat: 283, common_pid: 2036 } hitcount: 2
+{ lat: 284, common_pid: 2039 } hitcount: 1
+{ lat: 284, common_pid: 2043 } hitcount: 1
+{ lat: 288, common_pid: 2039 } hitcount: 1
+{ lat: 289, common_pid: 2039 } hitcount: 1
+{ lat: 300, common_pid: 2039 } hitcount: 1
+{ lat: 384, common_pid: 2039 } hitcount: 1
+
+Totals:
+ Hits: 67625
+ Entries: 278
+ Dropped: 0
+
+Note, the writes bracket the sleep, so ideally they will all be about 250
+microseconds apart. If you are wondering how several of them can be under
+250 microseconds, that is because of the way cyclictest works: if one
+iteration comes in late, the next one will set the timer to wake up in less
+than 250 microseconds. That is, if an iteration came in 50 microseconds late,
+the next wake up will be at 200 microseconds.
+
+But this could easily have been computed in user space. To make this even
+more interesting, we can mix events that happened in the kernel with the
+trace_marker write in the same histogram.
+
+ # cd /sys/kernel/tracing
+ # echo 'latency u64 lat' > synthetic_events
+ # echo 'hist:keys=pid:ts0=common_timestamp.usecs' > events/sched/sched_waking/trigger
+ # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).latency($lat) if buf == "end"' > events/ftrace/print/trigger
+ # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger
+
+The difference this time is that instead of using the trace_marker write to
+start the latency measurement, the sched_waking event is used, matching the
+common_pid of the trace_marker write with the pid that is being woken by
+sched_waking.
+
+After running cyclictest again with the same parameters, we now have:
+
+ # cat events/synthetic/latency/hist
+# event histogram
+#
+# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
+#
+
+{ lat: 7, common_pid: 2302 } hitcount: 640
+{ lat: 7, common_pid: 2299 } hitcount: 42
+{ lat: 7, common_pid: 2303 } hitcount: 18
+{ lat: 7, common_pid: 2305 } hitcount: 166
+{ lat: 7, common_pid: 2306 } hitcount: 1
+{ lat: 7, common_pid: 2301 } hitcount: 91
+{ lat: 7, common_pid: 2300 } hitcount: 17
+{ lat: 8, common_pid: 2303 } hitcount: 8296
+{ lat: 8, common_pid: 2304 } hitcount: 6864
+{ lat: 8, common_pid: 2305 } hitcount: 9464
+{ lat: 8, common_pid: 2301 } hitcount: 9213
+{ lat: 8, common_pid: 2306 } hitcount: 6246
+{ lat: 8, common_pid: 2302 } hitcount: 8797
+{ lat: 8, common_pid: 2299 } hitcount: 8771
+{ lat: 8, common_pid: 2300 } hitcount: 8119
+{ lat: 9, common_pid: 2305 } hitcount: 1519
+{ lat: 9, common_pid: 2299 } hitcount: 2346
+{ lat: 9, common_pid: 2303 } hitcount: 2841
+{ lat: 9, common_pid: 2301 } hitcount: 1846
+{ lat: 9, common_pid: 2304 } hitcount: 3861
+{ lat: 9, common_pid: 2302 } hitcount: 1210
+{ lat: 9, common_pid: 2300 } hitcount: 2762
+{ lat: 9, common_pid: 2306 } hitcount: 4247
+{ lat: 10, common_pid: 2299 } hitcount: 16
+{ lat: 10, common_pid: 2306 } hitcount: 333
+{ lat: 10, common_pid: 2303 } hitcount: 16
+{ lat: 10, common_pid: 2304 } hitcount: 168
+{ lat: 10, common_pid: 2302 } hitcount: 240
+{ lat: 10, common_pid: 2301 } hitcount: 28
+{ lat: 10, common_pid: 2300 } hitcount: 95
+{ lat: 10, common_pid: 2305 } hitcount: 18
+{ lat: 11, common_pid: 2303 } hitcount: 5
+{ lat: 11, common_pid: 2305 } hitcount: 8
+{ lat: 11, common_pid: 2306 } hitcount: 221
+{ lat: 11, common_pid: 2302 } hitcount: 76
+{ lat: 11, common_pid: 2304 } hitcount: 26
+{ lat: 11, common_pid: 2300 } hitcount: 125
+{ lat: 11, common_pid: 2299 } hitcount: 2
+{ lat: 12, common_pid: 2305 } hitcount: 3
+{ lat: 12, common_pid: 2300 } hitcount: 6
+{ lat: 12, common_pid: 2306 } hitcount: 90
+{ lat: 12, common_pid: 2302 } hitcount: 4
+{ lat: 12, common_pid: 2303 } hitcount: 1
+{ lat: 12, common_pid: 2304 } hitcount: 122
+{ lat: 13, common_pid: 2300 } hitcount: 12
+{ lat: 13, common_pid: 2301 } hitcount: 1
+{ lat: 13, common_pid: 2306 } hitcount: 32
+{ lat: 13, common_pid: 2302 } hitcount: 5
+{ lat: 13, common_pid: 2305 } hitcount: 1
+{ lat: 13, common_pid: 2303 } hitcount: 1
+{ lat: 13, common_pid: 2304 } hitcount: 61
+{ lat: 14, common_pid: 2303 } hitcount: 4
+{ lat: 14, common_pid: 2306 } hitcount: 5
+{ lat: 14, common_pid: 2305 } hitcount: 4
+{ lat: 14, common_pid: 2304 } hitcount: 62
+{ lat: 14, common_pid: 2302 } hitcount: 19
+{ lat: 14, common_pid: 2300 } hitcount: 33
+{ lat: 14, common_pid: 2299 } hitcount: 1
+{ lat: 14, common_pid: 2301 } hitcount: 4
+{ lat: 15, common_pid: 2305 } hitcount: 1
+{ lat: 15, common_pid: 2302 } hitcount: 25
+{ lat: 15, common_pid: 2300 } hitcount: 11
+{ lat: 15, common_pid: 2299 } hitcount: 5
+{ lat: 15, common_pid: 2301 } hitcount: 1
+{ lat: 15, common_pid: 2304 } hitcount: 8
+{ lat: 15, common_pid: 2303 } hitcount: 1
+{ lat: 15, common_pid: 2306 } hitcount: 6
+{ lat: 16, common_pid: 2302 } hitcount: 31
+{ lat: 16, common_pid: 2306 } hitcount: 3
+{ lat: 16, common_pid: 2300 } hitcount: 5
+{ lat: 17, common_pid: 2302 } hitcount: 6
+{ lat: 17, common_pid: 2303 } hitcount: 1
+{ lat: 18, common_pid: 2304 } hitcount: 1
+{ lat: 18, common_pid: 2302 } hitcount: 8
+{ lat: 18, common_pid: 2299 } hitcount: 1
+{ lat: 18, common_pid: 2301 } hitcount: 1
+{ lat: 19, common_pid: 2303 } hitcount: 4
+{ lat: 19, common_pid: 2304 } hitcount: 5
+{ lat: 19, common_pid: 2302 } hitcount: 4
+{ lat: 19, common_pid: 2299 } hitcount: 3
+{ lat: 19, common_pid: 2306 } hitcount: 1
+{ lat: 19, common_pid: 2300 } hitcount: 4
+{ lat: 19, common_pid: 2305 } hitcount: 5
+{ lat: 20, common_pid: 2299 } hitcount: 2
+{ lat: 20, common_pid: 2302 } hitcount: 3
+{ lat: 20, common_pid: 2305 } hitcount: 1
+{ lat: 20, common_pid: 2300 } hitcount: 2
+{ lat: 20, common_pid: 2301 } hitcount: 2
+{ lat: 20, common_pid: 2303 } hitcount: 3
+{ lat: 21, common_pid: 2305 } hitcount: 1
+{ lat: 21, common_pid: 2299 } hitcount: 5
+{ lat: 21, common_pid: 2303 } hitcount: 4
+{ lat: 21, common_pid: 2302 } hitcount: 7
+{ lat: 21, common_pid: 2300 } hitcount: 1
+{ lat: 21, common_pid: 2301 } hitcount: 5
+{ lat: 21, common_pid: 2304 } hitcount: 2
+{ lat: 22, common_pid: 2302 } hitcount: 5
+{ lat: 22, common_pid: 2303 } hitcount: 1
+{ lat: 22, common_pid: 2306 } hitcount: 3
+{ lat: 22, common_pid: 2301 } hitcount: 2
+{ lat: 22, common_pid: 2300 } hitcount: 1
+{ lat: 22, common_pid: 2299 } hitcount: 1
+{ lat: 22, common_pid: 2305 } hitcount: 1
+{ lat: 22, common_pid: 2304 } hitcount: 1
+{ lat: 23, common_pid: 2299 } hitcount: 1
+{ lat: 23, common_pid: 2306 } hitcount: 2
+{ lat: 23, common_pid: 2302 } hitcount: 6
+{ lat: 24, common_pid: 2302 } hitcount: 3
+{ lat: 24, common_pid: 2300 } hitcount: 1
+{ lat: 24, common_pid: 2306 } hitcount: 2
+{ lat: 24, common_pid: 2305 } hitcount: 1
+{ lat: 24, common_pid: 2299 } hitcount: 1
+{ lat: 25, common_pid: 2300 } hitcount: 1
+{ lat: 25, common_pid: 2302 } hitcount: 4
+{ lat: 26, common_pid: 2302 } hitcount: 2
+{ lat: 27, common_pid: 2305 } hitcount: 1
+{ lat: 27, common_pid: 2300 } hitcount: 1
+{ lat: 27, common_pid: 2302 } hitcount: 3
+{ lat: 28, common_pid: 2306 } hitcount: 1
+{ lat: 28, common_pid: 2302 } hitcount: 4
+{ lat: 29, common_pid: 2302 } hitcount: 1
+{ lat: 29, common_pid: 2300 } hitcount: 2
+{ lat: 29, common_pid: 2306 } hitcount: 1
+{ lat: 29, common_pid: 2304 } hitcount: 1
+{ lat: 30, common_pid: 2302 } hitcount: 4
+{ lat: 31, common_pid: 2302 } hitcount: 6
+{ lat: 32, common_pid: 2302 } hitcount: 1
+{ lat: 33, common_pid: 2299 } hitcount: 1
+{ lat: 33, common_pid: 2302 } hitcount: 3
+{ lat: 34, common_pid: 2302 } hitcount: 2
+{ lat: 35, common_pid: 2302 } hitcount: 1
+{ lat: 35, common_pid: 2304 } hitcount: 1
+{ lat: 36, common_pid: 2302 } hitcount: 4
+{ lat: 37, common_pid: 2302 } hitcount: 6
+{ lat: 38, common_pid: 2302 } hitcount: 2
+{ lat: 39, common_pid: 2302 } hitcount: 2
+{ lat: 39, common_pid: 2304 } hitcount: 1
+{ lat: 40, common_pid: 2304 } hitcount: 2
+{ lat: 40, common_pid: 2302 } hitcount: 5
+{ lat: 41, common_pid: 2304 } hitcount: 1
+{ lat: 41, common_pid: 2302 } hitcount: 8
+{ lat: 42, common_pid: 2302 } hitcount: 6
+{ lat: 42, common_pid: 2304 } hitcount: 1
+{ lat: 43, common_pid: 2302 } hitcount: 3
+{ lat: 43, common_pid: 2304 } hitcount: 4
+{ lat: 44, common_pid: 2302 } hitcount: 6
+{ lat: 45, common_pid: 2302 } hitcount: 5
+{ lat: 46, common_pid: 2302 } hitcount: 5
+{ lat: 47, common_pid: 2302 } hitcount: 7
+{ lat: 48, common_pid: 2301 } hitcount: 1
+{ lat: 48, common_pid: 2302 } hitcount: 9
+{ lat: 49, common_pid: 2302 } hitcount: 3
+{ lat: 50, common_pid: 2302 } hitcount: 1
+{ lat: 50, common_pid: 2301 } hitcount: 1
+{ lat: 51, common_pid: 2302 } hitcount: 2
+{ lat: 51, common_pid: 2301 } hitcount: 1
+{ lat: 61, common_pid: 2302 } hitcount: 1
+{ lat: 110, common_pid: 2302 } hitcount: 1
+
+Totals:
+ Hits: 89565
+ Entries: 158
+ Dropped: 0
+
+This doesn't tell us anything about how late cyclictest may have
+woken up, but it does show us a nice histogram of how long it took from
+the time that cyclictest was woken to the time it made it into user space.
diff --git a/Documentation/trace/intel_th.rst b/Documentation/trace/intel_th.rst
index 990f13265178..19e2d633f3c7 100644
--- a/Documentation/trace/intel_th.rst
+++ b/Documentation/trace/intel_th.rst
@@ -38,7 +38,7 @@ description is at Documentation/ABI/testing/sysfs-bus-intel_th-devices-gth.
STH registers an stm class device, through which it provides interface
to userspace and kernelspace software trace sources. See
-Documentation/trace/stm.txt for more information on that.
+Documentation/trace/stm.rst for more information on that.
MSU can be configured to collect trace data into a system memory
buffer, which can later on be read from its device nodes via read() or
diff --git a/Documentation/trace/tracepoint-analysis.rst b/Documentation/trace/tracepoint-analysis.rst
index a4d3ff2e5efb..716326b9f152 100644
--- a/Documentation/trace/tracepoint-analysis.rst
+++ b/Documentation/trace/tracepoint-analysis.rst
@@ -6,7 +6,7 @@ Notes on Analysing Behaviour Using Events and Tracepoints
1. Introduction
===============
-Tracepoints (see Documentation/trace/tracepoints.txt) can be used without
+Tracepoints (see Documentation/trace/tracepoints.rst) can be used without
creating custom kernel modules to register probe functions using the event
tracing infrastructure.
@@ -55,7 +55,7 @@ simple case of::
3.1 System-Wide Event Enabling
------------------------------
-See Documentation/trace/events.txt for a proper description on how events
+See Documentation/trace/events.rst for a proper description on how events
can be enabled system-wide. A short example of enabling all events related
to page allocation would look something like::
@@ -112,7 +112,7 @@ at that point.
3.4 Local Event Enabling
------------------------
-Documentation/trace/ftrace.txt describes how to enable events on a per-thread
+Documentation/trace/ftrace.rst describes how to enable events on a per-thread
basis using set_ftrace_pid.
3.5 Local Event Enablement with PCL
@@ -137,7 +137,7 @@ basis using PCL such as follows.
4. Event Filtering
==================
-Documentation/trace/ftrace.txt covers in-depth how to filter events in
+Documentation/trace/ftrace.rst covers in-depth how to filter events in
ftrace. Obviously using grep and awk of trace_pipe is an option as well
as any script reading trace_pipe.