NAME
lttng-concepts - LTTng concepts

DESCRIPTION
This manual page documents the concepts of LTTng.

The concepts of LTTng are:

•Instrumentation point, event rule, and event
•Trigger
•Recording session
•Tracing domain
•Channel and ring buffer
•Recording event rule and event
record
INSTRUMENTATION POINT, EVENT RULE, AND EVENT
An instrumentation point is a point, within a piece of software, which, when executed, creates an LTTng event.

An event rule is a set of conditions to match a set of events. When an event rule matches an event, LTTng emits the event, therefore attempting to execute one or more actions.

In practice, LTTng avoids creating an event when it knows, thanks to properties which are independent from the event payload and current context, that it would never emit one. Those properties are:

•The instrumentation point type (see the “Instrumentation point types” section below).
•The instrumentation point name.
•The instrumentation point log
level.
•For a recording event rule (see the
“RECORDING EVENT RULE AND EVENT RECORD” section below):
•The status of the rule itself.
•The status of the channel (see the
“CHANNEL AND RING BUFFER” section below).
•The activity of the recording session
(started or stopped; see the “RECORDING SESSION” section
below).
•Whether or not the process for which
LTTng would create the event is allowed to record events (see
lttng-track(1)).
As of LTTng 2.13.9, there are two places where you can find an event rule:

Recording event rule
A specific type of event rule of which the action is to record the matched event as an event record.
See the “RECORDING EVENT RULE AND EVENT RECORD” section below.
Create or enable a recording event rule with the lttng-enable-event(1)
command.
List the recording event rules of a specific recording session and/or channel
with the lttng-list(1) and lttng-status(1) commands.
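For example, the following commands (my-session is a placeholder name) create a recording event rule which matches the Linux kernel sched_switch tracepoint in the current recording session, then list the recording event rules of my-session:

$ lttng enable-event --kernel sched_switch
$ lttng list my-session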
“Event rule matches” trigger condition (since LTTng 2.13)
When the event rule of the trigger condition
matches an event, LTTng can execute user-defined actions such as sending an
LTTng notification, starting a recording session, and more.
See lttng-add-trigger(1) and lttng-event-rule(7).
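For example, the following sketch of a command (my_app:* is a placeholder event name pattern) adds a trigger of which the condition is satisfied when a user space event rule matches an event, and of which the action is to send a notification:

$ lttng add-trigger --condition=event-rule-matches \
      --type=user --name='my_app:*' \
      --action=notify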
For LTTng to emit an event E, E must satisfy all the conditions of an event rule ER, that is:

•The instrumentation point from which LTTng creates E has a specific type.
See the “Instrumentation point types” section below.
•A pattern matches the name
of E while another pattern doesn’t.
•The log level of the instrumentation
point from which LTTng creates E is at least as severe as some
value, or is exactly some value.
•The fields of the payload
of E and the current context fields satisfy a filter
expression.
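For example, the following command (hello:* and my_field are placeholder names) creates a recording event rule which combines a name pattern, a log level condition, and a filter expression:

$ lttng enable-event --userspace 'hello:*' --loglevel=INFO \
      --filter='my_field == 23'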
Instrumentation point types
As of LTTng 2.13.9, the available instrumentation point types are, depending on the tracing domain (see the “TRACING DOMAIN” section below):

Linux kernel

LTTng tracepoint
A statically defined point in the source code of the kernel image or of a kernel module using the LTTng-modules macros.
List the available Linux kernel tracepoints with lttng list --kernel. See lttng-list(1) to learn more.

Linux kernel system call
Entry, exit, or both of a Linux kernel system call.
List the available Linux kernel system call instrumentation points with lttng list --kernel --syscall. See lttng-list(1) to learn more.

Linux kprobe
A single probe dynamically placed in the compiled kernel code.
When you create such an instrumentation point, you set its memory address or symbol name.

Linux user space probe
A single probe dynamically placed at the entry of a compiled user space application/library function through the kernel.
When you create such an instrumentation point, you set:

With the ELF method
Its application/library path and its symbol name.

With the USDT method
Its application/library path, its provider name, and its probe name.
“USDT” stands for SystemTap User-level Statically Defined Tracing, a DTrace-style marker.

As of LTTng 2.13.9, LTTng only supports USDT probes which are NOT reference-counted.

Linux kretprobe
Entry, exit, or both of a Linux kernel function.
When you create such an instrumentation point, you set the memory address or symbol name of its function.

User space

LTTng tracepoint
A statically defined point in the source code of a C/C++ application/library using the LTTng-UST macros.
List the available user space tracepoints with lttng list --userspace. See lttng-list(1) to learn more.

java.util.logging, Apache log4j, and Python

Java or Python logging statement
A method call on a Java or Python logger attached to an LTTng-UST handler.
List the available Java and Python loggers with lttng list --jul, lttng list --log4j, and lttng list --python. See lttng-list(1) to learn more.
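For example, you can list the available instrumentation points per tracing domain, then create a recording event rule from a dynamically placed Linux kprobe (do_exit is only an example symbol; my_do_exit is an arbitrary event record name):

$ lttng list --kernel
$ lttng list --kernel --syscall
$ lttng list --userspace
$ lttng enable-event --kernel --probe=do_exit my_do_exit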
TRIGGER
A trigger associates a condition to one or more actions. When the condition of a trigger is satisfied, LTTng attempts to execute its actions.

As of LTTng 2.13.9, the available trigger conditions are:

•The consumed buffer size of a given recording session (see the “RECORDING SESSION” section below) becomes greater than some value.
•The buffer usage of a given channel
(see the “CHANNEL AND RING BUFFER” section below) becomes
greater than some value.
•The buffer usage of a given channel
becomes less than some value.
•There’s an ongoing recording
session rotation (see the “Recording session rotation” section
below).
•A recording session rotation becomes
completed.
•An event rule matches an event.
As of LTTng 2.13.9, this is the only available condition when you add a
trigger with the lttng-add-trigger(1) command. The other ones are
available through the liblttng-ctl C API.
The available trigger actions are:

•Send a notification to a user application.
•Start a given recording session, like
lttng-start(1) would do.
•Stop a given recording session, like
lttng-stop(1) would do.
•Archive the current trace chunk of a
given recording session (rotate), like lttng-rotate(1) would do.
•Take a snapshot of a given recording
session, like lttng-snapshot(1) would do.
A trigger belongs to a session daemon (see lttng-sessiond(8)), not to a specific recording session. For a given session daemon, each Unix user has its own, private triggers. The root Unix user may, however, for the root session daemon:

•Add a trigger as another Unix user.
•List all the triggers, regardless of
their owner.
•Remove a trigger which belongs to
another Unix user.
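For example, assuming a root session daemon and a trigger named my-trigger owned by the Unix user with UID 1000, the root user could run something like:

# lttng list-triggers
# lttng remove-trigger --owner-uid=1000 my-trigger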
RECORDING SESSION
A recording session (named “tracing session” prior to LTTng 2.13) is a stateful dialogue between you and a session daemon (see lttng-sessiond(8)) for everything related to event recording.

A recording session:

•Has its own name, unique for a given session daemon.
•Has its own set of trace files, if
any.
•Has its own state of activity (started
or stopped).
An active recording session is an implicit recording event rule condition (see
the “RECORDING EVENT RULE AND EVENT RECORD” section
below).
•Has its own mode (local, network
streaming, snapshot, or live).
See the “Recording session modes” section below to learn
more.
•Has its own channels (see the
“CHANNEL AND RING BUFFER” section below) to which are attached
their own recording event rules.
•Has its own process attribute
inclusion sets (see lttng-track(1)).
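For example, a minimal local recording workflow (my-session and hello:* are placeholder names) looks like this:

$ lttng create my-session
$ lttng enable-event --userspace 'hello:*'
$ lttng start
$ lttng stop
$ lttng destroy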
Current recording session
When you run the lttng-create(1) command, LTTng creates the $LTTNG_HOME/.lttngrc file if it doesn’t exist ($LTTNG_HOME defaults to $HOME). This file stores the name of the current recording session, which most lttng(1) commands use when you don’t specify one. Set the current recording session with the lttng-set-session(1) command.

Recording session modes
LTTng offers four recording session modes:

Local mode
Write the trace data to the local file system.
Network streaming mode
Send the trace data over the network to a
listening relay daemon (see lttng-relayd(8)).
Snapshot mode
Only write the trace data to the local file
system or send it to a listening relay daemon ( lttng-relayd(8)) when
LTTng takes a snapshot.
LTTng forces all the channels (see the “CHANNEL AND RING BUFFER” section below) of such a recording session to be configured to be snapshot-ready.
LTTng takes a snapshot of such a recording session when:

•You run the lttng-snapshot(1) command.
•LTTng executes a snapshot-session trigger action (see the “TRIGGER” section above).

Live mode
Send the trace data over the network to a listening relay daemon (see lttng-relayd(8)) for live reading.
An LTTng live reader (for example, babeltrace2(1)) can connect to the
same relay daemon to receive trace data while the recording session is
active.
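For example, the following sketches create one recording session per mode (the session names, output path, and relay daemon address are placeholders):

$ lttng create my-local-session --output=/tmp/my-traces
$ lttng create my-streaming-session --set-url=net://192.0.2.4
$ lttng create my-snapshot-session --snapshot
$ lttng create my-live-session --live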
Recording session rotation
A recording session rotation is the action of archiving the current trace chunk of the recording session to the file system.

The current trace chunk of a given recording session includes:

•The stream files which LTTng already wrote to the file system, and which are not part of a previously archived trace chunk, since the most recent event amongst:
•The first time the recording session
was started, either with the lttng-start(1) command or with a
start-session trigger action (see the “TRIGGER” section
above).
•The last rotation, performed with:
•An lttng-rotate(1)
command.
•A rotation schedule previously set
with lttng-enable-rotation(1).
•An executed rotate-session
trigger action (see the “TRIGGER” section above).
•The content of all the non-flushed
sub-buffers of the channels of the recording session.
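For example, you can rotate the current trace chunk of a recording session immediately or set a rotation schedule (my-session is a placeholder name; the period and size values are arbitrary):

$ lttng rotate my-session
$ lttng enable-rotation --session=my-session --timer=30s
$ lttng enable-rotation --session=my-session --size=256M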
Trace chunk archive naming
A trace chunk archive is a subdirectory of the archives subdirectory within the output directory of a recording session (see the --output option of the lttng-create(1) command and of lttng-relayd(8)).

A trace chunk archive is:

•A self-contained LTTng trace.
•A member of a set of trace chunk
archives which form the complete trace of a recording session.
The name of a trace chunk archive has the following form:

archives/BEGIN-END-ID

with:

BEGIN
Date and time of the beginning of the trace
chunk archive with the ISO 8601-compatible
YYYYmmddTHHMMSS±HHMM form, where YYYYmmdd is the date and
HHMMSS±HHMM is the time with the time zone offset from UTC.
Example: 20171119T152407-0500
END
Date and time of the end of the trace chunk
archive with the ISO 8601-compatible YYYYmmddTHHMMSS±HHMM
form, where YYYYmmdd is the date and HHMMSS±HHMM is the
time with the time zone offset from UTC.
Example: 20180118T152407+0930
ID
Unique numeric identifier of the trace chunk
within its recording session.
Example: archives/20171119T152407-0500-20171119T151422-0500-3
TRACING DOMAIN
A tracing domain identifies a type of LTTng tracer.

As of LTTng 2.13.9, the available tracing domains and their corresponding command-line options are:

Tracing domain          | “Event rule matches” trigger condition option | Option for other CLI commands
Linux kernel            | --type option starts with kernel:             | --kernel
User space              | --type option starts with user:               | --userspace
java.util.logging (JUL) | --type option starts with jul:                | --jul
Apache log4j            | --type option starts with log4j:              | --log4j
Python                  | --type option starts with python:             | --python
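For example, the tracing domain determines which option you pass on the command line (the event and logger names are placeholders):

$ lttng enable-event --kernel sched_switch
$ lttng enable-event --userspace hello:world
$ lttng enable-event --jul my_logger
$ lttng add-trigger --condition=event-rule-matches \
      --type=kernel:tracepoint --name=sched_switch --action=notify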
CHANNEL AND RING BUFFER
A channel is an object which is responsible for a set of ring buffers.

A channel has the following properties:

•Its buffering scheme.
See the “Buffering scheme” section below.
•What to do when there’s no
space left for a new event record because all sub-buffers are full.
See the “Event record loss mode” section below.
•The size of each ring buffer and how
many sub-buffers a ring buffer has.
See the “Sub-buffer size and count” section below.
•The size of each trace file LTTng
writes for this channel and the maximum count of trace files.
See the “Maximum trace file size and count” section below.
•The periods of its read, switch, and
monitor timers.
See the “Timers” section below.
•For a Linux kernel channel: its output
type ( mmap(2) or splice(2)).
See the --output option of the lttng-enable-channel(1)
command.
•For a user space channel: the value of
its blocking timeout.
See the --blocking-timeout option of the lttng-enable-channel(1)
command.
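For example, the following sketch creates a user space channel, setting a few of these properties, then attaches a recording event rule to it (the names and values are arbitrary):

$ lttng enable-channel --userspace --subbuf-size=2M --num-subbuf=4 \
      --tracefile-size=100M --tracefile-count=5 my-channel
$ lttng enable-event --userspace --channel=my-channel 'hello:*'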
Buffering scheme
A channel has at least one ring buffer per CPU. LTTng always records an event to the ring buffer dedicated to the CPU which emits it.

The buffering scheme of a user space channel determines which processes share a set of ring buffers:

Per-user buffering (--buffers-uid option of the lttng-enable-channel(1) command)
Allocate one set of ring buffers (one per CPU) shared by all the instrumented processes of:
If your Unix user is root
Each Unix user.
Otherwise
Your Unix user.

Per-process buffering (--buffers-pid option of the lttng-enable-channel(1) command)
Allocate one set of ring buffers (one per CPU) for each instrumented process of:
If your Unix user is root
All Unix users.
Otherwise
Your Unix user.
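For example, for a user space channel, you can select the buffering scheme explicitly (per-user buffering is the default; the channel names are arbitrary):

$ lttng enable-channel --userspace --buffers-uid my-per-user-channel
$ lttng enable-channel --userspace --buffers-pid my-per-process-channel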
Event record loss mode
When LTTng emits an event, LTTng can record it to a specific, available sub-buffer within the ring buffers of specific channels. When there’s no space left in a sub-buffer, the tracer marks it as consumable and another, available sub-buffer starts receiving the following event records. An LTTng consumer daemon eventually consumes the marked sub-buffer, which returns to the available state.

When there’s no space left for a new event record because all the sub-buffers of the channel are full, the event record loss mode of the channel determines what to do:

Discard mode
Drop the newest event records until a sub-buffer becomes available.
This is the only available mode when you specify a blocking timeout.
With this mode, LTTng increments a count of lost event records when an event
record is lost and saves this count to the trace. A trace reader can use the
saved discarded event record count of the trace to decide whether or not to
perform some analysis even if trace data is known to be missing.
Overwrite mode
Clear the sub-buffer containing the oldest
event records and start writing the newest event records there.
This mode is sometimes called flight recorder mode because it’s
similar to a flight recorder
<https://en.wikipedia.org/wiki/Flight_recorder>: always keep a fixed
amount of the latest data. It’s also similar to the roll mode of an
oscilloscope.
Since LTTng 2.8, with this mode, LTTng writes to a given sub-buffer its
sequence number within its data stream. With a local, network streaming, or
live recording session (see the “Recording session modes”
section above), a trace reader can use such sequence numbers to report lost
packets. A trace reader can use the saved discarded sub-buffer (packet) count
of the trace to decide whether or not to perform some analysis even if trace
data is known to be missing.
With this mode, LTTng doesn’t write to the trace the exact number of lost
event records in the lost sub-buffers.
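For example, the following sketch creates one user space channel per event record loss mode (discard mode is the default; the channel names are arbitrary):

$ lttng enable-channel --userspace --discard my-discard-channel
$ lttng enable-channel --userspace --overwrite my-flight-recorder-channel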
Sub-buffer size and count
A channel has one or more ring buffers for each CPU of the target system. Set the size of the sub-buffers and their count per ring buffer with the --subbuf-size and --num-subbuf options of the lttng-enable-channel(1) command.

How to configure the sub-buffer size and count depends on your situation:

High event throughput
In general, prefer large sub-buffers to lower the risk of losing event records.
Having larger sub-buffers also ensures a lower sub-buffer switching frequency
(see the “Timers” section below).
The sub-buffer count is only meaningful if you create the channel in overwrite
mode (see the “Event record loss mode” section above): in this
case, if LTTng overwrites a sub-buffer, then the other sub-buffers are left
unaltered.
Low event throughput
In general, prefer smaller sub-buffers since
the risk of losing event records is low.
Because LTTng emits events less frequently, the sub-buffer switching frequency
should remain low and therefore the overhead of the tracer shouldn’t be
a problem.
Low memory system
If your target system has a low memory limit,
prefer fewer first, then smaller sub-buffers.
Even if the system is limited in memory, you want to keep the sub-buffers as
large as possible to avoid a high sub-buffer switching frequency.
For example, if you allocate a total of 8 MiB for the ring buffers of a channel in overwrite mode:

Two sub-buffers of 4 MiB each
Expect a very low sub-buffer switching frequency, but if LTTng ever needs to overwrite a sub-buffer, half of the event records so far (4 MiB) are definitely lost.
Eight sub-buffers of 1 MiB each
Expect four times the tracer overhead of the configuration above, but if LTTng needs to overwrite a sub-buffer, only an eighth of the event records so far (1 MiB) is definitely lost.
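For example, the two configurations above could look like this for a Linux kernel channel in overwrite mode (the channel names are arbitrary):

$ lttng enable-channel --kernel --overwrite --subbuf-size=4M --num-subbuf=2 my-few-big-subbufs
$ lttng enable-channel --kernel --overwrite --subbuf-size=1M --num-subbuf=8 my-many-small-subbufs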
Maximum trace file size and count
By default, trace files can grow as large as needed. Set the maximum size of each trace file and the maximum trace file count of a channel with the --tracefile-size and --tracefile-count options of the lttng-enable-channel(1) command.

Timers
Each channel can have up to three optional timers:

Switch timer
When this timer expires, a sub-buffer switch
happens: for each ring buffer of the channel, LTTng marks the current
sub-buffer as consumable and switches to an available one to record the next
events.
A switch timer is useful to ensure that LTTng consumes and commits trace data to
trace files or to a distant relay daemon ( lttng-relayd(8))
periodically in case of a low event throughput.
Such a timer is also convenient when you use large sub-buffers (see the
“Sub-buffer size and count” section above) to cope with a
sporadic high event throughput, even if the throughput is otherwise low.
Set the period of the switch timer of a channel, or disable the timer
altogether, with the --switch-timer option of the
lttng-enable-channel(1) command.
Read timer
When this timer expires, LTTng checks for
full, consumable sub-buffers.
By default, the LTTng tracers use an asynchronous message mechanism to signal a
full sub-buffer so that a consumer daemon can consume it.
When such messages must be avoided, for example in real-time applications, use
this timer instead.
Set the period of the read timer of a channel, or disable the timer altogether,
with the --read-timer option of the lttng-enable-channel(1)
command.
Monitor timer
When this timer expires, the consumer daemon samples some channel statistics to evaluate the following trigger conditions:

1.The consumed buffer size of a given recording session becomes greater than some value.
2.The buffer usage of a given channel becomes greater than some value.
3.The buffer usage of a given channel becomes less than some value.

If you disable the monitor timer of a channel C:

•The consumed buffer size value of the recording session of C could be wrong for trigger condition type 1: the consumed buffer size of C won’t be part of the grand total.
•The buffer usage trigger conditions (types 2 and 3) for C will never be satisfied.

See the “TRIGGER” section above to learn more about triggers.

Set the period of the monitor timer of a channel, or disable the timer altogether, with the --monitor-timer option of the lttng-enable-channel(1) command.
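For example, the following sketch sets the three timer periods of a user space channel, assuming the periods are given in microseconds (the values and the channel name are arbitrary):

$ lttng enable-channel --userspace --switch-timer=500000 \
      --read-timer=200000 --monitor-timer=1000000 my-channel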
RECORDING EVENT RULE AND EVENT RECORD
A recording event rule is a specific type of event rule (see the “INSTRUMENTATION POINT, EVENT RULE, AND EVENT” section above) of which the action is to serialize and record the matched event as an event record.

On top of the conditions of the event rule itself, a recording event rule only matches an event if:

•The recording event rule itself is enabled.
A recording event rule is enabled on creation.
•The channel to which the recording
event rule is attached is enabled.
A channel is enabled on creation.
See the “CHANNEL AND RING BUFFER” section above.
•The recording session of the recording
event rule is active (started).
A recording session is inactive (stopped) on creation.
See the “RECORDING SESSION” section above.
•The process for which LTTng creates an
event to match is allowed to record events.
All processes are allowed to record events on recording session creation.
Use the lttng-track(1) and lttng-untrack(1) commands to select
which processes are allowed to record events based on specific process
attributes.
For example:

$ lttng enable-event --userspace hello:world
$ lttng enable-event --userspace hello:world --loglevel=INFO
RESOURCES
•LTTng project website
<https://lttng.org>
•LTTng documentation
<https://lttng.org/docs>
•LTTng bug tracker
<https://bugs.lttng.org>
•Git repositories
<https://git.lttng.org>
•GitHub organization
<https://github.com/lttng>
•Continuous integration
<https://ci.lttng.org/>
•Mailing list
<https://lists.lttng.org/> for support and development:
[email protected]
•IRC channel
<irc://irc.oftc.net/lttng>: #lttng on irc.oftc.net
COPYRIGHT
This program is part of the LTTng-tools project.

THANKS
Special thanks to Michel Dagenais and the DORSAL laboratory <http://www.dorsal.polymtl.ca/> at École Polytechnique de Montréal for the LTTng journey.

SEE ALSO
lttng(1), lttng-relayd(8), lttng-sessiond(8)