mon-netdata.n00tz.net
64.18.99.66  Public Scan

URL: https://mon-netdata.n00tz.net/
Submission: On July 02 via api from US — Scanned from DE

Form analysis 5 forms found in the DOM

<form id="optionsForm1" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="stop_updates_when_focus_is_lost" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="On Focus" data-off="Always" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">On Focus</label><label class="btn btn-danger active toggle-off">Always</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>When to refresh the charts?</strong><br><small>When set to <b>On Focus</b>, the charts will stop being updated if the page / tab does not have the focus of the user. When set to <b>Always</b>, the charts will
              always be refreshed. Set it to <b>On Focus</b> it to lower the CPU requirements of the browser (and extend the battery of laptops and tablets) when this page does not have your focus. Set to <b>Always</b> to work on another window (i.e.
              change the settings of something) and have the charts auto-refresh in this window.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="eliminate_zero_dimensions" type="checkbox" checked="checked" data-toggle="toggle" data-on="Non Zero" data-off="All"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Non Zero</label><label class="btn btn-default active toggle-off">All</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which dimensions to show?</strong><br><small>When set to <b>Non Zero</b>, dimensions that have all their values (within the current view) set to zero will not be transferred from the netdata server (except if
              all dimensions of the chart are zero, in which case this setting does nothing - all dimensions are transferred and shown). When set to <b>All</b>, all dimensions will always be shown. Set it to <b>Non Zero</b> to lower the data
              transferred between netdata and your browser, lower the CPU requirements of your browser (fewer lines to draw) and increase the focus on the legends (fewer entries at the legends).</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="destroy_on_hide" type="checkbox" data-toggle="toggle" data-on="Destroy" data-off="Hide" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Destroy</label><label class="btn btn-default active toggle-off">Hide</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>How to handle hidden charts?</strong><br><small>When set to <b>Destroy</b>, charts that are not in the current viewport of the browser (are above, or below the visible area of the page), will be destroyed and
              re-created if and when they become visible again. When set to <b>Hide</b>, the not-visible charts will be just hidden, to simplify the DOM and speed up your browser. Set it to <b>Destroy</b>, to lower the memory requirements of your
              browser. Set it to <b>Hide</b> for faster restoration of charts on page scrolling.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="async_on_scroll" type="checkbox" data-toggle="toggle" data-on="Async" data-off="Sync" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Async</label><label class="btn btn-default active toggle-off">Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Page scroll handling?</strong><br><small>When set to <b>Sync</b>, charts will be examined for their visibility immediately after scrolling. On slow computers this may impact the smoothness of page scrolling.
              To update the page when scrolling ends, set it to <b>Async</b>. Set it to <b>Sync</b> for immediate chart updates when scrolling. Set it to <b>Async</b> for smoother page scrolling on slower computers.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm2" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="parallel_refresher" type="checkbox" checked="checked" data-toggle="toggle" data-on="Parallel" data-off="Sequential"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Parallel</label><label class="btn btn-default active toggle-off">Sequential</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which chart refresh policy to use?</strong><br><small>When set to <b>parallel</b>, visible charts are refreshed in parallel (all queries are sent to netdata server in parallel) and are rendered
              asynchronously. When set to <b>sequential</b> charts are refreshed one after another. Set it to parallel if your browser can cope with it (most modern browsers do), set it to sequential if you work on an older/slower computer.</small>
          </td>
        </tr>
        <tr class="option-row" id="concurrent_refreshes_row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="concurrent_refreshes" type="checkbox" checked="checked" data-toggle="toggle" data-on="Resync" data-off="Best Effort"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Resync</label><label class="btn btn-default active toggle-off">Best Effort</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Shall we re-sync chart refreshes?</strong><br><small>When set to <b>Resync</b>, the dashboard will attempt to re-synchronize all the charts so that they are refreshed concurrently. When set to
              <b>Best Effort</b>, each chart may be refreshed with a little time difference to the others. Normally, the dashboard starts refreshing them in parallel, but depending on the speed of your computer and the network latencies, charts start
              having a slight time difference. Setting this to <b>Resync</b> will attempt to re-synchronize the charts on every update. Setting it to <b>Best Effort</b> may lower the pressure on your browser and the network.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="sync_selection" type="checkbox" checked="checked" data-toggle="toggle" data-on="Sync" data-off="Don't Sync" data-onstyle="success"
                data-offstyle="danger" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Sync</label><label class="btn btn-danger active toggle-off">Don't Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Sync hover selection on all charts?</strong><br><small>When enabled, a selection on one chart will automatically select the same time on all other visible charts and the legends of all visible charts will be
              updated to show the selected values. When disabled, only the chart getting the user's attention will be selected. Enable it to get better insights of the data. Disable it if you are on a very slow computer that cannot actually do
              it.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm3" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="legend_right" type="checkbox" checked="checked" data-toggle="toggle" data-on="Right" data-off="Below" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Right</label><label class="btn btn-default active toggle-off">Below</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Where do you want to see the legend?</strong><br><small>Netdata can place the legend in two positions: <b>Below</b> charts (the default) or to the <b>Right</b> of
              charts.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="netdata_theme_control" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="Dark" data-off="White" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Dark</label><label class="btn btn-danger active toggle-off">White</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which theme to use?</strong><br><small>Netdata comes with two themes: <b>Dark</b> (the default) and <b>White</b>.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="show_help" type="checkbox" checked="checked" data-toggle="toggle" data-on="Help Me" data-off="No Help" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Help Me</label><label class="btn btn-default active toggle-off">No Help</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Do you need help?</strong><br><small>Netdata can show some help in some areas to help you use the dashboard. If all these balloons bother you, disable them using this
              switch.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="pan_and_zoom_data_padding" type="checkbox" checked="checked" data-toggle="toggle" data-on="Pad" data-off="Don't Pad"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Pad</label><label class="btn btn-default active toggle-off">Don't Pad</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable data padding when panning and zooming?</strong><br><small>When set to <b>Pad</b> the charts will be padded with more data, both before and after the visible area, thus giving the impression the whole
              database is loaded. This padding will happen only after the first pan or zoom operation on the chart (initially all charts have only the visible data). When set to <b>Don't Pad</b> only the visible data will be transferred from the
              netdata server, even after the first pan and zoom operation.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="smooth_plot" type="checkbox" checked="checked" data-toggle="toggle" data-on="Smooth" data-off="Rough" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Smooth</label><label class="btn btn-default active toggle-off">Rough</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable Bézier lines on charts?</strong><br><small>When set to <b>Smooth</b> the charts libraries that support it, will plot smooth curves instead of simple straight lines to connect the points.<br>Keep in
              mind <a href="http://dygraphs.com" target="_blank">dygraphs</a>, the main charting library in netdata dashboards, can only smooth line charts. It cannot smooth area or stacked charts. When set to <b>Rough</b>, this setting can lower the
              CPU resources consumed by your browser.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm4" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td colspan="2" align="center"><small><b>These settings are applied gradually, as charts are updated. To force them, refresh the dashboard now</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 38px;"><input id="units_conversion" type="checkbox" checked="checked" data-toggle="toggle" data-on="Scale Units" data-off="Fixed Units"
                data-onstyle="success" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Scale Units</label><label class="btn btn-default active toggle-off">Fixed Units</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable auto-scaling of select units?</strong><br><small>When set to <b>Scale Units</b> the values shown will dynamically be scaled (e.g. 1000 kilobits will be shown as 1 megabit). Netdata can auto-scale these
              original units: <code>kilobits/s</code>, <code>kilobytes/s</code>, <code>KB/s</code>, <code>KB</code>, <code>MB</code>, and <code>GB</code>. When set to <b>Fixed Units</b> all the values will be rendered using the original units
              maintained by the netdata server.</small></td>
        </tr>
        <tr id="settingsLocaleTempRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="units_temp" type="checkbox" checked="checked" data-toggle="toggle" data-on="Celsius" data-off="Fahrenheit" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Celsius</label><label class="btn btn-default active toggle-off">Fahrenheit</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which units to use for temperatures?</strong><br><small>Set the temperature units of the dashboard.</small></td>
        </tr>
        <tr id="settingsLocaleTimeRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="seconds_as_time" type="checkbox" checked="checked" data-toggle="toggle" data-on="Time" data-off="Seconds" data-onstyle="success"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Time</label><label class="btn btn-default active toggle-off">Seconds</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Convert seconds to time?</strong><br><small>When set to <b>Time</b>, charts that present <code>seconds</code> will show <code>DDd:HH:MM:SS</code>. When set to <b>Seconds</b>, the raw number of seconds will be
              presented.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>


<form action="#"><input class="form-control" id="switchRegistryPersonGUID" placeholder="your personal ID" maxlength="36" autocomplete="off" style="text-align:center;font-size:1.4em"></form>

Text Content

netdata

Real-time performance monitoring, done right!
4145e76070ae
UTC +2 • 02.07.24, 21:09–21:16 (last 7 min)

NETDATA

REAL-TIME PERFORMANCE MONITORING, IN THE GREATEST POSSIBLE DETAIL

Drag charts to pan. Shift + wheel on them to zoom in and out. Double-click on
them to reset. Hover on them too!
system.cpu



SYSTEM OVERVIEW

Overview of the key system metrics.
Used Swap 100.0 % • Disk Read 0.00 MiB/s • Disk Write 0.9 MiB/s • CPU 2.6 % (scale 0.0–100.0) • Net Inbound 0.20 megabits/s • Net Outbound 0.40 megabits/s • Used RAM 16.1 %


CPU


Total CPU utilization (all cores). 100% here means there is no CPU idle time at
all. You can get per core usage at the CPUs section and per application usage at
the Applications Monitoring section.
Keep an eye on iowait (0.24%). If it is constantly high, your disks are a
bottleneck and they slow your system down. An important metric worth
monitoring is softirq (0.0335%). A constantly high percentage of softirq may
indicate network driver issues. The individual metrics can be found in the
kernel documentation.
Total CPU utilization (system.cpu), in percent. Legend at 21:17:10 (Tue, 02 July 2024): user 0.7, system 1.2, nice 0.5, iowait 0.2, softirq 0.1, irq 0.0, steal 0.0, guest 0.0, guest_nice 0.0.


CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
CPU some pressure (system.cpu_some_pressure), in percent; dimensions: some 10, some 60, some 300; sampled at 21:17:08 (Tue, 02 July 2024).


The amount of time some processes have been waiting for CPU time.
CPU some pressure stall time (system.cpu_some_pressure_stall_time), in ms; time 1.28 at 21:17:08 (Tue, 02 July 2024).




LOAD


Current system load, i.e. the number of processes using CPU or waiting for
system resources (usually CPU and disk). The 3 metrics refer to 1, 5 and 15
minute averages. The system calculates this once every 5 seconds. For more
information, see the Wikipedia article on load (computing).
system.load
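
The same three averages can be read straight from the kernel. A minimal sketch (Linux only, using the standard /proc/loadavg interface rather than netdata's own collector):

# Read the 1-, 5- and 15-minute load averages from /proc/loadavg (Linux).
# The remaining fields (runnable/total entities, last PID) are ignored here.
def load_averages(path="/proc/loadavg"):
    with open(path) as f:
        fields = f.read().split()
    return tuple(float(x) for x in fields[:3])

if __name__ == "__main__":
    one, five, fifteen = load_averages()
    print(f"load1={one} load5={five} load15={fifteen}")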



DISK


Total Disk I/O, for all physical disks. You can get detailed information about
each disk at the Disks section and per-application disk usage at the
Applications Monitoring section. Physical disks are all those listed in
/sys/block but not present in /sys/devices/virtual/block.
system.io

Memory paged from/to disk. This is usually the total disk I/O of the system.
system.pgpgio

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
system.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
system.io_full_pressure_stall_time



RAM


System Random Access Memory (i.e. physical memory) usage.
system.ram

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_some_pressure

The amount of time some processes have been waiting due to memory congestion.
system.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
system.memory_full_pressure_stall_time



SWAP


System swap memory usage. Swap space is used when the amount of physical memory
(RAM) is full. When the system needs more memory resources and the RAM is full,
inactive pages in memory are moved to the swap space (usually a disk, a disk
partition or a file).
system.swap


System swap I/O.

In - pages the system has swapped in from disk to RAM. Out - pages the system
has swapped out from RAM to disk.
system.swapio
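
As a rough illustration of where these numbers come from, the kernel's cumulative pswpin/pswpout counters in /proc/vmstat can be sampled and turned into a rate. The page-size conversion and the 1-second interval below are illustrative assumptions, not netdata's exact collection method:

import os
import time

# Swap-in / swap-out page counters from /proc/vmstat (Linux). They are
# cumulative, so two samples divided by the interval give a rate.
def swap_counters(path="/proc/vmstat"):
    counters = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

if __name__ == "__main__":
    page_kib = os.sysconf("SC_PAGE_SIZE") // 1024   # pages -> KiB
    a = swap_counters()
    time.sleep(1)
    b = swap_counters()
    print("swap in :", (b["pswpin"] - a["pswpin"]) * page_kib, "KiB/s")
    print("swap out:", (b["pswpout"] - a["pswpout"]) * page_kib, "KiB/s")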



NETWORK


Total bandwidth of all physical network interfaces. This does not include lo,
VPNs, network bridges, IFB devices, bond interfaces, etc. Only the bandwidth of
physical network interfaces is aggregated. Physical interfaces are all those
listed in /proc/net/dev but not present in /sys/devices/virtual/net.
system.net
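
A small sketch of the physical-interface rule described above (listed in /proc/net/dev, absent from /sys/devices/virtual/net); it only approximates how such a filter can be implemented:

import os

# Physical interfaces: present in /proc/net/dev but not in
# /sys/devices/virtual/net (which holds lo, bridges, bonds, tunnels, ...).
def physical_interfaces():
    with open("/proc/net/dev") as f:
        lines = f.readlines()[2:]              # skip the two header lines
    all_ifaces = [line.split(":")[0].strip() for line in lines]
    virtual = set(os.listdir("/sys/devices/virtual/net"))
    return [iface for iface in all_ifaces if iface not in virtual]

if __name__ == "__main__":
    print(physical_interfaces())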

Total IP traffic in the system.
system.ip

Total IPv6 Traffic.
system.ipv6



PROCESSES



System processes.

Running - running or ready to run (runnable). Blocked - currently blocked,
waiting for I/O to complete.

system.processes


The number of processes in different states.

Running - Process using the CPU at a particular moment. Sleeping
(uninterruptible) - Process will wake when a waited-upon resource becomes
available or after a time-out occurs during that wait. Mostly used by device
drivers waiting for disk or network I/O. Sleeping (interruptible) - Process is
waiting either for a particular time slot or for a particular event to occur.
Zombie - Process that has completed its execution, released the system
resources, but its entry is not removed from the process table. Usually occurs
in child processes when the parent process still needs to read its child’s exit
status. A process that stays a zombie for a long time is generally an error and
causes a system PID space leak. Stopped - Process is suspended from proceeding
further due to STOP or TSTP signals. In this state, a process will not do
anything (not even terminate) until it receives a CONT signal.

system.processes_state

The number of new processes created.
system.forks

The total number of processes in the system.
system.active_processes

Context switching is the switching of the CPU from one process, task or thread
to another. If there are many processes or threads willing to execute and very
few CPU cores available to handle them, the system performs more context
switches to balance the CPU resources among them. The whole process is
computationally intensive. The more the context switches, the slower the system
gets.
system.ctxt
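
For illustration, the cumulative counter behind such a chart is the ctxt line of /proc/stat; sampling it twice gives a switches-per-second rate (the 1-second interval is an arbitrary choice):

import time

# The kernel exposes a cumulative context-switch counter as the "ctxt"
# line of /proc/stat.
def context_switches(path="/proc/stat"):
    with open(path) as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("ctxt line not found")

if __name__ == "__main__":
    before = context_switches()
    time.sleep(1)
    print("context switches/s:", context_switches() - before)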



IDLEJITTER


Idle jitter is calculated by netdata. A thread is spawned that requests to sleep
for a few microseconds. When the system wakes it up, it measures how many
microseconds have passed. The difference between the requested and the actual
duration of the sleep is the idle jitter. This number is useful in real-time
environments, where CPU jitter can affect the quality of the service (like VoIP
media gateways).
system.idlejitter
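
A rough sketch of the measurement idea described above (not netdata's actual idlejitter plugin): request a short sleep and measure how much longer the wake-up really took. The 20 ms request and the 50 samples are arbitrary example values:

import time

REQUESTED_US = 20_000      # request a 20 ms sleep (illustrative value)

# Idle jitter: the difference between the requested and the actual
# duration of a short sleep, in microseconds.
def idle_jitter_us():
    start = time.monotonic()
    time.sleep(REQUESTED_US / 1_000_000)
    elapsed_us = (time.monotonic() - start) * 1_000_000
    return elapsed_us - REQUESTED_US

if __name__ == "__main__":
    samples = [idle_jitter_us() for _ in range(50)]
    print(f"min={min(samples):.0f}us "
          f"avg={sum(samples)/len(samples):.0f}us "
          f"max={max(samples):.0f}us")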



INTERRUPTS

Interrupts are signals sent to the CPU by external devices (normally I/O
devices) or programs (running processes). They tell the CPU to stop its current
activities and execute the appropriate part of the operating system. Interrupt
types are hardware (generated by hardware devices to signal that they need some
attention from the OS), software (generated by programs when they want to
request a system call to be performed by the operating system), and traps
(generated by the CPU itself to indicate that some error or condition occurred
for which assistance from the operating system is needed).

Total number of CPU interrupts. Check system.interrupts, which gives more detail
about each interrupt, and also the CPUs section, where interrupts are analyzed
per CPU core.
system.intr

CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU
core. The last column in /proc/interrupts provides an interrupt description or
the device name that registered the handler for that interrupt.
system.interrupts



SOFTIRQS

Software interrupts (or "softirqs") are one of the oldest deferred-execution
mechanisms in the kernel. Several tasks among those executed by the kernel are
not critical: they can be deferred for a long period of time, if necessary. The
deferrable tasks can execute with all interrupts enabled (softirqs are patterned
after hardware interrupts). Taking them out of the interrupt handler helps keep
kernel response time small.


Total number of software interrupts in the system. At the CPUs section, softirqs
are analyzed per CPU core.

HI - high priority tasklets. TIMER - tasklets related to timer interrupts.
NET_TX, NET_RX - used for network transmit and receive processing. BLOCK -
handles block I/O completion events. IRQ_POLL - used by the IO subsystem to
increase performance (a NAPI like approach for block devices). TASKLET - handles
regular tasklets. SCHED - used by the scheduler to perform load-balancing and
other scheduling tasks. HRTIMER - used for high-resolution timers. RCU -
performs read-copy-update (RCU) processing.

system.softirqs



SOFTNET

Statistics for CPUs SoftIRQs related to network receive work. A breakdown per
CPU core can be found at CPU / softnet statistics. More information about
identifying and troubleshooting network driver related issues can be found at
the Red Hat Enterprise Linux Network Performance Tuning Guide.

Processed - packets processed. Dropped - packets dropped because the network
device backlog was full. Squeezed - number of times the network device budget
was consumed or the time limit was reached, but more work was available.
ReceivedRPS - number of times this CPU has been woken up to process packets via
an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit
has been reached (flow limiting is an optional Receive Packet Steering feature).


system.softnet_stat



ENTROPY


Entropy is a pool of random numbers (/dev/random) that is mainly used in
cryptography. If the pool of entropy gets empty, processes requiring random
numbers may run a lot slower (it depends on the interface each program uses),
waiting for the pool to be replenished. Ideally a system with high entropy
demands should have a hardware device for that purpose (TPM is one such device).
There are also several software-only options you may install, like haveged,
although these are generally useful only in servers.
system.entropy
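
For reference, the kernel exposes its current entropy estimate (in bits) through procfs; a minimal sketch reading it:

# Available entropy estimate for /dev/random, as reported by the kernel.
def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
    with open(path) as f:
        return int(f.read())

if __name__ == "__main__":
    print("entropy pool:", entropy_available(), "bits")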



UPTIME


The amount of time the system has been running, including time spent in suspend.
system.uptime



CLOCK SYNCHRONIZATION

NTP lets you automatically sync your system time with a remote server. This
keeps your machine’s time accurate by syncing with servers that are known to
have accurate times.


The system clock synchronization state as provided by the ntp_adjtime() system
call. An unsynchronized clock may be the result of synchronization issues with
the NTP daemon or a hardware clock fault. It can take several minutes (usually
up to 17) before the NTP daemon selects a server to synchronize with.

State map: 0 - not synchronized, 1 - synchronized.

system.clock_sync_state


The kernel code can operate in various modes and with various features enabled
or disabled, as selected by the ntp_adjtime() system call. The system clock
status shows the value of the time_status variable in the kernel. The bits of
the variable are used to control these functions and record error conditions as
they exist.

UNSYNC - set/cleared by the caller to indicate clock unsynchronized (e.g., when
no peers are reachable). This flag is usually controlled by an application
program, but the operating system may also set it. CLOCKERR - set/cleared by the
external hardware clock driver to indicate hardware fault.

Status map: 0 - bit unset, 1 - bit set.

system.clock_status

A typical NTP client regularly polls one or more NTP servers. The client must
compute its time offset and round-trip delay. Time offset is the difference in
absolute time between the two clocks.
system.clock_sync_offset



IPC SEMAPHORES

System V semaphores are an inter-process communication (IPC) mechanism. They
allow processes or threads within a process to synchronize their actions. They
are often used to monitor and control the availability of system resources such
as shared memory segments. For details, see svipc(7). To see the host IPC
semaphore information, run ipcs -us. For limits, run ipcs -ls.

Number of allocated System V IPC semaphores. The system-wide limit on the number
of semaphores in all semaphore sets is specified in /proc/sys/kernel/sem file
(2nd field).
system.ipc_semaphores

Number of used System V IPC semaphore arrays (sets). Semaphores support
semaphore sets where each one is a counting semaphore. So when an application
requests semaphores, the kernel releases them in sets. The system-wide limit on
the maximum number of semaphore sets is specified in /proc/sys/kernel/sem file
(4th field).
system.ipc_semaphore_arrays
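
A small sketch that reads the limits mentioned above directly from /proc/sys/kernel/sem (the four fields follow the usual SEMMSL, SEMMNS, SEMOPM, SEMMNI order):

# /proc/sys/kernel/sem holds four limits: SEMMSL (max semaphores per set),
# SEMMNS (system-wide max semaphores, the "2nd field" above), SEMOPM
# (max operations per semop call) and SEMMNI (max sets, the "4th field").
def sem_limits(path="/proc/sys/kernel/sem"):
    with open(path) as f:
        semmsl, semmns, semopm, semmni = map(int, f.read().split())
    return {"SEMMSL": semmsl, "SEMMNS": semmns,
            "SEMOPM": semopm, "SEMMNI": semmni}

if __name__ == "__main__":
    print(sem_limits())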



IPC SHARED MEMORY

System V shared memory is an inter-process communication (IPC) mechanism. It
allows processes to communicate information by sharing a region of memory. It is
the fastest form of inter-process communication available since no kernel
involvement occurs when data is passed between the processes (no copying).
Typically, processes must synchronize their access to a shared memory object,
using, for example, POSIX semaphores. For details, see svipc(7). To see the host
IPC shared memory information, run ipcs -um. For limits, run ipcs -lm.

Number of allocated System V IPC memory segments. The system-wide maximum number
of shared memory segments that can be created is specified in
/proc/sys/kernel/shmmni file.
system.shared_memory_segments

Amount of memory currently used by System V IPC memory segments. The run-time
limit on the maximum shared memory segment size that can be created is specified
in /proc/sys/kernel/shmmax file.
system.shared_memory_bytes
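
A minimal sketch reading the shmmni and shmmax limits mentioned above, plus the current number of segments from /proc/sysvipc/shm (one segment per line after a header line on a standard Linux procfs):

# System V shared memory: shmmni is the maximum number of segments,
# shmmax the maximum size of a single segment in bytes.
def shm_overview():
    def read_int(path):
        with open(path) as f:
            return int(f.read())
    with open("/proc/sysvipc/shm") as f:
        segments = len(f.readlines()) - 1      # first line is a header
    return {
        "shmmni": read_int("/proc/sys/kernel/shmmni"),
        "shmmax": read_int("/proc/sys/kernel/shmmax"),
        "allocated_segments": segments,
    }

if __name__ == "__main__":
    print(shm_overview())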


--------------------------------------------------------------------------------


CPUS

Detailed information for each CPU of the system. A summary of the system for all
CPUs can be found at the System Overview section.



UTILIZATION


cpu.cpu0

cpu.cpu1

cpu.cpu2

cpu.cpu3

cpu.cpu4

cpu.cpu5

cpu.cpu6

cpu.cpu7

cpu.cpu8

cpu.cpu9

cpu.cpu10

cpu.cpu11

cpu.cpu12

cpu.cpu13

cpu.cpu14

cpu.cpu15

cpu.cpu16

cpu.cpu17

cpu.cpu18

cpu.cpu19

cpu.cpu20

cpu.cpu21

cpu.cpu22

cpu.cpu23

cpu.cpu24

cpu.cpu25

cpu.cpu26

cpu.cpu27

cpu.cpu28

cpu.cpu29

cpu.cpu30

cpu.cpu31

cpu.cpu32

cpu.cpu33

cpu.cpu34

cpu.cpu35

cpu.cpu36

cpu.cpu37

cpu.cpu38

cpu.cpu39



INTERRUPTS

Total number of interrupts per CPU. To see the total number for the system check
the interrupts section. The last column in /proc/interrupts provides an
interrupt description or the device name that registered the handler for that
interrupt.

cpu.cpu0_interrupts

cpu.cpu1_interrupts

cpu.cpu2_interrupts

cpu.cpu3_interrupts

cpu.cpu4_interrupts

cpu.cpu5_interrupts

cpu.cpu6_interrupts

cpu.cpu7_interrupts

cpu.cpu8_interrupts

cpu.cpu9_interrupts

cpu.cpu10_interrupts

cpu.cpu11_interrupts

cpu.cpu12_interrupts

cpu.cpu13_interrupts

cpu.cpu14_interrupts

cpu.cpu15_interrupts

cpu.cpu16_interrupts

cpu.cpu17_interrupts

cpu.cpu18_interrupts

cpu.cpu19_interrupts

cpu.cpu20_interrupts

cpu.cpu21_interrupts

cpu.cpu22_interrupts

cpu.cpu23_interrupts

cpu.cpu24_interrupts

cpu.cpu25_interrupts

cpu.cpu26_interrupts

cpu.cpu27_interrupts

cpu.cpu28_interrupts

cpu.cpu29_interrupts

cpu.cpu30_interrupts

cpu.cpu31_interrupts

cpu.cpu32_interrupts

cpu.cpu33_interrupts

cpu.cpu34_interrupts

cpu.cpu35_interrupts

cpu.cpu36_interrupts

cpu.cpu37_interrupts

cpu.cpu38_interrupts

cpu.cpu39_interrupts



SOFTIRQS

Total number of software interrupts per CPU. To see the total number for the
system check the softirqs section.

cpu.cpu0_softirqs

cpu.cpu1_softirqs

cpu.cpu2_softirqs

cpu.cpu3_softirqs

cpu.cpu4_softirqs

cpu.cpu5_softirqs

cpu.cpu6_softirqs

cpu.cpu7_softirqs

cpu.cpu8_softirqs

cpu.cpu9_softirqs

cpu.cpu10_softirqs

cpu.cpu11_softirqs

cpu.cpu12_softirqs

cpu.cpu13_softirqs

cpu.cpu14_softirqs

cpu.cpu15_softirqs

cpu.cpu16_softirqs

cpu.cpu17_softirqs

cpu.cpu18_softirqs

cpu.cpu19_softirqs

cpu.cpu20_softirqs

cpu.cpu21_softirqs

cpu.cpu22_softirqs

cpu.cpu23_softirqs

cpu.cpu24_softirqs

cpu.cpu25_softirqs

cpu.cpu26_softirqs

cpu.cpu27_softirqs

cpu.cpu28_softirqs

cpu.cpu29_softirqs

cpu.cpu30_softirqs

cpu.cpu31_softirqs

cpu.cpu32_softirqs

cpu.cpu33_softirqs

cpu.cpu34_softirqs

cpu.cpu35_softirqs

cpu.cpu36_softirqs

cpu.cpu37_softirqs

cpu.cpu38_softirqs

cpu.cpu39_softirqs



SOFTNET

Statistics for CPUs SoftIRQs related to network receive work. Total for all CPU
cores can be found at System / softnet statistics. More information about
identifying and troubleshooting network driver related issues can be found at
Red Hat Enterprise Linux Network Performance Tuning Guide.

Processed - packets processed. Dropped - packets dropped because the network
device backlog was full. Squeezed - number of times the network device budget
was consumed or the time limit was reached, but more work was available.
ReceivedRPS - number of times this CPU has been woken up to process packets via
an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit
has been reached (flow limiting is an optional Receive Packet Steering feature).


cpu.cpu0_softnet_stat

cpu.cpu1_softnet_stat

cpu.cpu2_softnet_stat

cpu.cpu3_softnet_stat

cpu.cpu4_softnet_stat

cpu.cpu5_softnet_stat

cpu.cpu6_softnet_stat

cpu.cpu7_softnet_stat

cpu.cpu8_softnet_stat

cpu.cpu9_softnet_stat

cpu.cpu10_softnet_stat

cpu.cpu11_softnet_stat

cpu.cpu12_softnet_stat

cpu.cpu13_softnet_stat

cpu.cpu14_softnet_stat

cpu.cpu15_softnet_stat

cpu.cpu16_softnet_stat

cpu.cpu17_softnet_stat

cpu.cpu18_softnet_stat

cpu.cpu19_softnet_stat

cpu.cpu20_softnet_stat

cpu.cpu21_softnet_stat

cpu.cpu22_softnet_stat

cpu.cpu23_softnet_stat

cpu.cpu24_softnet_stat

cpu.cpu25_softnet_stat

cpu.cpu26_softnet_stat

cpu.cpu27_softnet_stat

cpu.cpu28_softnet_stat

cpu.cpu29_softnet_stat

cpu.cpu30_softnet_stat

cpu.cpu31_softnet_stat

cpu.cpu32_softnet_stat

cpu.cpu33_softnet_stat

cpu.cpu34_softnet_stat

cpu.cpu35_softnet_stat

cpu.cpu36_softnet_stat

cpu.cpu37_softnet_stat

cpu.cpu38_softnet_stat

cpu.cpu39_softnet_stat



THROTTLING

CPU throttling is commonly used to automatically slow down the computer when
possible to use less energy and conserve battery.

The number of adjustments made to the clock speed of the CPU based on its core
temperature.
cpu.core_throttling



CPUFREQ


The frequency measures the number of cycles your CPU executes per second.
cpu.cpufreq
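
For illustration, per-core frequencies can be read from the cpufreq interface in sysfs; scaling_cur_freq is the governor's view and may differ slightly from what the chart reports:

import glob

# Current frequency per core from sysfs; scaling_cur_freq is in kHz.
def cpu_frequencies_mhz():
    freqs = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"
    for path in sorted(glob.glob(pattern)):
        cpu = path.split("/")[5]               # e.g. "cpu0"
        with open(path) as f:
            freqs[cpu] = int(f.read()) / 1000.0
    return freqs

if __name__ == "__main__":
    for cpu, mhz in cpu_frequencies_mhz().items():
        print(f"{cpu}: {mhz:.0f} MHz")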



CPUIDLE

Idle States (C-states) are used to save power when the processor is idle.

cpu.cpu0_cpuidle

cpu.cpu1_cpuidle

cpu.cpu2_cpuidle

cpu.cpu3_cpuidle

cpu.cpu4_cpuidle

cpu.cpu5_cpuidle

cpu.cpu6_cpuidle

cpu.cpu7_cpuidle

cpu.cpu8_cpuidle

cpu.cpu9_cpuidle

cpu.cpu10_cpuidle

cpu.cpu11_cpuidle

cpu.cpu12_cpuidle

cpu.cpu13_cpuidle

cpu.cpu14_cpuidle

cpu.cpu15_cpuidle

cpu.cpu16_cpuidle

cpu.cpu17_cpuidle

cpu.cpu18_cpuidle

cpu.cpu19_cpuidle

cpu.cpu20_cpuidle

cpu.cpu21_cpuidle

cpu.cpu22_cpuidle

cpu.cpu23_cpuidle

cpu.cpu24_cpuidle

cpu.cpu25_cpuidle

cpu.cpu26_cpuidle

cpu.cpu27_cpuidle

cpu.cpu28_cpuidle

cpu.cpu29_cpuidle

cpu.cpu30_cpuidle

cpu.cpu31_cpuidle

cpu.cpu32_cpuidle

cpu.cpu33_cpuidle

cpu.cpu34_cpuidle

cpu.cpu35_cpuidle

cpu.cpu36_cpuidle

cpu.cpu37_cpuidle

cpu.cpu38_cpuidle

cpu.cpu39_cpuidle


--------------------------------------------------------------------------------


MEMORY

Detailed information about the memory management of the system.



SYSTEM


Available Memory is estimated by the kernel as the amount of RAM that can be
used by userspace processes without causing swapping.
mem.available
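
A minimal sketch reading that estimate, the MemAvailable field of /proc/meminfo (present on kernels since 3.14):

# The kernel's own estimate of memory available to userspace without swapping.
def mem_available_mib(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) / 1024    # value is in kB
    raise RuntimeError("MemAvailable not found (kernel too old?)")

if __name__ == "__main__":
    print(f"available: {mem_available_mib():.1f} MiB")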

The number of processes killed by Out of Memory Killer. The kernel's OOM killer
is summoned when the system runs short of free memory and is unable to proceed
without killing one or more processes. It tries to pick the process whose demise
will free the most memory while causing the least misery for users of the
system. This counter also includes processes within containers that have
exceeded the memory limit.
mem.oom_kill

Committed Memory is the sum of all memory which has been allocated by
processes.
mem.committed


A page fault is a type of interrupt, called trap, raised by computer hardware
when a running program accesses a memory page that is mapped into the virtual
address space, but not actually loaded into main memory.



Minor - the page is loaded in memory at the time the fault is generated, but is
not marked in the memory management unit as being loaded in memory. Major -
generated when the system needs to load the memory page from disk or swap
memory.



mem.pgfaults
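
As an illustration of the minor/major split, the cumulative pgfault and pgmajfault counters in /proc/vmstat can be sampled; minor faults are the difference between the two (the 1-second interval is an arbitrary choice):

import time

# pgfault counts all page faults, pgmajfault only the major ones,
# so minor = pgfault - pgmajfault.
def fault_counters(path="/proc/vmstat"):
    wanted = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            if key in ("pgfault", "pgmajfault"):
                wanted[key] = int(value)
    return wanted

if __name__ == "__main__":
    a = fault_counters()
    time.sleep(1)
    b = fault_counters()
    major = b["pgmajfault"] - a["pgmajfault"]
    minor = (b["pgfault"] - a["pgfault"]) - major
    print(f"minor faults/s: {minor}  major faults/s: {major}")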



KERNEL


Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
mem.writeback


The total amount of memory being used by the kernel.

Slab - used by the kernel to cache data structures for its own use. KernelStack
- allocated for each task done by the kernel. PageTables - dedicated to the
lowest level of page tables (A page table is used to turn a virtual address into
a physical memory address). VmallocUsed - being used as virtual address space.
Percpu - allocated to the per-CPU allocator used to back per-CPU allocations
(excludes the cost of metadata). When you create a per-CPU variable, each
processor on the system gets its own copy of that variable.

mem.kernel



SLAB



Slab memory statistics.



Reclaimable - amount of memory which the kernel can reuse. Unreclaimable - can
not be reused even when the kernel is lacking memory.

mem.slab



HUGEPAGES

Hugepages is a feature that allows the kernel to utilize the multiple page size
capabilities of modern hardware architectures. The kernel creates multiple pages
of virtual memory, mapped from both physical RAM and swap. There is a mechanism
in the CPU architecture called "Translation Lookaside Buffers" (TLB) to manage
the mapping of virtual memory pages to actual physical memory addresses. The TLB
is a limited hardware resource, so utilizing a large amount of physical memory
with the default page size consumes the TLB and adds processing overhead. By
utilizing Huge Pages, the kernel is able to create pages of much larger sizes,
each page consuming a single resource in the TLB. Huge Pages are pinned to
physical RAM and cannot be swapped/paged out.

Transparent HugePages (THP) is backing virtual memory with huge pages,
supporting automatic promotion and demotion of page sizes. It works for all
applications for anonymous memory mappings and tmpfs/shmem.
mem.transparent_hugepages

mem.thp_faults

mem.thp_file

mem.thp_collapse

mem.thp_split

mem.thp_compact



NUMA

Non-Uniform Memory Access (NUMA) is a hierarchical memory design in which memory
access time depends on locality. Under NUMA, a processor can access its own
local memory faster than non-local memory (memory local to another processor or
memory shared between processors). The individual metrics are described in the
Linux kernel documentation.


NUMA balancing statistics.

Local - pages successfully allocated on this node, by a process on this node.
Foreign - pages initially intended for this node that were allocated to another
node instead. Interleave - interleave policy pages successfully allocated to
this node. Other - pages allocated on this node, by a process on another node.
PteUpdates - base pages that were marked for NUMA hinting faults. HugePteUpdates
- transparent huge pages that were marked for NUMA hinting faults. In
combination with pte_updates, the total address space that was marked can be
calculated. HintFaults - NUMA hinting faults that were trapped. HintFaultsLocal
- hinting faults that were to local nodes. In combination with HintFaults, the
percentage of local versus remote faults can be calculated. A high percentage of
local hinting faults indicates that the workload is closer to being converged.
PagesMigrated - pages were migrated because they were misplaced. As migration is
a copying operation, it contributes the largest part of the overhead created by
NUMA balancing.

mem.numa

mem.node0

mem.node1



ECC

ECC memory is a type of computer data storage that uses an error correction code
(ECC) to detect and correct n-bit data corruption which occurs in memory.
Typically, ECC memory maintains a memory system immune to single-bit errors: the
data that is read from each word is always the same as the data that had been
written to it, even if one of the bits actually stored has been flipped to the
wrong state.

Memory errors can be classified into two types: soft errors, which randomly
corrupt bits but do not leave physical damage (they are transient, not
repeatable, and can be caused by electrical or magnetic interference), and hard
errors, which corrupt bits in a repeatable manner because of a physical/hardware
defect or an environmental problem.


The number of correctable (single-bit) ECC errors. These errors do not affect
the normal operation of the system because they are still being corrected.
Periodic correctable errors may indicate that one of the memory modules is
slowly failing.
mem.ecc_ce

The number of uncorrectable (multi-bit) ECC errors. An uncorrectable error is a
fatal issue that will typically lead to an OS crash.
mem.ecc_ue



FRAGMENTATION

These charts show whether the kernel will compact memory or direct reclaim to
satisfy a high-order allocation. The extfrag/extfrag_index file in debugfs shows
what the fragmentation index for each order is in each zone in the system. Values
tending towards 0 imply allocations would fail due to lack of memory, values
towards 1000 imply failures are due to fragmentation, and -1 implies that the
allocation will succeed as long as watermarks are met.
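
A small sketch reading that debugfs file (requires root and a mounted debugfs; the parsing assumes the usual "Node N, zone NAME v0 v1 ..." line layout, with one value per allocation order):

# Per-order fragmentation index for each zone, from debugfs.
def fragmentation_index(path="/sys/kernel/debug/extfrag/extfrag_index"):
    zones = {}
    with open(path) as f:
        for line in f:
            tokens = line.split()
            node, zone = tokens[1].rstrip(","), tokens[3]
            zones[(node, zone)] = [float(v) for v in tokens[4:]]
    return zones

if __name__ == "__main__":
    for (node, zone), values in fragmentation_index().items():
        print(f"node {node} zone {zone}: {values}")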

mem.fragmentation_index_node_0_dma

mem.fragmentation_index_node_0_dma32

mem.fragmentation_index_node_0_normal

mem.fragmentation_index_node_1_normal


--------------------------------------------------------------------------------


DISKS

Charts with performance information for all the system disks. Special care has
been given to present disk performance metrics in a way compatible with iostat
-x. netdata by default prevents rendering performance charts for individual
partitions and unmounted virtual disks. Disabled charts can still be enabled by
configuring the relevant settings in the netdata configuration file.



DM-0

disk.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

disk.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

disk_util.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The amount of data transferred to and from disk.
disk.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow future write operations to the involved blocks down.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

Backlog is an indication of the duration of pending disk operations. On every
I/O event the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu
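
For illustration, utilization of a single device can be derived from the io_ticks field of /proc/diskstats (cumulative time spent doing I/Os, in milliseconds); the device name and the 1-second interval below are only examples:

import time

# Field 13 of /proc/diskstats ("io_ticks") accumulates the milliseconds the
# device had at least one I/O in flight; two samples give utilization %.
def io_ticks_ms(device, path="/proc/diskstats"):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[12])
    raise ValueError(f"device {device!r} not found")

if __name__ == "__main__":
    dev, interval = "nvme0n1", 1.0
    busy = io_ticks_ms(dev)
    time.sleep(interval)
    busy = io_ticks_ms(dev) - busy
    print(f"{dev} utilization: {100.0 * busy / (interval * 1000):.1f}%")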

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The average I/O operation size.
disk_avgsz.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The average discard operation size.
disk_ext_avgsz.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple parallel operations, the reported average
service time will be misleading.
disk_svctm.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.dm-0-RBWw51ecumomn3BYMo88R5KaRf22oyhhwXjQCJO9wa5yZeaFKn7JTRJcVHIgdfAu



DM-1

disk.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

disk.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

disk_util.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The amount of data transferred to and from disk.
disk.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow future write operations to the involved blocks down.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

Backlog is an indication of the duration of pending disk operations. On every
I/O event the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The average I/O operation size.
disk_avgsz.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The average discard operation size.
disk_ext_avgsz.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple parallel operations, the reported average
service time will be misleading.
disk_svctm.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.dm-1-RBWw51ecumomn3BYMo88R5KaRf22oyhhIEdXyUMpAX3u5Be03OV8oHIoEChPCFjp



NVME0N1

disk.nvme0n1

disk.nvme0n1

disk_util.nvme0n1

The amount of data transferred to and from disk.
disk.nvme0n1

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.nvme0n1

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.nvme0n1


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.nvme0n1

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.nvme0n1

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.nvme0n1

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.nvme0n1

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.nvme0n1

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.nvme0n1

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.nvme0n1

The average I/O operation size.
disk_avgsz.nvme0n1

The average discard operation size.
disk_ext_avgsz.nvme0n1

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.nvme0n1

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.nvme0n1
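
A toy illustration of the merging idea (a deliberate simplification, not the
block layer's actual scheduler code): two requests whose byte ranges are
contiguous are coalesced into one larger request before being issued.

# Toy model: coalesce requests whose ranges are contiguous.
def merge_adjacent(requests):
    """requests: list of (offset, length) tuples, sorted by offset."""
    merged = []
    for off, length in requests:
        if merged and merged[-1][0] + merged[-1][1] == off:
            prev_off, prev_len = merged.pop()
            merged.append((prev_off, prev_len + length))   # coalesce
        else:
            merged.append((off, length))
    return merged

# Two adjacent 4 KiB reads become a single 8 KiB read.
print(merge_adjacent([(0, 4096), (4096, 4096)]))   # [(0, 8192)]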

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.nvme0n1

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.nvme0n1

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.nvme0n1



NVME1N1

disk.nvme1n1

disk.nvme1n1

disk_util.nvme1n1

The amount of data transferred to and from disk.
disk.nvme1n1

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.nvme1n1

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.nvme1n1


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.nvme1n1

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.nvme1n1

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.nvme1n1

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.nvme1n1

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.nvme1n1

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.nvme1n1

The average I/O operation size.
disk_avgsz.nvme1n1

The average discard operation size.
disk_ext_avgsz.nvme1n1

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.nvme1n1

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.nvme1n1

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.nvme1n1



NVME2N1

disk.nvme2n1

disk.nvme2n1

disk_util.nvme2n1

The amount of data transferred to and from disk.
disk.nvme2n1

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.nvme2n1

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.nvme2n1


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.nvme2n1

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.nvme2n1

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.nvme2n1

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.nvme2n1

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.nvme2n1

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.nvme2n1

The average I/O operation size.
disk_avgsz.nvme2n1

The average discard operation size.
disk_ext_avgsz.nvme2n1

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.nvme2n1

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.nvme2n1

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.nvme2n1



SDA

disk.sda

disk.sda

disk_util.sda

The amount of data transferred to and from disk.
disk.sda

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sda

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sda


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sda

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sda

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sda

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sda

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sda

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sda

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sda

The average I/O operation size.
disk_avgsz.sda

The average discard operation size.
disk_ext_avgsz.sda

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sda

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sda

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sda

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sda

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sda



SDB

disk.sdb

disk.sdb

disk_util.sdb

The amount of data transferred to and from disk.
disk.sdb

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdb

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdb


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdb

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdb

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdb

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdb

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdb

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdb

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdb

The average I/O operation size.
disk_avgsz.sdb

The average discard operation size.
disk_ext_avgsz.sdb

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdb

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdb

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdb

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdb

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdb



SDC

disk.sdc

disk.sdc

disk_util.sdc

The amount of data transferred to and from disk.
disk.sdc

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdc

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdc


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdc

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdc

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdc

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdc

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdc

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdc

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdc

The average I/O operation size.
disk_avgsz.sdc

The average discard operation size.
disk_ext_avgsz.sdc

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdc

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdc

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdc

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdc

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdc



SDD

disk.sdd

disk.sdd

disk_util.sdd

The amount of data transferred to and from disk.
disk.sdd

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdd

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdd


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdd

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdd

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdd

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdd

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdd

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdd

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdd

The average I/O operation size.
disk_avgsz.sdd

The average discard operation size.
disk_ext_avgsz.sdd

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdd

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdd

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdd

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdd

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdd



SDE

disk.sde

disk.sde

disk_util.sde

The amount of data transferred to and from disk.
disk.sde

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sde

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sde


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sde

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sde

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sde

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sde

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sde

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sde

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sde

The average I/O operation size.
disk_avgsz.sde

The average discard operation size.
disk_ext_avgsz.sde

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sde

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sde

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sde

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sde

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sde



SDF

disk.sdf

disk.sdf

disk_util.sdf

The amount of data transferred to and from disk.
disk.sdf

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdf

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdf


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdf

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdf

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdf

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdf

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdf

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdf

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdf

The average I/O operation size.
disk_avgsz.sdf

The average discard operation size.
disk_ext_avgsz.sdf

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdf

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdf

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdf

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdf

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdf



SDG

disk.sdg

disk.sdg

disk_util.sdg

The amount of data transferred to and from disk.
disk.sdg

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdg

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdg


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdg

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdg

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdg

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdg

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdg

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdg

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdg

The average I/O operation size.
disk_avgsz.sdg

The average discard operation size.
disk_ext_avgsz.sdg

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdg

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdg

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdg

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdg

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdg



SDH

disk.sdh

disk.sdh

disk_util.sdh

The amount of data transferred to and from disk.
disk.sdh

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdh

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdh


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdh

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdh

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdh

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdh

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdh

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdh

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdh

The average I/O operation size.
disk_avgsz.sdh

The average discard operation size.
disk_ext_avgsz.sdh

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdh

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdh

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdh

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdh

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdh



SDI

disk.sdi

disk.sdi

disk_util.sdi

The amount of data transferred to and from disk.
disk.sdi

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdi

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdi


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdi

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdi

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdi

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdi

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdi

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdi

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdi

The average I/O operation size.
disk_avgsz.sdi

The average discard operation size.
disk_ext_avgsz.sdi

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdi

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdi

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdi

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdi

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdi



SDJ

disk.sdj

disk.sdj

disk_util.sdj

The amount of data transferred to and from disk.
disk.sdj

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdj

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdj


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdj

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdj

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last
update of this field by the number of pending operations. While not accurate,
this metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdj

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdj

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdj

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdj

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdj

The average I/O operation size.
disk_avgsz.sdj

The average discard operation size.
disk_ext_avgsz.sdj

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdj

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sdj

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdj

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdj

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdj



SDK

disk.sdk

disk.sdk

disk_util.sdk

The amount of data transferred to and from disk.
disk.sdk

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sdk

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdk


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdk

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdk

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdk

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdk

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdk

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdk

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdk

The average I/O operation size.
disk_avgsz.sdk

The average discard operation size.
disk_ext_avgsz.sdk

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdk

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being given
to the disk.
disk_mops.sdk

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdk

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdk

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdk



SDL

The amount of data transferred to and from disk.
disk.sdl

The amount of discarded data that is no longer in use by a mounted file system.
disk_ext.sdl

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdl


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdl

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdl

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdl

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdl

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdl

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdl

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdl

The average I/O operation size.
disk_avgsz.sdl

The average discard operation size.
disk_ext_avgsz.sdl

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdl

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being given
to the disk.
disk_mops.sdl

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdl

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdl

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdl



SDM

The amount of data transferred to and from disk.
disk.sdm

The amount of discarded data that is no longer in use by a mounted file system.
disk_ext.sdm

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sdm


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sdm

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sdm

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sdm

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sdm

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sdm

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sdm

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sdm

The average I/O operation size.
disk_avgsz.sdm

The average discard operation size.
disk_ext_avgsz.sdm

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple operations in parallel, the reported average
service time will be misleading.
disk_svctm.sdm

The number of merged disk operations. The system is able to merge adjacent I/O
operations; for example, two 4KB reads can become one 8KB read before being given
to the disk.
disk_mops.sdm

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sdm

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sdm

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sdm



/


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._
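
Both quantities can be inspected with a standard statvfs() call. The sketch
below is a hedged illustration, not how this dashboard collects the data; it
shows the space and inode counters, including the root-reserved gap mentioned
above.

    import os

    def fs_usage(path="/"):
        # Space and inode usage via statvfs. f_bfree/f_ffree include the
        # root-reserved portion, f_bavail/f_favail do not - that gap is the
        # "reserved for root" space mentioned above.
        st = os.statvfs(path)
        return {
            "total_bytes": st.f_blocks * st.f_frsize,
            "avail_bytes": st.f_bavail * st.f_frsize,
            "root_reserved_bytes": (st.f_bfree - st.f_bavail) * st.f_frsize,
            "inodes_total": st.f_files,
            "inodes_free": st.f_favail,
        }

    print(fs_usage("/"))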



/DEV


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev



/DEV/SHM


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev_shm

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev_shm


--------------------------------------------------------------------------------


ZFS CACHE

Performance metrics of the ZFS ARC and L2ARC. The following charts visualize all
metrics reported by arcstat.py and arc_summary.py.



SIZE



The size of the ARC.

Arcsz - actual size. Target - target size that the ARC is attempting to maintain
(adaptive). Min - minimum size limit. When the ARC is asked to shrink, it will
stop shrinking at this value. Max - maximum size limit.

zfs.arc_size



ACCESSES



The number of read requests.

ARC - all prefetch and demand requests. Demand - triggered by an application
request. Prefetch - triggered by the prefetch mechanism, not directly from an
application request. Metadata - metadata read requests. L2 - L2ARC read
requests.

zfs.reads



EFFICIENCY



MRU and MFU cache hit rate.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.actual_hits

zfs.actual_hits_rate

The size of MRU (most recently used) and MFU (most frequently used) cache.
zfs.arc_size_breakdown


Hit rate of the ARC read requests.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.hits

zfs.hits_rate
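
On Linux with OpenZFS, these counters are exported through
/proc/spl/kstat/zfs/arcstats. The following sketch, an illustration under that
assumption rather than the dashboard's own collector, computes an overall hit
ratio from the lifetime hits and misses counters.

    def arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        # Parse the OpenZFS kstat file into a {counter: value} dict.
        stats = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3 and parts[2].isdigit():
                    stats[parts[0]] = int(parts[2])
        return stats

    s = arcstats()
    total = max(1, s["hits"] + s["misses"])
    print(f"ARC size: {s['size']} bytes, lifetime hit ratio: {100 * s['hits'] / total:.1f}%")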


Hit rate of the ARC data and metadata demand read requests. Demand requests are
triggered by an application request.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.dhits

zfs.dhits_rate


Hit rate of the ARC data demand read requests. Demand requests are triggered by
an application request.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.



zfs.demand_data_hits

zfs.demand_data_hits_rate


Hit rate of the ARC data prefetch read requests. Prefetch requests are triggered
by the prefetch mechanism, not directly from an application request.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.prefetch_data_hits

zfs.prefetch_data_hits_rate


Hit rate of the ARC data and metadata prefetch read requests. Prefetch requests
are triggered by the prefetch mechanism, not directly from an application
request.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.phits

zfs.phits_rate


Hit rate of the ARC metadata read requests.

Hits - a data block was in the ARC DRAM cache and returned. Misses - a data
block was not in the ARC DRAM cache. It will be read from the L2ARC cache
devices (if available and the data is cached on them) or the pool disks.

zfs.mhits

zfs.mhits_rate

MRU (most recently used) and MFU (most frequently used) cache list hits. MRU and
MFU lists contain metadata for requested blocks which are cached. Ghost lists
contain metadata of the evicted pages on disk.
zfs.list_hits



OPERATIONS



Eviction and insertion operation statistics.

EvictSkip - skipped data eviction operations. Deleted - old data is evicted
(deleted) from the cache. MutexMiss - an attempt to get hash or data block mutex
when it is locked during eviction. HashCollisions - occurs when two distinct
data block numbers have the same hash value.

zfs.important_ops


Memory operation statistics.

Direct - synchronous memory reclaim. Data is evicted from the ARC and free slabs
reaped. Throttled - number of times that ZFS had to limit the ARC growth. A
constantly increasing value can indicate excessive pressure to evict data from
the ARC. Indirect - asynchronous memory reclaim. It reaps free slabs from the
ARC cache.

zfs.memory_ops



HASHES



Data Virtual Address (DVA) hash table element statistics.

Current - current number of elements. Max - maximum number of elements seen.

zfs.hash_elements


Data Virtual Address (DVA) hash table chain statistics. A chain is formed when
two or more distinct data block numbers have the same hash value.

Current - current number of chains. Max - longest length seen for a chain. If
the value is high, performance may degrade as the hash locks are held longer
while the chains are walked.

zfs.hash_chains


--------------------------------------------------------------------------------


ZFS POOLS

State of ZFS pools.



FAST


ZFS pool state. The overall health of a pool, as reported by zpool status, is
determined by the aggregate state of all devices within the pool. For a
description of the states, see the ZFS documentation.
zfspool.state_fast



STORAGE


ZFS pool state. The overall health of a pool, as reported by zpool status, is
determined by the aggregate state of all devices within the pool. For a
description of the states, see the ZFS documentation.
zfspool.state_storage


--------------------------------------------------------------------------------


NETWORKING STACK

Metrics for the networking stack of the system. These metrics are collected from
/proc/net/netstat or attaching kprobes to kernel functions, apply to both IPv4
and IPv6 traffic and are related to operation of the kernel networking stack.



TCP



TCP connection aborts.

BadData - happens while the connection is in the FIN_WAIT1 state and the kernel
receives a packet with a sequence number beyond the last one for this connection -
the kernel responds with RST (closes the connection). UserClosed - happens when the
kernel receives data on an already closed connection and responds with RST.
NoMemory - happens when there are too many orphaned sockets (not attached to an
fd) and the kernel has to drop a connection - sometimes it will send an RST,
sometimes it won't. Timeout - happens when a connection times out. Linger -
happens when the kernel killed a socket that was already closed by the
application and lingered around for long enough. Failed - happens when the
kernel attempted to send an RST but failed because there was no memory
available.

ip.tcpconnaborts
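
These abort counters live in the TcpExt block of /proc/net/netstat. A minimal
parsing sketch follows; the TCPAbortOn* names follow the kernel's TcpExt
naming, the values are lifetime totals (a chart like this one plots the
per-interval difference), and the exact mapping to the dimensions above is an
assumption.

    def tcpext(path="/proc/net/netstat"):
        # The TcpExt block spans two lines: one with counter names, one with values.
        with open(path) as f:
            lines = f.readlines()
        for i, line in enumerate(lines[:-1]):
            if line.startswith("TcpExt:") and lines[i + 1].startswith("TcpExt:"):
                names = line.split()[1:]
                values = [int(v) for v in lines[i + 1].split()[1:]]
                return dict(zip(names, values))
        return {}

    ext = tcpext()
    for name in ("TCPAbortOnData", "TCPAbortOnClose", "TCPAbortOnMemory",
                 "TCPAbortOnTimeout", "TCPAbortOnLinger", "TCPAbortFailed"):
        print(name, ext.get(name, 0))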


TCP prevents out-of-order packets by either sequencing them in the correct order
or by requesting the retransmission of out-of-order packets.

Timestamp - detected re-ordering using the timestamp option. SACK - detected
re-ordering using Selective Acknowledgment algorithm. FACK - detected
re-ordering using Forward Acknowledgment algorithm. Reno - detected re-ordering
using Fast Retransmit algorithm.

ip.tcpreorders


TCP maintains an out-of-order queue to keep the out-of-order packets in the TCP
communication.

InQueue - the TCP layer receives an out-of-order packet and has enough memory to
queue it. Dropped - the TCP layer receives an out-of-order packet but does not
have enough memory, so drops it. Merged - the received out-of-order packet
overlaps with the previous packet; the overlapping part is dropped. All these
packets are also counted in InQueue. Pruned - packets dropped from the
out-of-order queue because of a socket buffer overrun.

ip.tcpofo



BROADCAST

In computer networking, broadcasting refers to transmitting a packet that will
be received by every device on the network. In practice, the scope of the
broadcast is limited to a broadcast domain.

Total broadcast traffic in the system.
ip.bcast

Total transferred broadcast packets in the system.
ip.bcastpkts



ECN

Explicit Congestion Notification (ECN) is an extension to the IP and to the TCP
that allows end-to-end notification of network congestion without dropping
packets. ECN is an optional feature that may be used between two ECN-enabled
endpoints when the underlying network infrastructure also supports it.


Total number of received IP packets with ECN bits set in the system.

CEP - congestion encountered. NoECTP - non ECN-capable transport. ECTP0 and
ECTP1 - ECN capable transport.

ip.ecnpkts


--------------------------------------------------------------------------------


IPV4 NETWORKING

Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4) is
the fourth version of the Internet Protocol (IP). It is one of the core
protocols of standards-based internetworking methods in the Internet. IPv4 is a
connectionless protocol for use on packet-switched networks. It operates on a
best effort delivery model, in that it does not guarantee delivery, nor does it
assure proper sequencing or avoidance of duplicate delivery. These aspects,
including data integrity, are addressed by an upper layer transport protocol,
such as the Transmission Control Protocol (TCP).



SOCKETS


The total number of used sockets for all address families in this system.
ipv4.sockstat_sockets



PACKETS



IPv4 packets statistics for this host.

Received - packets received by the IP layer. This counter will be increased even
if the packet is dropped later. Sent - packets sent via the IP layer, for both
unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as IP
Gateways, this counter will include only those packets which were Source-Routed
and the Source-Route option processing was successful. Delivered - packets
delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so on.

ipv4.packets
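
These dimensions correspond roughly to the Ip counters in /proc/net/snmp
(InReceives, OutRequests, ForwDatagrams, InDelivers); that mapping is an
assumption here. The sketch below, an illustration rather than the dashboard's
own collector, turns two samples of those lifetime counters into per-second
rates.

    import time

    def ip_counters(path="/proc/net/snmp"):
        # The Ip block spans two adjacent lines: counter names, then values.
        with open(path) as f:
            lines = f.readlines()
        for i, line in enumerate(lines[:-1]):
            if line.startswith("Ip:") and lines[i + 1].startswith("Ip:"):
                names = line.split()[1:]
                values = [int(v) for v in lines[i + 1].split()[1:]]
                return dict(zip(names, values))
        return {}

    a = ip_counters()
    time.sleep(1)
    b = ip_counters()
    for name in ("InReceives", "OutRequests", "ForwDatagrams", "InDelivers"):
        print(name, b[name] - a[name], "per second")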



ICMP



The number of transferred IPv4 ICMP messages.

Received, Sent - ICMP messages which the host received and attempted to send.
Both these counters include errors.

ipv4.icmp


The number of IPv4 ICMP errors.

InErrors - received ICMP messages but determined as having ICMP-specific errors,
e.g. bad ICMP checksums, bad length, etc. OutErrors - ICMP messages which this
host did not send due to problems discovered within ICMP such as a lack of
buffers. This counter does not include errors discovered outside the ICMP layer
such as the inability of IP to route the resultant datagram. InCsumErrors -
received ICMP messages with bad checksum.

ipv4.icmp_errors

The number of transferred IPv4 ICMP control messages.
ipv4.icmpmsg



TCP


The number of TCP connections for which the current state is either ESTABLISHED
or CLOSE-WAIT. This is a snapshot of the established connections at the time of
measurement (i.e. a connection established and a connection disconnected within
the same iteration will not affect this metric).
ipv4.tcpsock


The number of TCP sockets in the system in certain states.

Alloc - in any TCP state. Orphan - no longer attached to a socket descriptor in
any user processes, but for which the kernel is still required to maintain state
in order to complete the transport protocol. InUse - in any TCP state, excluding
TIME-WAIT and CLOSED. TimeWait - in the TIME-WAIT state.

ipv4.sockstat_tcp_sockets
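
These socket-state figures come from the TCP line of /proc/net/sockstat. A
small parsing sketch follows; it illustrates that file's format and is not the
dashboard's collector.

    def sockstat_tcp(path="/proc/net/sockstat"):
        # The TCP line looks like: "TCP: inuse 5 orphan 0 tw 12 alloc 20 mem 3".
        with open(path) as f:
            for line in f:
                if line.startswith("TCP:"):
                    parts = line.split()[1:]
                    return dict(zip(parts[0::2], (int(v) for v in parts[1::2])))
        return {}

    tcp = sockstat_tcp()
    print("alloc:", tcp.get("alloc"), "orphan:", tcp.get("orphan"),
          "inuse:", tcp.get("inuse"), "timewait:", tcp.get("tw"))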


The number of packets transferred by the TCP layer.

Received - received packets, including those received in error, such as checksum
errors, invalid TCP headers, and so on. Sent - sent packets, excluding
retransmitted packets, but including SYN, ACK, and RST packets.

ipv4.tcppackets


TCP connection statistics.

Active - number of outgoing TCP connections attempted by this host. Passive -
number of incoming TCP connections accepted by this host.

ipv4.tcpopens


TCP errors.

InErrs - TCP segments received in error (including header too small, checksum
errors, sequence errors, bad packets - for both IPv4 and IPv6). InCsumErrors -
TCP segments received with checksum errors (for both IPv4 and IPv6). RetransSegs
- TCP segments retransmitted.

ipv4.tcperrors


TCP handshake statistics.

EstabResets - established connections resets (i.e. connections that made a
direct transition from ESTABLISHED or CLOSE_WAIT to CLOSED). OutRsts - TCP
segments sent, with the RST flag set (for both IPv4 and IPv6). AttemptFails -
number of times TCP connections made a direct transition from either SYN_SENT or
SYN_RECV to CLOSED, plus the number of times TCP connections made a direct
transition from the SYN_RECV to LISTEN. SynRetrans - shows retries for new
outbound TCP connections, which can indicate general connectivity issues or
backlog on the remote host.

ipv4.tcphandshake

The amount of memory used by allocated TCP sockets.
ipv4.sockstat_tcp_mem



UDP


The number of used UDP sockets.
ipv4.sockstat_udp_sockets

The number of transferred UDP packets.
ipv4.udppackets


The number of errors encountered while transferring UDP packets.

RcvbufErrors - receive buffer is full. SndbufErrors - send buffer is full, no
kernel memory available, or the IP layer reported an error when trying to send
the packet and no error queue has been set up. InErrors - an aggregated counter
for all errors, excluding NoPorts. NoPorts - no application is listening at the
destination port. InCsumErrors - a UDP checksum failure is detected.
IgnoredMulti - ignored multicast packets.
ipv4.udperrors

The amount of memory used by allocated UDP sockets.
ipv4.sockstat_udp_mem


--------------------------------------------------------------------------------


IPV6 NETWORKING

Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6) is
the most recent version of the Internet Protocol (IP), the communications
protocol that provides an identification and location system for computers on
networks and routes traffic across the Internet. IPv6 was developed by the
Internet Engineering Task Force (IETF) to deal with the long-anticipated problem
of IPv4 address exhaustion. IPv6 is intended to replace IPv4.



PACKETS



IPv6 packet statistics for this host.

Received - packets received by the IP layer. This counter will be increased even
if the packet is dropped later. Sent - packets sent via the IP layer, for both
unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as IP
Gateways, this counter will include only those packets which were Source-Routed
and the Source-Route option processing was successful. Delivers - packets
delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so on.

ipv6.packets



ERRORS



The number of discarded IPv6 packets.

InDiscards, OutDiscards - packets which were chosen to be discarded even though
no errors had been detected to prevent their being deliverable to a higher-layer
protocol. InHdrErrors - errors in IP headers, including bad checksums, version
number mismatch, other format errors, time-to-live exceeded, etc. InAddrErrors -
invalid IP address or the destination IP address is not a local address and IP
forwarding is not enabled. InUnknownProtos - unknown or unsupported protocol.
InTooBigErrors - the size exceeded the link MTU. InTruncatedPkts - packet frame
did not carry enough data. InNoRoutes - no route could be found while
forwarding. OutNoRoutes - no route could be found for packets generated by this
host.

ipv6.errors



TCP6


The number of TCP sockets in any state, excluding TIME-WAIT and CLOSED.
ipv6.sockstat6_tcp_sockets


--------------------------------------------------------------------------------


NETWORK INTERFACES

Performance metrics for network interfaces.

Netdata retrieves this data by reading the /proc/net/dev file and the
/sys/class/net/ directory.
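
A minimal sketch of reading both sources is shown below; it is an illustration
only, not Netdata's plugin, and the interface name eno1 is simply one of the
interfaces on this host.

    from pathlib import Path

    def interface_info(iface="eno1"):
        info = {}
        # /proc/net/dev: per-interface counters; columns 1-2 are RX bytes/packets,
        # columns 9-10 are TX bytes/packets.
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    info["rx_bytes"], info["rx_packets"] = int(fields[0]), int(fields[1])
                    info["tx_bytes"], info["tx_packets"] = int(fields[8]), int(fields[9])
        # /sys/class/net/<iface>/: operstate, carrier, mtu, speed attributes.
        sysfs = Path("/sys/class/net") / iface
        for attr in ("operstate", "carrier", "mtu", "speed"):
            try:
                info[attr] = (sysfs / attr).read_text().strip()
            except OSError:
                info[attr] = None  # e.g. speed/carrier are unreadable while the link is down
        return info

    print(interface_info("eno1"))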




BR-076FD1DF071D

The amount of traffic transferred by the network interface.
net.br-076fd1df071d

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.br-076fd1df071d


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.br-076fd1df071d

The current physical link state of the interface.
net_carrier.br-076fd1df071d

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.br-076fd1df071d



BR-6C8211DF4B6B

The amount of traffic transferred by the network interface.
net.br-6c8211df4b6b

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.br-6c8211df4b6b


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.br-6c8211df4b6b

The current physical link state of the interface.
net_carrier.br-6c8211df4b6b

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.br-6c8211df4b6b



BR-FE979ABE701A

The amount of traffic transferred by the network interface.
net.br-fe979abe701a

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.br-fe979abe701a


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.br-fe979abe701a

The current physical link state of the interface.
net_carrier.br-fe979abe701a

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.br-fe979abe701a



ENO1

The amount of traffic transferred by the network interface.
net.eno1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.eno1


The number of packets that have been dropped at the network interface level.

Inbound - packets received but not processed, e.g. due to softnet backlog
overflow, bad/unintended VLAN tags, unknown or unregistered protocols, IPv6
frames when the server is not configured for IPv6. Outbound - packets dropped on
their way to transmission, e.g. due to lack of resources.

net_drops.eno1

The interface's latest or current speed that the network adapter negotiated with
the device it is connected to. This does not give the max supported speed of the
NIC.
net_speed.eno1


The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.eno1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.eno1

The current physical link state of the interface.
net_carrier.eno1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.eno1



ENO4

The amount of traffic transferred by the network interface.
net.eno4

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.eno4


The number of packets that have been dropped at the network interface level.

Inbound - packets received but not processed, e.g. due to softnet backlog
overflow, bad/unintended VLAN tags, unknown or unregistered protocols, IPv6
frames when the server is not configured for IPv6. Outbound - packets dropped on
their way to transmission, e.g. due to lack of resources.

net_drops.eno4

The interface's latest or current speed that the network adapter negotiated with
the device it is connected to. This does not give the max supported speed of the
NIC.
net_speed.eno4


The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.eno4


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.eno4

The current physical link state of the interface.
net_carrier.eno4

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.eno4



VETH0D390D0

The amount of traffic transferred by the network interface.
net.veth0d390d0

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth0d390d0


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth0d390d0

The current physical link state of the interface.
net_carrier.veth0d390d0

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth0d390d0



VETH0E83464

The amount of traffic transferred by the network interface.
net.veth0e83464

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth0e83464


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth0e83464

The current physical link state of the interface.
net_carrier.veth0e83464

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth0e83464



VETH01AC962

The amount of traffic transferred by the network interface.
net.veth01ac962

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth01ac962


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth01ac962

The current physical link state of the interface.
net_carrier.veth01ac962

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth01ac962



VETH1BDBEE8

The amount of traffic transferred by the network interface.
net.veth1bdbee8

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth1bdbee8


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth1bdbee8

The current physical link state of the interface.
net_carrier.veth1bdbee8

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth1bdbee8



VETH1CA47D1

The amount of traffic transferred by the network interface.
net.veth1ca47d1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth1ca47d1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth1ca47d1

The current physical link state of the interface.
net_carrier.veth1ca47d1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth1ca47d1



VETH1E183BB

The amount of traffic transferred by the network interface.
net.veth1e183bb

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth1e183bb


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth1e183bb

The current physical link state of the interface.
net_carrier.veth1e183bb

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth1e183bb



VETH2B6A90B

The amount of traffic transferred by the network interface.
net.veth2b6a90b

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth2b6a90b


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth2b6a90b

The current physical link state of the interface.
net_carrier.veth2b6a90b

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth2b6a90b



VETH5D309E1

The amount of traffic transferred by the network interface.
net.veth5d309e1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth5d309e1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth5d309e1

The current physical link state of the interface.
net_carrier.veth5d309e1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth5d309e1



VETH5D4486A

The amount of traffic transferred by the network interface.
net.veth5d4486a

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth5d4486a


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth5d4486a

The current physical link state of the interface.
net_carrier.veth5d4486a

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth5d4486a



VETH5EBD40A

The amount of traffic transferred by the network interface.
net.veth5ebd40a

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth5ebd40a


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth5ebd40a

The current physical link state of the interface.
net_carrier.veth5ebd40a

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth5ebd40a



VETH8BF35E1

The amount of traffic transferred by the network interface.
net.veth8bf35e1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth8bf35e1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth8bf35e1

The current physical link state of the interface.
net_carrier.veth8bf35e1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth8bf35e1



VETH8E0C533

The amount of traffic transferred by the network interface.
net.veth8e0c533

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth8e0c533


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth8e0c533

The current physical link state of the interface.
net_carrier.veth8e0c533

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth8e0c533



VETH8FF63C9

The amount of traffic transferred by the network interface.
net.veth8ff63c9

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth8ff63c9


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth8ff63c9

The current physical link state of the interface.
net_carrier.veth8ff63c9

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth8ff63c9



VETH9F0AC4F

The amount of traffic transferred by the network interface.
net.veth9f0ac4f

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth9f0ac4f


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth9f0ac4f

The current physical link state of the interface.
net_carrier.veth9f0ac4f

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth9f0ac4f



VETH10F52D1

net.veth10f52d1

net.veth10f52d1

The amount of traffic transferred by the network interface.
net.veth10f52d1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth10f52d1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth10f52d1

The current physical link state of the interface.
net_carrier.veth10f52d1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth10f52d1



VETH52CFD3C

net.veth52cfd3c

net.veth52cfd3c

The amount of traffic transferred by the network interface.
net.veth52cfd3c

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth52cfd3c


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth52cfd3c

The current physical link state of the interface.
net_carrier.veth52cfd3c

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth52cfd3c



VETH55B61A9

net.veth55b61a9

net.veth55b61a9

The amount of traffic transferred by the network interface.
net.veth55b61a9

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth55b61a9


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth55b61a9

The current physical link state of the interface.
net_carrier.veth55b61a9

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth55b61a9



VETH77C9AD9

net.veth77c9ad9

net.veth77c9ad9

The amount of traffic transferred by the network interface.
net.veth77c9ad9

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth77c9ad9


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth77c9ad9

The current physical link state of the interface.
net_carrier.veth77c9ad9

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth77c9ad9



VETH83A7EA1

net.veth83a7ea1

net.veth83a7ea1

The amount of traffic transferred by the network interface.
net.veth83a7ea1

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth83a7ea1


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth83a7ea1

The current physical link state of the interface.
net_carrier.veth83a7ea1

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth83a7ea1



VETH97F9022

net.veth97f9022

net.veth97f9022

The amount of traffic transferred by the network interface.
net.veth97f9022

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth97f9022


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth97f9022

The current physical link state of the interface.
net_carrier.veth97f9022

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth97f9022



VETH142E238

net.veth142e238

net.veth142e238

The amount of traffic transferred by the network interface.
net.veth142e238

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth142e238


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth142e238

The current physical link state of the interface.
net_carrier.veth142e238

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth142e238



VETH410B6C2

net.veth410b6c2

net.veth410b6c2

The amount of traffic transferred by the network interface.
net.veth410b6c2

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth410b6c2


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth410b6c2

The current physical link state of the interface.
net_carrier.veth410b6c2

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth410b6c2



VETH3030DF8

net.veth3030df8

net.veth3030df8

The amount of traffic transferred by the network interface.
net.veth3030df8

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth3030df8


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth3030df8

The current physical link state of the interface.
net_carrier.veth3030df8

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth3030df8



VETH7667EB7

net.veth7667eb7

net.veth7667eb7

The amount of traffic transferred by the network interface.
net.veth7667eb7

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth7667eb7


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth7667eb7

The current physical link state of the interface.
net_carrier.veth7667eb7

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth7667eb7



VETH9437141

net.veth9437141

net.veth9437141

The amount of traffic transferred by the network interface.
net.veth9437141

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.veth9437141


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.veth9437141

The current physical link state of the interface.
net_carrier.veth9437141

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.veth9437141



VETHA5EEE49

net.vetha5eee49

net.vetha5eee49

The amount of traffic transferred by the network interface.
net.vetha5eee49

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vetha5eee49


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vetha5eee49

The current physical link state of the interface.
net_carrier.vetha5eee49

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vetha5eee49



VETHA7A7F3F

net.vetha7a7f3f

net.vetha7a7f3f

The amount of traffic transferred by the network interface.
net.vetha7a7f3f

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vetha7a7f3f


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vetha7a7f3f

The current physical link state of the interface.
net_carrier.vetha7a7f3f

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vetha7a7f3f



VETHA88DF0A

net.vetha88df0a

net.vetha88df0a

The amount of traffic transferred by the network interface.
net.vetha88df0a

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vetha88df0a


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vetha88df0a

The current physical link state of the interface.
net_carrier.vetha88df0a

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vetha88df0a



VETHA856CB3

net.vetha856cb3

net.vetha856cb3

The amount of traffic transferred by the network interface.
net.vetha856cb3

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vetha856cb3


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vetha856cb3

The current physical link state of the interface.
net_carrier.vetha856cb3

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vetha856cb3



VETHB77B930

net.vethb77b930

net.vethb77b930

The amount of traffic transferred by the network interface.
net.vethb77b930

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vethb77b930


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vethb77b930

The current physical link state of the interface.
net_carrier.vethb77b930

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vethb77b930



VETHB547444

net.vethb547444

net.vethb547444

The amount of traffic transferred by the network interface.
net.vethb547444

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vethb547444


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vethb547444

The current physical link state of the interface.
net_carrier.vethb547444

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vethb547444



VETHCA8287E

net.vethca8287e

net.vethca8287e

The amount of traffic transferred by the network interface.
net.vethca8287e

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vethca8287e


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vethca8287e

The current physical link state of the interface.
net_carrier.vethca8287e

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vethca8287e



VETHE614541

net.vethe614541

net.vethe614541

The amount of traffic transferred by the network interface.
net.vethe614541

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vethe614541


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vethe614541

The current physical link state of the interface.
net_carrier.vethe614541

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vethe614541



VETHFFDC3F7

net.vethffdc3f7

net.vethffdc3f7

The amount of traffic transferred by the network interface.
net.vethffdc3f7

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.vethffdc3f7


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.vethffdc3f7

The current physical link state of the interface.
net_carrier.vethffdc3f7

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.vethffdc3f7



ENO2



The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.eno2
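
A brief sketch (an assumption about the host, not dashboard output): the
negotiated duplex and speed behind net_duplex.eno2 are exposed by sysfs and
read as "unknown" or -1 while the link is down.

from pathlib import Path

def link_mode(name: str) -> dict:
    base = Path("/sys/class/net") / name
    def read(attr: str) -> str:
        try:
            return (base / attr).read_text().strip()
        except OSError:
            return "unknown"
    return {"duplex": read("duplex"), "speed_mbit": read("speed")}

print(link_mode("eno2"))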


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.eno2

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.eno2



ENO3



The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.eno3


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.eno3

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.eno3



IDRAC



The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.idrac


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.idrac

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.idrac



DOCKER0



The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.docker0

The current physical link state of the interface.
net_carrier.docker0

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.docker0


--------------------------------------------------------------------------------


FIREWALL (NETFILTER)

Performance metrics of the netfilter components.



CONNECTION TRACKER

Netfilter Connection Tracker performance metrics. The connection tracker keeps
track of all connections of the machine, inbound and outbound. It works by
keeping a database with all open connections, tracking network and address
translation and connection expectations.

The number of entries in the conntrack table.
netfilter.conntrack_sockets
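
For context, the counter plotted in netfilter.conntrack_sockets corresponds to
the kernel's conntrack accounting. A small sketch, assuming the nf_conntrack
module is loaded so the usual procfs knobs exist:

def conntrack_usage():
    with open("/proc/sys/net/netfilter/nf_conntrack_count") as f:
        count = int(f.read())
    with open("/proc/sys/net/netfilter/nf_conntrack_max") as f:
        limit = int(f.read())
    return count, limit, 100.0 * count / limit

count, limit, pct = conntrack_usage()
print(f"{count}/{limit} conntrack entries ({pct:.1f}% of the table)")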


--------------------------------------------------------------------------------


SYSTEMD SERVICES

Resource utilization of systemd services. Netdata monitors all systemd services
via cgroups (the resource accounting mechanism used by containers).



CPU


Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
services.cpu
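
As a rough illustration of where these numbers come from, the sketch below
(assuming a cgroup v2 host where service cgroups live under
/sys/fs/cgroup/system.slice) reads the raw CPU time counters that feed
services.cpu:

from pathlib import Path

def service_cpu_usec() -> dict:
    usage = {}
    for unit in Path("/sys/fs/cgroup/system.slice").glob("*.service"):
        stat = unit / "cpu.stat"
        if not stat.exists():
            continue
        fields = dict(line.split() for line in stat.read_text().splitlines())
        usage[unit.name] = int(fields.get("usage_usec", 0))
    return usage

for name, usec in sorted(service_cpu_usec().items(), key=lambda kv: -kv[1])[:10]:
    print(f"{usec / 1e6:10.1f} s  {name}")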



MEM


The amount of used RAM.
services.mem_usage



SWAP


The amount of used swap memory.
services.swap_usage



DISK


The amount of data transferred from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
services.io_read

The amount of data transferred to specific devices as seen by the CFQ scheduler.
It is not updated when the CFQ scheduler is operating on a request queue.
services.io_write

The number of read operations performed on specific devices as seen by the CFQ
scheduler.
services.io_ops_read

The number of write operations performed on specific devices as seen by the CFQ
scheduler.
services.io_ops_write


--------------------------------------------------------------------------------


APPLICATIONS

Per application statistics are collected using apps.plugin. This plugin walks
through all processes and aggregates statistics for application groups. The
plugin also counts the resources of exited children. So for processes like shell
scripts, the reported values include the resources used by the commands these
scripts run within each timeframe.
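
These charts are also queryable over Netdata's REST API. A hedged sketch using
this dashboard's own hostname and the apps.cpu chart described below, assuming
the agent's default /api/v1/data endpoint is reachable (adjust chart, after and
points to taste):

import json
import urllib.request

url = ("https://mon-netdata.n00tz.net/api/v1/data"
       "?chart=apps.cpu&after=-60&points=60&format=json")
with urllib.request.urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

print(payload["labels"])   # ["time", <one label per application group>, ...]
print(payload["data"][0])  # most recent row: timestamp followed by per-group values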



CPU


Total CPU utilization (all cores). It includes user, system and guest time.
apps.cpu

The amount of time the CPU was busy executing code in user mode (all cores).
apps.cpu_user

The amount of time the CPU was busy executing code in kernel mode (all cores).
apps.cpu_system

apps.voluntary_ctxt_switches

apps.involuntary_ctxt_switches



DISK


The amount of data that has been read from the storage layer. Actual physical
disk I/O was required.
apps.preads

The amount of data that has been written to the storage layer. Actual physical
disk I/O was required.
apps.pwrites

The amount of data that has been read from the storage layer. It includes things
such as terminal I/O and is unaffected by whether or not actual physical disk
I/O was required (the read might have been satisfied from pagecache).
apps.lreads

The amount of data that has been written or shall be written to the storage
layer. It includes things such as terminal I/O and is unaffected by whether or
not actual physical disk I/O was required.
apps.lwrites

The number of open files and directories.
apps.files



MEM


Real memory (RAM) used by applications. This does not include shared memory.
apps.mem

apps.rss

Virtual memory allocated by applications. Check this article for more
information.
apps.vmem

The number of minor faults which have not required loading a memory page from
the disk. Minor page faults occur when a process needs data that is in memory
and is assigned to another process. They share memory pages between multiple
processes – no additional data needs to be read from disk to memory.
apps.minor_faults



PROCESSES


The number of threads.
apps.threads

The number of processes.
apps.processes

The period of time within which at least one process in the group has been
running.
apps.uptime

The number of open pipes. A pipe is a unidirectional data channel that can be
used for interprocess communication.
apps.pipes



SWAP


The amount of swapped-out virtual memory by anonymous private pages. This does
not include shared swap memory.
apps.swap

The number of major faults which have required loading a memory page from the
disk. Major page faults occur because of the absence of the required page from
the RAM. They are expected when a process starts or needs to read in additional
data and in these cases do not indicate a problem condition. However, a major
page fault can also be the result of reading memory pages that have been written
out to the swap file, which could indicate a memory shortage.
apps.major_faults
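
For reference, the per-process counters that apps.minor_faults and
apps.major_faults aggregate are the minflt and majflt fields of
/proc/<pid>/stat. A small sketch (PID 1 is used purely as an example):

def page_faults(pid: int):
    with open(f"/proc/{pid}/stat") as f:
        raw = f.read()
    # the command name sits in parentheses and may contain spaces; skip past it
    fields = raw[raw.rindex(")") + 2:].split()
    minor, major = int(fields[7]), int(fields[9])
    return minor, major

print(page_faults(1))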



NETWORK

Netdata also gives a summary of the eBPF charts in the Networking Stack submenu.

The number of open sockets. Sockets are a way to enable inter-process
communication between programs running on a server, or between programs running
on separate servers. This includes both network and UNIX sockets.
apps.sockets


--------------------------------------------------------------------------------


USER GROUPS

Per user group statistics are collected using apps.plugin. This plugin walks
through all processes and aggregates statistics per user group. The plugin also
counts the resources of exited children. So for processes like shell scripts,
the reported values include the resources used by the commands these scripts run
within each timeframe.



CPU


Total CPU utilization (all cores). It includes user, system and guest time.
groups.cpu

The amount of time the CPU was busy executing code in user mode (all cores).
groups.cpu_user

The amount of time the CPU was busy executing code in kernel mode (all cores).
groups.cpu_system

groups.voluntary_ctxt_switches

groups.involuntary_ctxt_switches



DISK


The amount of data that has been read from the storage layer. Actual physical
disk I/O was required.
groups.preads

The amount of data that has been written to the storage layer. Actual physical
disk I/O was required.
groups.pwrites

The amount of data that has been read from the storage layer. It includes things
such as terminal I/O and is unaffected by whether or not actual physical disk
I/O was required (the read might have been satisfied from pagecache).
groups.lreads

The amount of data that has been written or shall be written to the storage
layer. It includes things such as terminal I/O and is unaffected by whether or
not actual physical disk I/O was required.
groups.lwrites

The number of open files and directories.
groups.files



MEM


Real memory (RAM) used per user group. This does not include shared memory.
groups.mem

groups.rss

Virtual memory allocated per user group since the Netdata restart. Please check
this article for more information.
groups.vmem

The number of minor faults which have not required loading a memory page from
the disk. Minor page faults occur when a process needs data that is in memory
and is assigned to another process. They share memory pages between multiple
processes – no additional data needs to be read from disk to memory.
groups.minor_faults



PROCESSES


The number of threads.
groups.threads

The number of processes.
groups.processes

The period of time within which at least one process in the group has been
running.
groups.uptime

The number of open pipes. A pipe is a unidirectional data channel that can be
used for interprocess communication.
groups.pipes



SWAP


The amount of swapped-out virtual memory by anonymous private pages. This does
not include shared swap memory.
groups.swap

The number of major faults which have required loading a memory page from the
disk. Major page faults occur because of the absence of the required page from
the RAM. They are expected when a process starts or needs to read in additional
data and in these cases do not indicate a problem condition. However, a major
page fault can also be the result of reading memory pages that have been written
out to the swap file, which could indicate a memory shortage.
groups.major_faults



NET


The number of open sockets. Sockets are a way to enable inter-process
communication between programs running on a server, or between programs running
on separate servers. This includes both network and UNIX sockets.
groups.sockets


--------------------------------------------------------------------------------


USERS

Per user statistics are collected using apps.plugin. This plugin walks through
all processes and aggregates statistics per user. The plugin also counts the
resources of exited children. So for processes like shell scripts, the reported
values include the resources used by the commands these scripts run within each
timeframe.



CPU


Total CPU utilization (all cores). It includes user, system and guest time.
users.cpu

The amount of time the CPU was busy executing code in user mode (all cores).
users.cpu_user

The amount of time the CPU was busy executing code in kernel mode (all cores).
users.cpu_system

users.voluntary_ctxt_switches

users.involuntary_ctxt_switches



DISK


The amount of data that has been read from the storage layer. Actual physical
disk I/O was required.
users.preads

The amount of data that has been written to the storage layer. Actual physical
disk I/O was required.
users.pwrites

The amount of data that has been read from the storage layer. It includes things
such as terminal I/O and is unaffected by whether or not actual physical disk
I/O was required (the read might have been satisfied from pagecache).
users.lreads

The amount of data that has been written or shall be written to the storage
layer. It includes things such as terminal I/O and is unaffected by whether or
not actual physical disk I/O was required.
users.lwrites

The number of open files and directories.
users.files



MEM


Real memory (RAM) used per user. This does not include shared memory.
users.mem

users.rss

Virtual memory allocated per user since the Netdata restart. Please check this
article for more information.
users.vmem

The number of minor faults which have not required loading a memory page from
the disk. Minor page faults occur when a process needs data that is in memory
and is assigned to another process. They share memory pages between multiple
processes – no additional data needs to be read from disk to memory.
users.minor_faults



PROCESSES


The number of threads.
users.threads

The number of processes.
users.processes

The period of time within which at least one process in the group has been
running.
users.uptime

The number of open pipes. A pipe is a unidirectional data channel that can be
used for interprocess communication.
users.pipes



SWAP


The amount of swapped-out virtual memory by anonymous private pages. This does
not include shared swap memory.
users.swap

The number of major faults which have required loading a memory page from the
disk. Major page faults occur because of the absence of the required page from
the RAM. They are expected when a process starts or needs to read in additional
data and in these cases do not indicate a problem condition. However, a major
page fault can also be the result of reading memory pages that have been written
out to the swap file, which could indicate a memory shortage.
users.major_faults



NET


The number of open sockets. Sockets are a way to enable inter-process
communication between programs running on a server, or between programs running
on separate servers. This includes both network and UNIX sockets.
users.sockets


--------------------------------------------------------------------------------


ANOMALY DETECTION

Charts relating to anomaly detection. Increased anomalous dimensions or a
higher-than-usual anomaly_rate could be a sign of abnormal behaviour. Read our
anomaly detection guide for more details.



DIMENSIONS


Total count of dimensions considered anomalous or normal.
anomaly_detection.dimensions_on_6958c25a-2737-11ee-bd35-0242ac140019



ANOMALY RATE


Percentage of anomalous dimensions.
anomaly_detection.anomaly_rate_on_6958c25a-2737-11ee-bd35-0242ac140019
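
In other words, the anomaly rate is simply the anomalous dimensions expressed
as a share of all dimensions scored in the window. A trivial sketch of that
arithmetic (the counts are made-up examples):

def anomaly_rate(anomalous: int, normal: int) -> float:
    total = anomalous + normal
    return 100.0 * anomalous / total if total else 0.0

print(anomaly_rate(anomalous=12, normal=2388))   # -> 0.5 (%)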



ANOMALY DETECTION


Flags (0 or 1) to show when an anomaly event has been triggered by the detector.
anomaly_detection.anomaly_detection_on_6958c25a-2737-11ee-bd35-0242ac140019

anomaly_detection.ml_running_on_6958c25a-2737-11ee-bd35-0242ac140019


--------------------------------------------------------------------------------


BAZARR

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_bazarr.cpu_limit

cgroup_bazarr.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_bazarr.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_bazarr.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_bazarr.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_bazarr.throttled_duration
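
To make the throttling metrics concrete, the sketch below (assuming a cgroup
v2 host; the cgroup path is a hypothetical placeholder, since Docker's layout
depends on the cgroup driver) derives a throttled percentage from the
container's cpu.stat counters:

from pathlib import Path

def throttle_stats(cgroup_path: str) -> dict:
    text = (Path(cgroup_path) / "cpu.stat").read_text()
    s = dict(line.split() for line in text.splitlines())
    periods = int(s.get("nr_periods", 0))
    throttled = int(s.get("nr_throttled", 0))
    return {
        "throttled_pct": 100.0 * throttled / periods if periods else 0.0,
        "throttled_seconds": int(s.get("throttled_usec", 0)) / 1e6,
    }

print(throttle_stats("/sys/fs/cgroup/system.slice/docker-<container-id>.scope"))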

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_bazarr.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_bazarr.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_bazarr.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_bazarr.cpu_full_pressure_stall_time
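
The pressure numbers above follow the kernel's PSI format. A small sketch that
parses it, reading the host-wide /proc/pressure/cpu by default (a cgroup v2
container exposes the same format in its cpu.pressure file):

def read_psi(path: str = "/proc/pressure/cpu") -> dict:
    out = {}
    with open(path) as f:
        # lines look like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345"
        for line in f:
            kind, *pairs = line.split()
            out[kind] = {k: float(v) for k, v in (p.split("=") for p in pairs)}
    return out

psi = read_psi()
print(psi["some"]["avg10"], "% of the last 10s had at least one task stalled on CPU")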



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_bazarr.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_bazarr.mem_usage_limit
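
A minimal sketch of the same ratio on a cgroup v2 host (the cgroup path is a
hypothetical placeholder): current usage from memory.current measured against
the configured memory.max limit.

from pathlib import Path

def mem_utilization(cgroup: str):
    base = Path(cgroup)
    current = int((base / "memory.current").read_text())
    limit_raw = (base / "memory.max").read_text().strip()
    if limit_raw == "max":   # no limit configured, so no meaningful percentage
        return None
    return 100.0 * current / int(limit_raw)

print(mem_utilization("/sys/fs/cgroup/system.slice/docker-<container-id>.scope"))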

The amount of used RAM and swap memory.
cgroup_bazarr.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_bazarr.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_bazarr.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_bazarr.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bazarr.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_bazarr.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bazarr.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_bazarr.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_bazarr.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_bazarr.serviced_ops
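
For comparison, on a cgroup v2 host the per-device byte and operation counters
behind these charts can be read from the container's io.stat file (the cgroup
path below is a hypothetical placeholder):

from pathlib import Path

def io_stat(cgroup: str) -> dict:
    stats = {}
    # lines look like: "8:0 rbytes=... wbytes=... rios=... wios=..."
    for line in (Path(cgroup) / "io.stat").read_text().splitlines():
        dev, *pairs = line.split()
        stats[dev] = {k: int(v) for k, v in (p.split("=") for p in pairs)}
    return stats

for dev, s in io_stat("/sys/fs/cgroup/system.slice/docker-<container-id>.scope").items():
    print(dev, s.get("rbytes", 0), "bytes read,", s.get("wbytes", 0), "bytes written")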

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bazarr.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_bazarr.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bazarr.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_bazarr.io_full_pressure_stall_time


--------------------------------------------------------------------------------


BEDROCK-SLOAN-CREATIVE

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_bedrock-sloan-creative.cpu_limit

cgroup_bedrock-sloan-creative.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_bedrock-sloan-creative.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_bedrock-sloan-creative.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_bedrock-sloan-creative.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_bedrock-sloan-creative.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_bedrock-sloan-creative.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_bedrock-sloan-creative.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_bedrock-sloan-creative.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_bedrock-sloan-creative.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_bedrock-sloan-creative.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_bedrock-sloan-creative.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_bedrock-sloan-creative.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_bedrock-sloan-creative.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_bedrock-sloan-creative.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_bedrock-sloan-creative.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-creative.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_bedrock-sloan-creative.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-creative.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_bedrock-sloan-creative.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_bedrock-sloan-creative.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_bedrock-sloan-creative.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-creative.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_bedrock-sloan-creative.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-creative.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_bedrock-sloan-creative.io_full_pressure_stall_time


--------------------------------------------------------------------------------


BEDROCK-SLOAN-FLAT

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_bedrock-sloan-flat.cpu_limit

cgroup_bedrock-sloan-flat.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_bedrock-sloan-flat.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_bedrock-sloan-flat.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_bedrock-sloan-flat.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_bedrock-sloan-flat.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_bedrock-sloan-flat.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_bedrock-sloan-flat.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_bedrock-sloan-flat.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_bedrock-sloan-flat.cpu_full_pressure_stall_time
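
All of these charts can also be queried programmatically. A minimal sketch
using Netdata's v1 data API, with a placeholder host and one of the chart IDs
from this page (the dimension labels returned depend on the chart):

    # Sketch: fetch the last 10 minutes of a pressure chart as JSON.
    import json
    import urllib.request

    HOST = "http://localhost:19999"  # placeholder; point this at the agent
    CHART = "cgroup_bedrock-sloan-flat.cpu_some_pressure"

    url = f"{HOST}/api/v1/data?chart={CHART}&after=-600&points=10&format=json"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)

    print(payload["labels"])      # first column is "time", then one per dimension
    for row in payload["data"]:
        print(row)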



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_bedrock-sloan-flat.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_bedrock-sloan-flat.mem_usage_limit
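
A minimal sketch of deriving the same usage-versus-limit figure straight from a
cgroup-v2 memory controller, with a hypothetical cgroup path (cgroup v1 exposes
memory.usage_in_bytes and memory.limit_in_bytes instead):

    # Sketch: RAM usage as a percentage of the configured cgroup limit.
    from pathlib import Path

    cg = Path("/sys/fs/cgroup/lxc/bedrock-sloan-flat")  # hypothetical path

    usage = int((cg / "memory.current").read_text())
    limit_raw = (cg / "memory.max").read_text().strip()

    if limit_raw == "max":
        print(f"usage {usage / 2**20:.1f} MiB, no memory limit configured")
    else:
        limit = int(limit_raw)
        print(f"usage {usage / 2**20:.1f} MiB of {limit / 2**20:.1f} MiB "
              f"({100 * usage / limit:.1f}%)")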

The amount of used RAM and swap memory.
cgroup_bedrock-sloan-flat.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_bedrock-sloan-flat.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_bedrock-sloan-flat.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_bedrock-sloan-flat.pgfaults
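
The dirty/writeback and page-fault numbers above are all fields of the cgroup's
memory.stat file. A minimal sketch of pulling them out, with a hypothetical
cgroup path (cgroup v2 names the first two file_dirty and file_writeback, while
cgroup v1 calls them dirty and writeback):

    # Sketch: read dirty, writeback and page-fault counters from memory.stat.
    from pathlib import Path

    cg = Path("/sys/fs/cgroup/lxc/bedrock-sloan-flat")  # hypothetical path

    stat = {}
    for line in (cg / "memory.stat").read_text().splitlines():
        key, value = line.split()
        stat[key] = int(value)

    print("dirty bytes:    ", stat.get("file_dirty", stat.get("dirty")))
    print("writeback bytes:", stat.get("file_writeback", stat.get("writeback")))
    print("page faults:    ", stat.get("pgfault"))
    print("major faults:   ", stat.get("pgmajfault"))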

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-flat.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_bedrock-sloan-flat.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-flat.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_bedrock-sloan-flat.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_bedrock-sloan-flat.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_bedrock-sloan-flat.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_bedrock-sloan-flat.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_bedrock-sloan-flat.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_bedrock-sloan-flat.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_bedrock-sloan-flat.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 CLOUDBEAVER

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_cloudbeaver.cpu_limit

cgroup_cloudbeaver.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_cloudbeaver.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_cloudbeaver.cpu
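
A figure like this can be reproduced by sampling the cgroup's cumulative CPU
time and dividing the delta by wall-clock time. A minimal sketch for a
cgroup-v2 host, with a hypothetical cgroup path (100% corresponds to one core
fully busy for the whole interval):

    # Sketch: estimate CPU utilization over a one-second sampling window.
    import time
    from pathlib import Path

    cg = Path("/sys/fs/cgroup/lxc/cloudbeaver")  # hypothetical path

    def cpu_usec() -> int:
        stat = dict(line.split()
                    for line in (cg / "cpu.stat").read_text().splitlines())
        return int(stat["usage_usec"])   # cumulative CPU time in microseconds

    before = cpu_usec()
    time.sleep(1.0)
    after = cpu_usec()

    print(f"cpu utilization: {(after - before) / 1_000_000 * 100:.1f}%")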

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_cloudbeaver.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_cloudbeaver.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_cloudbeaver.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_cloudbeaver.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_cloudbeaver.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_cloudbeaver.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_cloudbeaver.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_cloudbeaver.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_cloudbeaver.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_cloudbeaver.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_cloudbeaver.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_cloudbeaver.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_cloudbeaver.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_cloudbeaver.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_cloudbeaver.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_cloudbeaver.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_cloudbeaver.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_cloudbeaver.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_cloudbeaver.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_cloudbeaver.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_cloudbeaver.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_cloudbeaver.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 GITEA

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_gitea.cpu_limit

cgroup_gitea.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_gitea.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_gitea.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_gitea.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_gitea.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_gitea.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_gitea.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_gitea.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_gitea.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_gitea.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_gitea.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_gitea.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_gitea.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_gitea.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_gitea.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_gitea.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_gitea.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_gitea.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_gitea.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_gitea.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_gitea.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_gitea.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_gitea.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_gitea.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_gitea.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAPROXY-HASS

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_haproxy-hass.cpu_limit

cgroup_haproxy-hass.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_haproxy-hass.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_haproxy-hass.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_haproxy-hass.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_haproxy-hass.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_haproxy-hass.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_haproxy-hass.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-hass.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_haproxy-hass.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_haproxy-hass.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_haproxy-hass.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_haproxy-hass.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_haproxy-hass.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_haproxy-hass.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_haproxy-hass.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-hass.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_haproxy-hass.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-hass.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_haproxy-hass.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_haproxy-hass.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_haproxy-hass.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-hass.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_haproxy-hass.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-hass.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_haproxy-hass.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAPROXY-NODERED

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_haproxy-nodered.cpu_limit

cgroup_haproxy-nodered.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_haproxy-nodered.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_haproxy-nodered.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_haproxy-nodered.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_haproxy-nodered.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_haproxy-nodered.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_haproxy-nodered.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-nodered.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_haproxy-nodered.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_haproxy-nodered.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_haproxy-nodered.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_haproxy-nodered.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_haproxy-nodered.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_haproxy-nodered.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_haproxy-nodered.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-nodered.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_haproxy-nodered.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-nodered.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_haproxy-nodered.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-nodered.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_haproxy-nodered.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-nodered.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_haproxy-nodered.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAPROXY-OKD-API

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_haproxy-okd-api.cpu_limit

cgroup_haproxy-okd-api.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_haproxy-okd-api.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_haproxy-okd-api.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_haproxy-okd-api.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_haproxy-okd-api.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-api.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_haproxy-okd-api.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-api.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_haproxy-okd-api.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_haproxy-okd-api.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_haproxy-okd-api.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_haproxy-okd-api.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_haproxy-okd-api.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_haproxy-okd-api.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_haproxy-okd-api.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-api.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_haproxy-okd-api.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-api.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_haproxy-okd-api.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_haproxy-okd-api.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_haproxy-okd-api.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-api.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_haproxy-okd-api.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-api.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_haproxy-okd-api.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAPROXY-OKD-APPS

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_haproxy-okd-apps.cpu_limit

cgroup_haproxy-okd-apps.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_haproxy-okd-apps.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_haproxy-okd-apps.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_haproxy-okd-apps.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_haproxy-okd-apps.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-apps.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_haproxy-okd-apps.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-apps.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_haproxy-okd-apps.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_haproxy-okd-apps.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_haproxy-okd-apps.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_haproxy-okd-apps.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_haproxy-okd-apps.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_haproxy-okd-apps.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_haproxy-okd-apps.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-apps.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_haproxy-okd-apps.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-apps.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_haproxy-okd-apps.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_haproxy-okd-apps.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_haproxy-okd-apps.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-okd-apps.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_haproxy-okd-apps.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-okd-apps.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_haproxy-okd-apps.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAPROXY-ZWAVE2MQTT

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_haproxy-zwave2mqtt.cpu_limit

cgroup_haproxy-zwave2mqtt.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_haproxy-zwave2mqtt.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_haproxy-zwave2mqtt.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_haproxy-zwave2mqtt.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_haproxy-zwave2mqtt.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_haproxy-zwave2mqtt.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_haproxy-zwave2mqtt.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-zwave2mqtt.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_haproxy-zwave2mqtt.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_haproxy-zwave2mqtt.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_haproxy-zwave2mqtt.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_haproxy-zwave2mqtt.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_haproxy-zwave2mqtt.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_haproxy-zwave2mqtt.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_haproxy-zwave2mqtt.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-zwave2mqtt.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_haproxy-zwave2mqtt.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-zwave2mqtt.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_haproxy-zwave2mqtt.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_haproxy-zwave2mqtt.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_haproxy-zwave2mqtt.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_haproxy-zwave2mqtt.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_haproxy-zwave2mqtt.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HAX-WEB

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_hax-web.cpu_limit

cgroup_hax-web.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_hax-web.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_hax-web.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_hax-web.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_hax-web.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_hax-web.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_hax-web.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_hax-web.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_hax-web.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_hax-web.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_hax-web.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_hax-web.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_hax-web.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_hax-web.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_hax-web.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hax-web.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_hax-web.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hax-web.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_hax-web.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_hax-web.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_hax-web.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hax-web.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_hax-web.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_hax-web.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_hax-web.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 HEDGEDOC

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_hedgedoc.cpu_limit

cgroup_hedgedoc.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_hedgedoc.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_hedgedoc.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_hedgedoc.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_hedgedoc.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_hedgedoc.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_hedgedoc.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_hedgedoc.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_hedgedoc.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_hedgedoc.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_hedgedoc.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_hedgedoc.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_hedgedoc.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_hedgedoc.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_hedgedoc.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hedgedoc.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_hedgedoc.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hedgedoc.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_hedgedoc.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_hedgedoc.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_hedgedoc.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_hedgedoc.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_hedgedoc.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 JACKETT

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_jackett.cpu_limit

cgroup_jackett.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_jackett.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_jackett.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_jackett.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_jackett.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_jackett.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_jackett.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_jackett.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_jackett.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer will start
killing the tasks belonging to the cgroup.
cgroup_jackett.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_jackett.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_jackett.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_jackett.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_jackett.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_jackett.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_jackett.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_jackett.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has a severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_jackett.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_jackett.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_jackett.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_jackett.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_jackett.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_jackett.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O resource simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_jackett.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_jackett.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 LIDARR

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_lidarr.cpu_limit

cgroup_lidarr.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_lidarr.cpu_limit
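
A small sketch of where such a limit comes from: with the CFS bandwidth
controller, cgroup v2 stores the quota and period together in cpu.max,
while cgroup v1 splits them into cpu.cfs_quota_us and cpu.cfs_period_us;
quota divided by period gives the number of CPUs the cgroup may consume.
The path below is a placeholder.

    # Sketch: effective CPU limit (in CPUs) from the cgroup v2 cpu.max file.
    def cpu_limit_v2(cgroup_dir):
        with open(f"{cgroup_dir}/cpu.max") as fh:
            quota, period = fh.read().split()
        if quota == "max":
            return None                       # no quota: the whole machine is available
        return int(quota) / int(period)       # e.g. 150000 / 100000 -> 1.5 CPUs

    limit = cpu_limit_v2("/sys/fs/cgroup/<container-cgroup>")  # placeholder path
    print("unlimited" if limit is None else f"{limit:.2f} CPUs")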

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_lidarr.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_lidarr.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_lidarr.throttled_duration
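
Both throttling charts are derived from counters in the cgroup's cpu.stat
file. A minimal sketch for cgroup v2 follows (placeholder path; cgroup v1
reports the same counters, with the duration as throttled_time in
nanoseconds).

    # Sketch: read the CFS throttling counters from a cgroup v2 cpu.stat file.
    def read_cpu_stat(path):
        with open(path) as fh:
            return {k: int(v) for k, v in (line.split() for line in fh)}

    s = read_cpu_stat("/sys/fs/cgroup/<container-cgroup>/cpu.stat")  # placeholder path
    if s.get("nr_periods"):
        pct = 100 * s.get("nr_throttled", 0) / s["nr_periods"]
        print(f"throttled in {pct:.1f}% of periods, "
              f"{s.get('throttled_usec', 0) / 1e6:.2f}s throttled in total")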

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_lidarr.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_lidarr.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_lidarr.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_lidarr.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_lidarr.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_lidarr.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_lidarr.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_lidarr.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_lidarr.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_lidarr.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_lidarr.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_lidarr.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_lidarr.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_lidarr.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_lidarr.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_lidarr.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_lidarr.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_lidarr.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_lidarr.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_lidarr.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 MONREDIS

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_monredis.cpu_limit

cgroup_monredis.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_monredis.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_monredis.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_monredis.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_monredis.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_monredis.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_monredis.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_monredis.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_monredis.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_monredis.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_monredis.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_monredis.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_monredis.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_monredis.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_monredis.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_monredis.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_monredis.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_monredis.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_monredis.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_monredis.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_monredis.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_monredis.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_monredis.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 MYSQL

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_mysql.cpu_limit

cgroup_mysql.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_mysql.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_mysql.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_mysql.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_mysql.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_mysql.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_mysql.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_mysql.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_mysql.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_mysql.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_mysql.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_mysql.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_mysql.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_mysql.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_mysql.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_mysql.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_mysql.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_mysql.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_mysql.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_mysql.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_mysql.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_mysql.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_mysql.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_mysql.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_mysql.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 NETDATA

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_netdata.cpu_limit

cgroup_netdata.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_netdata.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_netdata.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_netdata.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_netdata.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_netdata.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_netdata.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_netdata.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_netdata.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_netdata.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_netdata.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_netdata.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_netdata.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_netdata.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_netdata.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_netdata.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_netdata.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_netdata.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_netdata.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_netdata.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_netdata.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_netdata.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_netdata.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_netdata.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_netdata.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 OMBI

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_ombi.cpu_limit

cgroup_ombi.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_ombi.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_ombi.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_ombi.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_ombi.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_ombi.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_ombi.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_ombi.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_ombi.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_ombi.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_ombi.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_ombi.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_ombi.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_ombi.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_ombi.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_ombi.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_ombi.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_ombi.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_ombi.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_ombi.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_ombi.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_ombi.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_ombi.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_ombi.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_ombi.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 PLEX

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_plex.cpu_limit

cgroup_plex.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_plex.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_plex.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_plex.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_plex.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_plex.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_plex.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_plex.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_plex.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_plex.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_plex.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_plex.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_plex.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_plex.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_plex.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_plex.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_plex.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_plex.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_plex.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_plex.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_plex.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_plex.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_plex.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_plex.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_plex.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 PORTAINER

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_portainer.cpu_limit

cgroup_portainer.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_portainer.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_portainer.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_portainer.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_portainer.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_portainer.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_portainer.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_portainer.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_portainer.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_portainer.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_portainer.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_portainer.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_portainer.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_portainer.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_portainer.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_portainer.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_portainer.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_portainer.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_portainer.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_portainer.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_portainer.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_portainer.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_portainer.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 PRIVATEBIN

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_privatebin.cpu_limit

cgroup_privatebin.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_privatebin.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_privatebin.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_privatebin.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_privatebin.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_privatebin.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_privatebin.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_privatebin.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_privatebin.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_privatebin.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_privatebin.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_privatebin.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_privatebin.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_privatebin.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_privatebin.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_privatebin.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_privatebin.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_privatebin.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_privatebin.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_privatebin.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_privatebin.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_privatebin.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_privatebin.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_privatebin.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_privatebin.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 PROJECTSEND

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_projectsend.cpu_limit

cgroup_projectsend.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_projectsend.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_projectsend.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_projectsend.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_projectsend.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_projectsend.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_projectsend.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_projectsend.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_projectsend.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_projectsend.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_projectsend.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_projectsend.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_projectsend.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_projectsend.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_projectsend.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_projectsend.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_projectsend.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_projectsend.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_projectsend.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_projectsend.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_projectsend.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_projectsend.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_projectsend.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 PROMETHEUS

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_prometheus.cpu_limit

cgroup_prometheus.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_prometheus.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_prometheus.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_prometheus.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_prometheus.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_prometheus.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_prometheus.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_prometheus.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_prometheus.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_prometheus.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_prometheus.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_prometheus.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_prometheus.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_prometheus.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_prometheus.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_prometheus.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_prometheus.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_prometheus.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_prometheus.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_prometheus.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_prometheus.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_prometheus.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_prometheus.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_prometheus.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_prometheus.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 QBITTORRENT

Container resource utilization metrics. Netdata reads this information from
cgroups (abbreviated from control groups), a Linux kernel feature that limits
and accounts resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups together with namespaces (that offer isolation
between processes) provide what we usually call: containers.
cgroup_qbittorrent.cpu_limit

cgroup_qbittorrent.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_qbittorrent.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_qbittorrent.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_qbittorrent.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_qbittorrent.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_qbittorrent.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_qbittorrent.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_qbittorrent.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_qbittorrent.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_qbittorrent.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_qbittorrent.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_qbittorrent.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_qbittorrent.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_qbittorrent.writeback


Memory page fault statistics. Pgfault - all page faults. Swap - major page
faults.
cgroup_qbittorrent.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_qbittorrent.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_qbittorrent.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_qbittorrent.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_qbittorrent.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_qbittorrent.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_qbittorrent.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_qbittorrent.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_qbittorrent.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_qbittorrent.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_qbittorrent.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 RADARR

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_radarr.cpu_limit

cgroup_radarr.mem_usage_limit
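
As background for how one chart family per container appears: each container
is a cgroup directory somewhere under /sys/fs/cgroup, and collectors simply
walk that tree. A rough sketch of the discovery step; the "docker-<id>.scope"
naming pattern is an assumption (it matches the systemd cgroup driver, other
drivers lay the tree out differently):

# Sketch: find Docker container cgroups under a cgroup-v2 unified hierarchy.
# Assumes the systemd cgroup driver naming scheme "docker-<container-id>.scope".
from pathlib import Path

root = Path("/sys/fs/cgroup")
for scope in root.rglob("docker-*.scope"):
    if scope.is_dir():
        container_id = scope.name[len("docker-"):-len(".scope")]
        print(container_id[:12], "->", scope)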



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_radarr.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_radarr.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_radarr.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_radarr.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_radarr.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_radarr.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_radarr.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_radarr.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_radarr.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_radarr.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_radarr.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_radarr.mem
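
Those memory.stat fields are plain "key value" pairs, so a minimal parser is
enough to inspect them by hand (cgroup v2, hypothetical path); the file_dirty,
file_writeback, pgfault and pgmajfault keys are the same counters that feed
the writeback and page-fault charts below.

# Sketch: parse cgroup-v2 memory.stat into a dict of byte/event counters.
from pathlib import Path

def read_memory_stat(cgroup: Path) -> dict:
    stats = {}
    for line in (cgroup / "memory.stat").read_text().splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

mem = read_memory_stat(Path("/sys/fs/cgroup/system.slice/docker-example.scope"))  # hypothetical
print("anon:", mem.get("anon"), "file:", mem.get("file"))
print("dirty:", mem.get("file_dirty"), "writeback:", mem.get("file_writeback"))
print("pgfault:", mem.get("pgfault"), "pgmajfault:", mem.get("pgmajfault"))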

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_radarr.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_radarr.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_radarr.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_radarr.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_radarr.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_radarr.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_radarr.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_radarr.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_radarr.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_radarr.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_radarr.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_radarr.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 REDIS

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_redis.cpu_limit

cgroup_redis.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_redis.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_redis.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_redis.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_redis.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_redis.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_redis.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_redis.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_redis.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_redis.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_redis.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_redis.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_redis.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_redis.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_redis.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_redis.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_redis.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_redis.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_redis.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_redis.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_redis.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_redis.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_redis.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 SMTP

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_smtp.cpu_limit

cgroup_smtp.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_smtp.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_smtp.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_smtp.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_smtp.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_smtp.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_smtp.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_smtp.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_smtp.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_smtp.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_smtp.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_smtp.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_smtp.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_smtp.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_smtp.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_smtp.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_smtp.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_smtp.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_smtp.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_smtp.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_smtp.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_smtp.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_smtp.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 SONARR

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_sonarr.cpu_limit

cgroup_sonarr.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_sonarr.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_sonarr.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_sonarr.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_sonarr.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_sonarr.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_sonarr.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_sonarr.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_sonarr.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_sonarr.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_sonarr.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_sonarr.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_sonarr.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_sonarr.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_sonarr.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_sonarr.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_sonarr.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_sonarr.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_sonarr.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_sonarr.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_sonarr.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_sonarr.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_sonarr.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_sonarr.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_sonarr.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 SPEEDTEST

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_speedtest.cpu_limit

cgroup_speedtest.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_speedtest.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_speedtest.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_speedtest.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_speedtest.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_speedtest.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_speedtest.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_speedtest.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_speedtest.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_speedtest.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_speedtest.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_speedtest.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_speedtest.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_speedtest.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_speedtest.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_speedtest.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_speedtest.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_speedtest.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_speedtest.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_speedtest.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_speedtest.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_speedtest.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_speedtest.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_speedtest.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_speedtest.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 STATPING

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_statping.cpu_limit

cgroup_statping.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_statping.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_statping.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_statping.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_statping.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_statping.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_statping.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_statping.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_statping.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_statping.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_statping.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_statping.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_statping.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_statping.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_statping.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_statping.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_statping.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_statping.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_statping.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_statping.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_statping.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_statping.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_statping.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_statping.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_statping.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 TAUTULLI

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_tautulli.cpu_limit

cgroup_tautulli.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_tautulli.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_tautulli.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_tautulli.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_tautulli.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_tautulli.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_tautulli.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_tautulli.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_tautulli.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_tautulli.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_tautulli.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_tautulli.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_tautulli.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_tautulli.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_tautulli.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_tautulli.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_tautulli.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_tautulli.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_tautulli.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_tautulli.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_tautulli.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_tautulli.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_tautulli.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_tautulli.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_tautulli.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 TRAEFIK

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_traefik.cpu_limit

cgroup_traefik.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_traefik.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_traefik.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_traefik.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_traefik.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_traefik.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_traefik.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_traefik.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_traefik.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_traefik.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_traefik.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_traefik.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_traefik.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_traefik.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_traefik.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_traefik.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_traefik.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_traefik.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_traefik.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_traefik.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_traefik.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_traefik.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_traefik.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_traefik.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_traefik.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 YOPASS

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_yopass.cpu_limit

cgroup_yopass.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_yopass.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_yopass.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_yopass.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_yopass.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_yopass.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_yopass.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_yopass.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_yopass.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_yopass.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_yopass.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_yopass.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_yopass.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_yopass.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_yopass.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_yopass.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_yopass.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_yopass.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_yopass.io_full_pressure_stall_time


--------------------------------------------------------------------------------


 YOPASS-MEMCACHED

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), make up what we usually call containers.
cgroup_yopass-memcached.cpu_limit

cgroup_yopass-memcached.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy will be throttled and are not
allowed to run again until the next period.
cgroup_yopass-memcached.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_yopass-memcached.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_yopass-memcached.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_yopass-memcached.throttled_duration

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_yopass-memcached.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_yopass-memcached.cpu_some_pressure_stall_time

CPU Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on CPU resource simultaneously. The ratios are
tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_yopass-memcached.cpu_full_pressure

The amount of time all non-idle processes have been stalled due to CPU
congestion.
cgroup_yopass-memcached.cpu_full_pressure_stall_time



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, OOM killer will start killing
the tasks belonging to the cgroup.
cgroup_yopass-memcached.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the RAM
usage of a cgroup exceeds the limit, OOM killer will start killing the tasks
belonging to the cgroup.
cgroup_yopass-memcached.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_yopass-memcached.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_yopass-memcached.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_yopass-memcached.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_yopass-memcached.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass-memcached.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_yopass-memcached.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass-memcached.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_yopass-memcached.memory_full_pressure_stall_time



DISK


I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass-memcached.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_yopass-memcached.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_yopass-memcached.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_yopass-memcached.io_full_pressure_stall_time


--------------------------------------------------------------------------------


SENSORS

Readings of the configured system sensors.



TEMPERATURE


sensors.coretemp-isa-0000_temperature

sensors.coretemp-isa-0001_temperature

sensors.nvme-pci-4100_temperature

sensors.nvme-pci-4300_temperature

sensors.nvme-pci-4400_temperature
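
Every chart named on this page can also be fetched as raw data from the
Netdata REST API of this host. A small sketch pulling the last minute of one
of the temperature charts above (chart name and host are taken from this page;
points, grouping and format are standard /api/v1/data request parameters):

# Sketch: read the last 60 seconds of a sensors chart from the Netdata REST API.
import json
import urllib.parse
import urllib.request

base = "https://mon-netdata.n00tz.net/api/v1/data"
params = urllib.parse.urlencode({
    "chart": "sensors.coretemp-isa-0000_temperature",
    "after": -60,            # relative: last 60 seconds
    "points": 12,            # ~5-second resolution
    "group": "average",
    "format": "json",
})

with urllib.request.urlopen(f"{base}?{params}", timeout=10) as resp:
    payload = json.load(resp)

# The plain (non-jsonwrap) JSON reply carries "labels" and "data" arrays.
print(payload["labels"])      # ["time", "<dimension>", ...]
for row in payload["data"]:
    print(row)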


--------------------------------------------------------------------------------


DOCKER LOCAL

State and disk usage of Docker containers.
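
The go.d docker collector behind these charts reads container state and health
from the Docker Engine API. A rough client-side equivalent of the
containers_state and *_health_status charts, sketched with the docker Python
SDK (assumptions: the "docker" package is installed and the local Docker
socket is accessible):

# Sketch: container state and health, roughly mirroring docker_local.containers_state
# and the per-container *_health_status charts. Requires the "docker" package and
# access to the local Docker socket.
import collections
import docker

client = docker.from_env()
states = collections.Counter()

for c in client.containers.list(all=True):
    states[c.status] += 1                      # running / exited / paused / ...
    health = c.attrs.get("State", {}).get("Health", {}).get("Status", "none")
    print(f"{c.name:<20} state={c.status:<10} health={health}")

print(dict(states))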



CONTAINERS


docker_local.containers_state

docker_local.healthy_containers

docker_local.container_apache-drop_state

docker_local.container_bazarr_state

docker_local.container_bedrock-sloan-creative_state

docker_local.container_bedrock-sloan-flat_state

docker_local.container_cloudbeaver_state

docker_local.container_gitea_state

docker_local.container_grafana_state

docker_local.container_haproxy-hass_state

docker_local.container_haproxy-nodered_state

docker_local.container_haproxy-okd-api_state

docker_local.container_haproxy-okd-apps_state

docker_local.container_haproxy-zwave2mqtt_state

docker_local.container_hax-web_state

docker_local.container_hedgedoc_state

docker_local.container_jackett_state

docker_local.container_lidarr_state

docker_local.container_monredis_state

docker_local.container_mysql_state

docker_local.container_netdata_state

docker_local.container_ombi_state

docker_local.container_plex_state

docker_local.container_portainer_state

docker_local.container_privatebin_state

docker_local.container_projectsend_state

docker_local.container_prometheus_state

docker_local.container_qbittorrent_state

docker_local.container_radarr_state

docker_local.container_redis_state

docker_local.container_smtp_state

docker_local.container_sonarr_state

docker_local.container_speedtest_state

docker_local.container_statping_state

docker_local.container_tautulli_state

docker_local.container_traefik_state

docker_local.container_yopass_state

docker_local.container_apache-drop_health_status

docker_local.container_bazarr_health_status

docker_local.container_bedrock-sloan-creative_health_status

docker_local.container_bedrock-sloan-flat_health_status

docker_local.container_cloudbeaver_health_status

docker_local.container_gitea_health_status

docker_local.container_grafana_health_status

docker_local.container_haproxy-hass_health_status

docker_local.container_haproxy-nodered_health_status

docker_local.container_haproxy-okd-api_health_status

docker_local.container_haproxy-okd-apps_health_status

docker_local.container_haproxy-zwave2mqtt_health_status

docker_local.container_hax-web_health_status

docker_local.container_hedgedoc_health_status

docker_local.container_jackett_health_status

docker_local.container_lidarr_health_status

docker_local.container_monredis_health_status

docker_local.container_mysql_health_status

docker_local.container_netdata_health_status

docker_local.container_ombi_health_status

docker_local.container_plex_health_status

docker_local.container_portainer_health_status

docker_local.container_privatebin_health_status

docker_local.container_projectsend_health_status

docker_local.container_prometheus_health_status

docker_local.container_qbittorrent_health_status

docker_local.container_radarr_health_status

docker_local.container_redis_health_status

docker_local.container_smtp_health_status

docker_local.container_sonarr_health_status

docker_local.container_speedtest_health_status

docker_local.container_statping_health_status

docker_local.container_tautulli_health_status

docker_local.container_traefik_health_status

docker_local.container_yopass_health_status



IMAGES


docker_local.images_count

docker_local.images_size


--------------------------------------------------------------------------------


NETDATA MONITORING

Performance metrics for the operation of netdata itself and its plugins.



NETDATA


netdata.server_cpu

netdata.memory

netdata.memory_buffers

netdata.uptime



API


netdata.clients

netdata.requests

netdata.net

The netdata API response time measures the time netdata needed to serve
requests. This time includes everything, from the reception of the first byte
of a request to the dispatch of the last byte of its reply; it therefore
includes all network latencies involved (e.g. a client on a slow network will
influence these metrics). A query sketch for this chart follows this list.
netdata.response_time

netdata.compression_ratio
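
(Illustration, not part of the dashboard.) The API charts above can themselves
be read through the agent's REST API. The sketch below assumes the agent is
reachable at http://localhost:19999 (a hypothetical address) and uses the
standard /api/v1/data endpoint to fetch the last minute of
netdata.response_time:

    import json
    import urllib.request

    # Assumed agent address; replace with your own netdata host/port.
    BASE = "http://localhost:19999"

    # Fetch the last 60 seconds of the netdata.response_time chart as JSON.
    url = (BASE + "/api/v1/data"
           "?chart=netdata.response_time&after=-60&format=json")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    # 'labels' names the dimensions; 'data' holds rows of [timestamp, values...].
    print(data["labels"])
    for row in data["data"][:5]:
        print(row)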



QUERIES


netdata.queries

netdata.db_points_read

netdata.db_points_results

netdata.db_points_stored



SQLITE3


netdata.sqlite3_queries

netdata.sqlite3_queries_by_status

netdata.sqlite3_rows

netdata.sqlite3_metatada_cache

netdata.sqlite3_context_cache



STATSD


netdata.statsd_metrics

netdata.statsd_useful_metrics

netdata.statsd_events

netdata.statsd_reads

netdata.statsd_bytes

netdata.statsd_packets

netdata.tcp_connects

netdata.tcp_connected

netdata.private_charts



DBENGINE MEMORY


netdata.dbengine_memory

netdata.dbengine_buffers



DBENGINE METRICS


netdata.dbengine_metrics

netdata.dbengine_metrics_registry_operations

netdata.dbengine_metrics_registry_references



DBENGINE QUERY ROUTER


netdata.dbengine_cache_hit_ratio

netdata.dbengine_queries

netdata.dbengine_queries_running

netdata.dbengine_query_pages_metadata_source

netdata.dbengine_query_pages_data_source

netdata.dbengine_query_next_page

netdata.dbengine_query_next_page_issues

netdata.dbengine_query_pages_disk_load

netdata.dbengine_events

netdata.dbengine_prep_timings

netdata.dbengine_query_timings



DBENGINE MAIN CACHE


netdata.dbengine_main_cache_hit_ratio

netdata.dbengine_main_cache_operations

netdata.dbengine_main_cache_memory

netdata.dbengine_main_target_memory

netdata.dbengine_main_cache_pages

netdata.dbengine_main_cache_memory_changes

netdata.dbengine_main_cache_memory_migrations

netdata.dbengine_main_cache_events

netdata.dbengine_main_waste_events

netdata.dbengine_main_cache_workers



DBENGINE OPEN CACHE


netdata.dbengine_open_cache_hit_ratio

netdata.dbengine_open_cache_operations

netdata.dbengine_open_cache_memory

netdata.dbengine_open_target_memory

netdata.dbengine_open_cache_pages

netdata.dbengine_open_cache_memory_changes

netdata.dbengine_open_cache_memory_migrations

netdata.dbengine_open_cache_events

netdata.dbengine_open_waste_events

netdata.dbengine_open_cache_workers



DBENGINE EXTENT CACHE


netdata.dbengine_extent_cache_hit_ratio

netdata.dbengine_extent_cache_operations

netdata.dbengine_extent_cache_memory

netdata.dbengine_extent_target_memory

netdata.dbengine_extent_cache_pages

netdata.dbengine_extent_cache_memory_changes

netdata.dbengine_extent_cache_memory_migrations

netdata.dbengine_extent_cache_events

netdata.dbengine_extent_waste_events

netdata.dbengine_extent_cache_workers



DBENGINE IO


netdata.dbengine_compression_ratio

netdata.dbengine_io_throughput

netdata.dbengine_io_operations

netdata.dbengine_global_errors

netdata.dbengine_global_file_descriptors



APPS.PLUGIN


netdata.apps_cpu

netdata.apps_sizes

netdata.apps_fix

netdata.apps_children_fix



GO.D


netdata.execution_time_of_docker_local



PYTHON.D


netdata.runtime_sensors



MACHINE LEARNING


netdata.machine_learning_status_on_6958c25a-2737-11ee-bd35-0242ac140019

netdata.ml_models_consulted

netdata.metric_types_on_6958c25a-2737-11ee-bd35-0242ac140019

netdata.training_status_on_6958c25a-2737-11ee-bd35-0242ac140019

netdata.training_queue_0_stats

netdata.training_queue_1_stats

netdata.training_queue_2_stats

netdata.training_queue_3_stats

netdata.training_queue_0_time_stats

netdata.training_queue_1_time_stats

netdata.training_queue_2_time_stats

netdata.training_queue_3_time_stats

netdata.training_queue_0_results

netdata.training_queue_1_results

netdata.training_queue_2_results

netdata.training_queue_3_results



DICTIONARIES COLLECTORS


netdata.dictionaries.collectors.dictionaries

netdata.dictionaries.collectors.items

netdata.dictionaries.collectors.ops

netdata.dictionaries.collectors.memory



DICTIONARIES CONTEXTS


netdata.dictionaries.context.dictionaries

netdata.dictionaries.context.items

netdata.dictionaries.context.ops

netdata.dictionaries.context.callbacks

netdata.dictionaries.context.memory

netdata.dictionaries.context.spins



DICTIONARIES FUNCTIONS


netdata.dictionaries.functions.dictionaries

netdata.dictionaries.functions.items

netdata.dictionaries.functions.ops

netdata.dictionaries.functions.callbacks

netdata.dictionaries.functions.memory



DICTIONARIES HEALTH


netdata.dictionaries.health.dictionaries

netdata.dictionaries.health.items

netdata.dictionaries.health.ops

netdata.dictionaries.health.callbacks

netdata.dictionaries.health.memory

netdata.dictionaries.health.spins



DICTIONARIES HOSTS


netdata.dictionaries.rrdhost.dictionaries

netdata.dictionaries.rrdhost.items

netdata.dictionaries.rrdhost.ops

netdata.dictionaries.rrdhost.memory

netdata.dictionaries.rrdhost.spins



DICTIONARIES LABELS


netdata.dictionaries.labels.dictionaries

netdata.dictionaries.labels.items

netdata.dictionaries.labels.ops

netdata.dictionaries.labels.callbacks

netdata.dictionaries.labels.memory



DICTIONARIES OTHER


netdata.dictionaries.other.dictionaries

netdata.dictionaries.other.items

netdata.dictionaries.other.ops

netdata.dictionaries.other.memory



DICTIONARIES RRD


netdata.dictionaries.rrdset_rrddim.dictionaries

netdata.dictionaries.rrdset_rrddim.items

netdata.dictionaries.rrdset_rrddim.ops

netdata.dictionaries.rrdset_rrddim.callbacks

netdata.dictionaries.rrdset_rrddim.memory

netdata.dictionaries.rrdset_rrddim.spins



HEARTBEAT


netdata.heartbeat



STRINGS


netdata.strings_ops

netdata.strings_entries

netdata.strings_memory



WORKERS


netdata.workers_cpu



WORKERS ACLK HOST SYNC


netdata.workers_time_aclksync

netdata.workers_cpu_aclksync

netdata.workers_jobs_by_type_aclksync

netdata.workers_busy_time_by_type_aclksync



WORKERS CONTEXTS


netdata.workers_time_rrdcontext

netdata.workers_cpu_rrdcontext

netdata.workers_jobs_by_type_rrdcontext

netdata.workers_busy_time_by_type_rrdcontext

netdata.workers_rrdcontext_value_hub_queue_size

netdata.workers_rrdcontext_value_post_processing_queue_size



WORKERS DBENGINE INSTANCES


netdata.workers_time_dbengine

netdata.workers_cpu_dbengine

netdata.workers_jobs_by_type_dbengine

netdata.workers_busy_time_by_type_dbengine

netdata.workers_dbengine_value_opcodes_waiting

netdata.workers_dbengine_value_works_dispatched

netdata.workers_dbengine_value_works_executing



WORKERS GLOBAL STATISTICS


netdata.workers_time_stats

netdata.workers_cpu_stats

netdata.workers_jobs_by_type_stats

netdata.workers_busy_time_by_type_stats

netdata.workers_threads_stats



WORKERS HEALTH ALARMS


netdata.workers_time_health

netdata.workers_cpu_health

netdata.workers_jobs_by_type_health

netdata.workers_busy_time_by_type_health



WORKERS LIBUV THREADPOOL


netdata.workers_time_libuv

netdata.workers_cpu_libuv

netdata.workers_jobs_by_type_libuv

netdata.workers_busy_time_by_type_libuv

netdata.workers_threads_libuv



WORKERS METADATA SYNC


netdata.workers_time_metasync

netdata.workers_cpu_metasync

netdata.workers_jobs_by_type_metasync

netdata.workers_busy_time_by_type_metasync



WORKERS ML DETECTION


netdata.workers_time_mldetect

netdata.workers_cpu_mldetect

netdata.workers_jobs_by_type_mldetect

netdata.workers_busy_time_by_type_mldetect



WORKERS ML TRAINING


netdata.workers_time_mltrain

netdata.workers_cpu_mltrain

netdata.workers_jobs_by_type_mltrain

netdata.workers_busy_time_by_type_mltrain

netdata.workers_threads_mltrain



WORKERS PLUGIN CGROUPS


netdata.workers_time_cgroups

netdata.workers_cpu_cgroups

netdata.workers_jobs_by_type_cgroups

netdata.workers_busy_time_by_type_cgroups



WORKERS PLUGIN CGROUPS FIND


netdata.workers_time_cgroupsdisc

netdata.workers_cpu_cgroupsdisc

netdata.workers_jobs_by_type_cgroupsdisc

netdata.workers_busy_time_by_type_cgroupsdisc



WORKERS PLUGIN DISKSPACE


netdata.workers_time_diskspace

netdata.workers_cpu_diskspace

netdata.workers_jobs_by_type_diskspace

netdata.workers_busy_time_by_type_diskspace



WORKERS PLUGIN IDLEJITTER


netdata.workers_time_idlejitter

netdata.workers_cpu_idlejitter

netdata.workers_jobs_by_type_idlejitter

netdata.workers_busy_time_by_type_idlejitter



WORKERS PLUGIN PROC


netdata.workers_time_proc

netdata.workers_cpu_proc

netdata.workers_jobs_by_type_proc

netdata.workers_busy_time_by_type_proc



WORKERS PLUGIN PROC NETDEV


netdata.workers_time_netdev

netdata.workers_cpu_netdev

netdata.workers_jobs_by_type_netdev

netdata.workers_busy_time_by_type_netdev



WORKERS PLUGIN STATSD


netdata.workers_time_statsd

netdata.workers_cpu_statsd

netdata.workers_jobs_by_type_statsd

netdata.workers_busy_time_by_type_statsd



WORKERS PLUGIN STATSD FLUSH


netdata.workers_time_statsdflush

netdata.workers_cpu_statsdflush

netdata.workers_jobs_by_type_statsdflush

netdata.workers_busy_time_by_type_statsdflush



WORKERS PLUGIN TC


netdata.workers_time_tc

netdata.workers_cpu_tc

netdata.workers_jobs_by_type_tc

netdata.workers_busy_time_by_type_tc

netdata.workers_tc_value_tc_script_execution_time

netdata.workers_tc_value_number_of_devices

netdata.workers_tc_value_number_of_classes



WORKERS PLUGIN TIMEX


netdata.workers_time_timex

netdata.workers_cpu_timex

netdata.workers_jobs_by_type_timex

netdata.workers_busy_time_by_type_timex



WORKERS PLUGINS.D


netdata.workers_time_pluginsd

netdata.workers_cpu_pluginsd

netdata.workers_jobs_by_type_pluginsd

netdata.workers_busy_time_by_type_pluginsd

netdata.workers_threads_pluginsd



WORKERS REPLICATION SENDER


netdata.workers_time_replication

netdata.workers_cpu_replication

netdata.workers_jobs_by_type_replication

netdata.workers_busy_time_by_type_replication

netdata.workers_replication_value_pending_requests

netdata.workers_replication_value_no_room_requests

netdata.workers_replication_value_completion

netdata.workers_replication_rate_added_requests

netdata.workers_replication_rate_finished_requests

netdata.workers_replication_rate_sender_resets

netdata.workers_replication_value_senders_full



WORKERS SERVICE


netdata.workers_time_service

netdata.workers_cpu_service

netdata.workers_jobs_by_type_service

netdata.workers_busy_time_by_type_service



WORKERS WEB SERVER


netdata.workers_time_web

netdata.workers_cpu_web

netdata.workers_jobs_by_type_web

netdata.workers_busy_time_by_type_web


--------------------------------------------------------------------------------

 * System Overview
   * cpu
   * load
   * disk
   * ram
   * swap
   * network
   * processes
   * idlejitter
   * interrupts
   * softirqs
   * softnet
   * entropy
   * uptime
   * clock synchronization
   * ipc semaphores
   * ipc shared memory
 * CPUs
   * utilization
   * interrupts
   * softirqs
   * softnet
   * throttling
   * cpufreq
   * cpuidle
 * Memory
   * system
   * kernel
   * slab
   * hugepages
   * numa
   * ecc
   * fragmentation
 * Disks
   * dm-0
   * dm-1
   * nvme0n1
   * nvme1n1
   * nvme2n1
   * sda
   * sdb
   * sdc
   * sdd
   * sde
   * sdf
   * sdg
   * sdh
   * sdi
   * sdj
   * sdk
   * sdl
   * sdm
   * /
   * /dev
   * /dev/shm
 * ZFS Cache
   * size
   * accesses
   * efficiency
   * operations
   * hashes
 * ZFS pools
   * fast
   * storage
 * Networking Stack
   * tcp
   * broadcast
   * ecn
 * IPv4 Networking
   * sockets
   * packets
   * icmp
   * tcp
   * udp
 * IPv6 Networking
   * packets
   * errors
   * tcp6
 * Network Interfaces
   * br-076fd1df071d
   * br-6c8211df4b6b
   * br-fe979abe701a
   * eno1
   * eno4
   * veth0d390d0
   * veth0e83464
   * veth01ac962
   * veth1bdbee8
   * veth1ca47d1
   * veth1e183bb
   * veth2b6a90b
   * veth5d309e1
   * veth5d4486a
   * veth5ebd40a
   * veth8bf35e1
   * veth8e0c533
   * veth8ff63c9
   * veth9f0ac4f
   * veth10f52d1
   * veth52cfd3c
   * veth55b61a9
   * veth77c9ad9
   * veth83a7ea1
   * veth97f9022
   * veth142e238
   * veth410b6c2
   * veth3030df8
   * veth7667eb7
   * veth9437141
   * vetha5eee49
   * vetha7a7f3f
   * vetha88df0a
   * vetha856cb3
   * vethb77b930
   * vethb547444
   * vethca8287e
   * vethe614541
   * vethffdc3f7
   * eno2
   * eno3
   * idrac
   * docker0
 * Firewall (netfilter)
   * connection tracker
 * systemd Services
   * cpu
   * mem
   * swap
   * disk
 * Applications
   * cpu
   * disk
   * mem
   * processes
   * swap
   * network
 * User Groups
   * cpu
   * disk
   * mem
   * processes
   * swap
   * net
 * Users
   * cpu
   * disk
   * mem
   * processes
   * swap
   * net
 * Anomaly Detection
   * dimensions
   * anomaly rate
   * anomaly detection
 *  bazarr
   * cpu
   * mem
   * disk
 *  bedrock-sloan-creative
   * cpu
   * mem
   * disk
 *  bedrock-sloan-flat
   * cpu
   * mem
   * disk
 *  cloudbeaver
   * cpu
   * mem
   * disk
 *  gitea
   * cpu
   * mem
   * disk
 *  haproxy-hass
   * cpu
   * mem
   * disk
 *  haproxy-nodered
   * cpu
   * mem
   * disk
 *  haproxy-okd-api
   * cpu
   * mem
   * disk
 *  haproxy-okd-apps
   * cpu
   * mem
   * disk
 *  haproxy-zwave2mqtt
   * cpu
   * mem
   * disk
 *  hax-web
   * cpu
   * mem
   * disk
 *  hedgedoc
   * cpu
   * mem
   * disk
 *  jackett
   * cpu
   * mem
   * disk
 *  lidarr
   * cpu
   * mem
   * disk
 *  monredis
   * cpu
   * mem
   * disk
 *  mysql
   * cpu
   * mem
   * disk
 *  netdata
   * cpu
   * mem
   * disk
 *  ombi
   * cpu
   * mem
   * disk
 *  plex
   * cpu
   * mem
   * disk
 *  portainer
   * cpu
   * mem
   * disk
 *  privatebin
   * cpu
   * mem
   * disk
 *  projectsend
   * cpu
   * mem
   * disk
 *  prometheus
   * cpu
   * mem
   * disk
 *  qbittorrent
   * cpu
   * mem
   * disk
 *  radarr
   * cpu
   * mem
   * disk
 *  redis
   * cpu
   * mem
   * disk
 *  smtp
   * cpu
   * mem
   * disk
 *  sonarr
   * cpu
   * mem
   * disk
 *  speedtest
   * cpu
   * mem
   * disk
 *  statping
   * cpu
   * mem
   * disk
 *  tautulli
   * cpu
   * mem
   * disk
 *  traefik
   * cpu
   * mem
   * disk
 *  yopass
   * cpu
   * mem
   * disk
 *  yopass-memcached
   * cpu
   * mem
   * disk
 * Sensors
   * temperature
 * Docker local
   * containers
   * images
 * Netdata Monitoring
   * netdata
   * api
   * queries
   * sqlite3
   * statsd
   * dbengine memory
   * dbengine metrics
   * dbengine query router
   * dbengine main cache
   * dbengine open cache
   * dbengine extent cache
   * dbengine io
   * apps.plugin
   * go.d
   * python.d
   * machine learning
   * dictionaries collectors
   * dictionaries contexts
   * dictionaries functions
   * dictionaries health
   * dictionaries hosts
   * dictionaries labels
   * dictionaries other
   * dictionaries rrd
   * heartbeat
   * strings
   * workers
   * workers aclk host sync
   * workers contexts
   * workers dbengine instances
   * workers global statistics
   * workers health alarms
   * workers libuv threadpool
   * workers metadata sync
   * workers ML detection
   * workers ML training
   * workers plugin cgroups
   * workers plugin cgroups find
   * workers plugin diskspace
   * workers plugin idlejitter
   * workers plugin proc
   * workers plugin proc netdev
   * workers plugin statsd
   * workers plugin statsd flush
   * workers plugin tc
   * workers plugin timex
   * workers plugins.d
   * workers replication sender
   * workers service
   * workers web server
 * Add more charts
 * Add more alarms
 * Every second, Netdata collects 9,591 metrics on 4145e76070ae, presents them
   in 2,034 charts and monitors them with 480 alarms.
    
   netdata
   v1.40.0-104-nightly
 * Do you like Netdata?
   Give us a star!
   
   And spread the word!



Netdata

Copyright 2020, Netdata, Inc.


Terms and conditions | Privacy Policy
Released under GPL v3 or later. Netdata uses third party tools.



XSS PROTECTION

This dashboard is about to render data from server:



To protect your privacy, the dashboard will check all data transferred for
cross-site scripting (XSS).
This is CPU intensive, so your browser might be a bit slower.

If you trust the remote server, you can disable XSS protection.
In this case, any remote dashboard decoration code (javascript) will also run.

If you don't trust the remote server, you should keep the protection on.
The dashboard will run slower and remote dashboard decoration code will not run,
but better be safe than sorry...
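
(Illustration only.) The check described above is performed by the dashboard's
own JavaScript. The Python sketch below merely shows the kind of sanitisation
such a check implies (escaping HTML-significant characters in untrusted
strings before they are rendered); it is not the code netdata actually runs.

    import html

    def sanitize(value):
        """Recursively escape HTML-significant characters in untrusted data.

        A generic illustration only; netdata's real check is done by the
        dashboard's JavaScript, not by this function.
        """
        if isinstance(value, str):
            return html.escape(value)
        if isinstance(value, list):
            return [sanitize(v) for v in value]
        if isinstance(value, dict):
            return {sanitize(k): sanitize(v) for k, v in value.items()}
        return value

    # Example: a chart title coming from an untrusted server.
    print(sanitize({"title": "<script>alert(1)</script>"}))
    # {'title': '&lt;script&gt;alert(1)&lt;/script&gt;'}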

Keep protecting me / I don't need this, the server is mine

PRINT THIS NETDATA DASHBOARD

netdata dashboards cannot be captured directly, since all but the visible
charts are lazily loaded and hidden.
To capture the whole page with all the charts rendered, a new browser window
will pop up and render all the charts at once. The new browser window will
maintain the current pan and zoom settings of the charts, so align the charts
before proceeding.

This process will put some CPU and memory pressure on your browser.
To avoid congesting network and server resources, the charts will be
downloaded from the netdata server sequentially.
Please do not print netdata dashboards on paper!

Print Close

IMPORT A NETDATA SNAPSHOT

netdata can export and import dashboard snapshots. Any netdata can import the
snapshot of any other netdata. The snapshots are not uploaded to a server. They
are handled entirely by your web browser, on your computer.

Click here to select the netdata snapshot file to import

Browse for a snapshot file (or drag it and drop it here), then click Import to
render it.



Filename | Hostname | Origin URL | Charts Info | Snapshot Info | Time Range | Comments



Snapshot files contain both data and javascript code. Make sure you trust the
files you import!

Import Close

EXPORT A SNAPSHOT

Please wait while we collect all the dashboard data...

Select the desired resolution of the snapshot. This is the number of seconds
of data per point.

Filename
Compression
 * Select Compression
 * uncompressed
 * pako.deflate (gzip, binary)
 * pako.deflate.base64 (gzip, ascii)
 * lzstring.uri (LZ, ascii)
 * lzstring.utf16 (LZ, utf16)
 * lzstring.base64 (LZ, ascii)

Comments

Select snapshot resolution. This controls the size of the snapshot file.

The generated snapshot will include all charts of this dashboard, for the
visible timeframe, so align, pan and zoom the charts as needed. The scroll
position of the dashboard will also be saved. The snapshot will be downloaded
to your computer as a file that can be imported back into any netdata
dashboard (it does not have to be imported on this server).

Snapshot files include all the information of the dashboard, including the URL
of the origin server, its netdata unique ID, etc. So, if you share the snapshot
file with third parties, they will be able to access the origin server, if this
server is exposed on the internet.
Snapshots are handled entirely by the web browser. The netdata servers are not
aware of them.
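
(Illustration only.) The resolution and compression settings above determine
the snapshot size. The sketch below, using made-up example numbers, estimates
the number of points for a timeframe at a given resolution and shows how
gzip + base64 (the Python analogue of the pako.deflate.base64 option) shrinks
a JSON payload; it is not the format netdata actually writes.

    import base64
    import gzip
    import json

    # Hypothetical example values, not taken from this dashboard.
    timeframe_seconds = 3600      # visible timeframe: 1 hour
    resolution_seconds = 5        # seconds of data per point
    charts = 100                  # charts included in the snapshot

    points_per_chart = timeframe_seconds // resolution_seconds
    print(f"~{points_per_chart} points per chart, "
          f"~{points_per_chart * charts} points in total")

    # Roughly mimic 'pako.deflate.base64': gzip, then ASCII-safe base64.
    payload = json.dumps({"points": list(range(points_per_chart))}).encode()
    compressed = base64.b64encode(gzip.compress(payload))
    print(f"raw: {len(payload)} bytes, gzip+base64: {len(compressed)} bytes")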

Export Cancel

NETDATA ALARMS

 * Active
 * All
 * Log

Close

NETDATA DASHBOARD OPTIONS

These are browser settings. Each viewer has its own. They do not affect the
operation of your netdata server.
Settings take effect immediately and are saved permanently to browser local
storage (except the refresh on focus / always option).
To reset all options (including chart sizes) to their defaults, click here.

 * Performance
 * Synchronization
 * Visual
 * Locale

On Focus / Always
When to refresh the charts?
When set to On Focus, the charts will stop being updated if the page / tab does
not have the focus of the user. When set to Always, the charts will always be
refreshed. Set it to On Focus to lower the CPU requirements of the browser
(and extend the battery of laptops and tablets) when this page does not have
your focus. Set it to Always to work on another window (e.g. to change the
settings of something) and have the charts auto-refresh in this window.
Non Zero / All
Which dimensions to show?
When set to Non Zero, dimensions that have all their values (within the current
view) set to zero will not be transferred from the netdata server (except if all
dimensions of the chart are zero, in which case this setting does nothing - all
dimensions are transferred and shown). When set to All, all dimensions will
always be shown. Set it to Non Zero to lower the data transferred between
netdata and your browser, lower the CPU requirements of your browser (fewer
lines to draw) and increase the focus on the legends (fewer entries at the
legends).
Destroy / Hide
How to handle hidden charts?
When set to Destroy, charts that are not in the current viewport of the browser
(are above, or below the visible area of the page), will be destroyed and
re-created if and when they become visible again. When set to Hide, the
not-visible charts will be just hidden, to simplify the DOM and speed up your
browser. Set it to Destroy, to lower the memory requirements of your browser.
Set it to Hide for faster restoration of charts on page scrolling.
Async / Sync
Page scroll handling?
When set to Sync, charts will be examined for their visibility immediately after
scrolling. On slow computers this may impact the smoothness of page scrolling.
To update the page when scrolling ends, set it to Async. Set it to Sync for
immediate chart updates when scrolling. Set it to Async for smoother page
scrolling on slower computers.

Parallel / Sequential
Which chart refresh policy to use?
When set to Parallel, visible charts are refreshed in parallel (all queries
are sent to the netdata server at once) and are rendered asynchronously. When
set to Sequential, charts are refreshed one after another. Set it to Parallel
if your browser can cope with it (most modern browsers can); set it to
Sequential if you work on an older or slower computer.
Resync / Best Effort
Shall we re-sync chart refreshes?
When set to Resync, the dashboard will attempt to re-synchronize all the charts
so that they are refreshed concurrently. When set to Best Effort, each chart may
be refreshed with a little time difference to the others. Normally, the
dashboard starts refreshing them in parallel, but depending on the speed of your
computer and the network latencies, charts start having a slight time
difference. Setting this to Resync will attempt to re-synchronize the charts on
every update. Setting it to Best Effort may lower the pressure on your browser
and the network.
Sync / Don't Sync
Sync hover selection on all charts?
When enabled, a selection on one chart will automatically select the same time
on all other visible charts and the legends of all visible charts will be
updated to show the selected values. When disabled, only the chart getting the
user's attention will be selected. Enable it to get better insight into the
data. Disable it if you are on a very slow computer that cannot keep up.

Right / Below
Where do you want to see the legend?
Netdata can place the legend in two positions: Below charts (the default) or to
the Right of charts.
Switching this will reload the dashboard.
Dark / White
Which theme to use?
Netdata comes with two themes: Dark (the default) and White.
Switching this will reload the dashboard.
Help Me / No Help
Do you need help?
Netdata can show help balloons in some areas of the dashboard. If these
balloons bother you, disable them using this switch.
Switching this will reload the dashboard.
Pad / Don't Pad
Enable data padding when panning and zooming?
When set to Pad, the charts will be padded with more data, both before and
after the visible area, thus giving the impression that the whole database is
loaded. This padding happens only after the first pan or zoom operation on the
chart (initially all charts have only the visible data). When set to Don't Pad,
only the visible data will be transferred from the netdata server, even after
the first pan and zoom operation.
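
(Illustration only.) A minimal sketch of the padding idea described above: the
query window sent to the server is widened on both sides of the visible area.
The pad factor is an assumption for illustration, not the dashboard's actual
algorithm.

    def padded_window(view_after, view_before, pad_factor=1.0):
        """Return a query window padded on both sides of the visible area.

        pad_factor=1.0 fetches one extra view-width before and after the
        visible timeframe (an assumed value for illustration).
        """
        width = view_before - view_after
        pad = int(width * pad_factor)
        return view_after - pad, view_before + pad

    # Visible window: a 10-minute range (unix timestamps).
    print(padded_window(1700000000, 1700000600))
    # (1699999400, 1700001200)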
Smooth / Rough
Enable Bézier lines on charts?
When set to Smooth, the chart libraries that support it will plot smooth
curves instead of simple straight lines to connect the points.
Keep in mind dygraphs, the main charting library in netdata dashboards, can
only smooth line charts. It cannot smooth area or stacked charts. Setting it
to Rough can lower the CPU resources consumed by your browser.

These settings are applied gradually, as charts are updated. To force them,
refresh the dashboard now.
Scale Units / Fixed Units
Enable auto-scaling of select units?
When set to Scale Units, the values shown will be scaled dynamically (e.g.
1000 kilobits will be shown as 1 megabit). Netdata can auto-scale these
original units: kilobits/s, kilobytes/s, KB/s, KB, MB, and GB. When set to
Fixed Units, all values will be rendered using the original units maintained
by the netdata server.
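
(Illustration only.) The auto-scaling described above amounts to walking up a
unit ladder, dividing by 1000 at each step. A minimal sketch, not netdata's
actual implementation:

    def autoscale(value, unit="kilobits/s"):
        """Scale a value up the unit ladder, 1000 at a time.

        A sketch of the behaviour described above (1000 kilobits/s shown as
        1 megabit/s); not netdata's actual code.
        """
        ladder = ["kilobits/s", "megabits/s", "gigabits/s", "terabits/s"]
        i = ladder.index(unit)
        while value >= 1000 and i + 1 < len(ladder):
            value /= 1000.0
            i += 1
        return value, ladder[i]

    print(autoscale(1000))        # (1.0, 'megabits/s')
    print(autoscale(2_500_000))   # (2.5, 'gigabits/s')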
Celsius / Fahrenheit
Which units to use for temperatures?
Set the temperature units of the dashboard.
Time / Seconds
Convert seconds to time?
When set to Time, charts that present seconds will show DDd:HH:MM:SS. When set
to Seconds, the raw number of seconds will be presented.
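
(Illustration only.) The DDd:HH:MM:SS rendering mentioned above is a simple
divmod chain; a minimal sketch, not the dashboard's own code:

    def seconds_to_dhms(total_seconds):
        """Format a number of seconds as DDd:HH:MM:SS, as described above."""
        days, rem = divmod(int(total_seconds), 86400)
        hours, rem = divmod(rem, 3600)
        minutes, seconds = divmod(rem, 60)
        return f"{days:02d}d:{hours:02d}:{minutes:02d}:{seconds:02d}"

    print(seconds_to_dhms(93784))   # 01d:02:03:04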

Close

UPDATE CHECK

Your netdata version: v1.40.0-104-nightly




New version of netdata available!

Latest version: v1.46.0-82-nightly

Click here for the change log and
click here for directions on updating your netdata installation.

We suggest reviewing the change log for new features you may be interested in,
or important bug fixes you may need.
Keeping your netdata updated is generally a good idea.

--------------------------------------------------------------------------------

For progress reports and key netdata updates: Join the Netdata Community
You can also follow netdata on twitter, follow netdata on facebook, or watch
netdata on github.
Check Now Close

SIGN IN

Signing in to netdata.cloud will synchronize the list of your netdata
monitored nodes known at the registry. This may include server hostnames, URLs
and identification GUIDs.

After you upgrade all your netdata servers, your private registry will not be
needed any more.

Are you sure you want to proceed?

Cancel Sign In

DELETE ?

You are about to delete, from your personal list of netdata servers, the
following server:




Are you sure you want to do this?


Keep in mind, this server will be added back if and when you visit it again.


keep it / delete it

SWITCH NETDATA REGISTRY IDENTITY

You can copy and paste the following ID to all your browsers (e.g. work and
home).
All the browsers with the same ID will identify you, so please don't share this
with others.

Either copy this ID and paste it to another browser, or paste here the ID you
have taken from another browser.
Keep in mind that:
 * when you switch ID, your previous ID will be lost forever - this is
   irreversible.
 * both IDs (your old one and the new one) must list this netdata in their
   personal lists.
 * both IDs have to be known by the registry: .
 * to get a new ID, just clear your browser cookies.


cancel / impersonate



Checking known URLs for this server...



Checks may fail if you are viewing an HTTPS page and the server to be checked is
HTTP only.


Close




LEARN ABOUT NETDATA CLOUD!

Netdata Cloud is a FREE service that complements the Netdata Agent, to provide:
 * Infrastructure level dashboards (each chart aggregates data from multiple
   nodes)
 * Central dispatch of alert notifications
 * Custom dashboards editor
 * Intelligence assisted troubleshooting, to help surface the root cause of
   issues

Have a look, you will be surprised!

Remember my choice
Wow! Let’s go to Netdata Cloud
Later, stay at the agent dashboard