monitoring.p3x.de Open in urlscan Pro
193.26.156.65  Public Scan

URL: https://monitoring.p3x.de/
Submission Tags: phishingrod
Submission: On May 27 via api from DE — Scanned from DE

Form analysis 5 forms found in the DOM

<form id="optionsForm1" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="stop_updates_when_focus_is_lost" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="On Focus" data-off="Always" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">On Focus</label><label class="btn btn-danger active toggle-off">Always</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>When to refresh the charts?</strong><br><small>When set to <b>On Focus</b>, the charts will stop being updated if the page / tab does not have the focus of the user. When set to <b>Always</b>, the charts will always be refreshed. Set it to <b>On Focus</b> to lower the CPU requirements of the browser (and extend the battery of laptops and tablets) when this page does not have your focus. Set it to <b>Always</b> to work in another window (e.g. change the settings of something) and have the charts auto-refresh in this window.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="eliminate_zero_dimensions" type="checkbox" checked="checked" data-toggle="toggle" data-on="Non Zero" data-off="All"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Non Zero</label><label class="btn btn-default active toggle-off">All</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which dimensions to show?</strong><br><small>When set to <b>Non Zero</b>, dimensions that have all their values (within the current view) set to zero will not be transferred from the netdata server (except if all dimensions of the chart are zero, in which case this setting does nothing and all dimensions are transferred and shown). When set to <b>All</b>, all dimensions will always be shown. Set it to <b>Non Zero</b> to lower the data transferred between netdata and your browser, lower the CPU requirements of your browser (fewer lines to draw) and reduce clutter in the legends (fewer entries).</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="destroy_on_hide" type="checkbox" data-toggle="toggle" data-on="Destroy" data-off="Hide" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Destroy</label><label class="btn btn-default active toggle-off">Hide</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>How to handle hidden charts?</strong><br><small>When set to <b>Destroy</b>, charts that are not in the current viewport of the browser (above or below the visible area of the page) will be destroyed and re-created if and when they become visible again. When set to <b>Hide</b>, the not-visible charts will simply be hidden, to simplify the DOM and speed up your browser. Set it to <b>Destroy</b> to lower the memory requirements of your browser. Set it to <b>Hide</b> for faster restoration of charts on page scrolling.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="async_on_scroll" type="checkbox" data-toggle="toggle" data-on="Async" data-off="Sync" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Async</label><label class="btn btn-default active toggle-off">Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Page scroll handling?</strong><br><small>When set to <b>Sync</b>, charts will be examined for their visibility immediately after scrolling. On slow computers this may impact the smoothness of page scrolling.
              To update the page when scrolling ends, set it to <b>Async</b>. Set it to <b>Sync</b> for immediate chart updates when scrolling. Set it to <b>Async</b> for smoother page scrolling on slower computers.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm2" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="parallel_refresher" type="checkbox" checked="checked" data-toggle="toggle" data-on="Parallel" data-off="Sequential"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Parallel</label><label class="btn btn-default active toggle-off">Sequential</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which chart refresh policy to use?</strong><br><small>When set to <b>Parallel</b>, visible charts are refreshed in parallel (all queries are sent to the netdata server in parallel) and are rendered asynchronously. When set to <b>Sequential</b>, charts are refreshed one after another. Set it to <b>Parallel</b> if your browser can cope with it (most modern browsers can); set it to <b>Sequential</b> if you work on an older/slower computer.</small>
          </td>
        </tr>
        <tr class="option-row" id="concurrent_refreshes_row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="concurrent_refreshes" type="checkbox" checked="checked" data-toggle="toggle" data-on="Resync" data-off="Best Effort"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Resync</label><label class="btn btn-default active toggle-off">Best Effort</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Shall we re-sync chart refreshes?</strong><br><small>When set to <b>Resync</b>, the dashboard will attempt to re-synchronize all the charts so that they are refreshed concurrently. When set to
              <b>Best Effort</b>, each chart may be refreshed with a little time difference to the others. Normally, the dashboard starts refreshing them in parallel, but depending on the speed of your computer and the network latencies, charts start
              having a slight time difference. Setting this to <b>Resync</b> will attempt to re-synchronize the charts on every update. Setting it to <b>Best Effort</b> may lower the pressure on your browser and the network.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="sync_selection" type="checkbox" checked="checked" data-toggle="toggle" data-on="Sync" data-off="Don't Sync" data-onstyle="success"
                data-offstyle="danger" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Sync</label><label class="btn btn-danger active toggle-off">Don't Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Sync hover selection on all charts?</strong><br><small>When enabled, a selection on one chart will automatically select the same time on all other visible charts, and the legends of all visible charts will be updated to show the selected values. When disabled, only the chart getting the user's attention will be selected. Enable it to get better insight into the data. Disable it if you are on a very slow computer that cannot keep up.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm3" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="legend_right" type="checkbox" checked="checked" data-toggle="toggle" data-on="Right" data-off="Below" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Right</label><label class="btn btn-default active toggle-off">Below</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Where do you want to see the legend?</strong><br><small>Netdata can place the legend in two positions: <b>Below</b> charts (the default) or to the <b>Right</b> of
              charts.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="netdata_theme_control" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="Dark" data-off="White" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Dark</label><label class="btn btn-danger active toggle-off">White</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which theme to use?</strong><br><small>Netdata comes with two themes: <b>Dark</b> (the default) and <b>White</b>.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="show_help" type="checkbox" checked="checked" data-toggle="toggle" data-on="Help Me" data-off="No Help" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Help Me</label><label class="btn btn-default active toggle-off">No Help</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Do you need help?</strong><br><small>Netdata can show help balloons in some areas of the dashboard. If these balloons bother you, disable them using this
              switch.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="pan_and_zoom_data_padding" type="checkbox" checked="checked" data-toggle="toggle" data-on="Pad" data-off="Don't Pad"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Pad</label><label class="btn btn-default active toggle-off">Don't Pad</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable data padding when panning and zooming?</strong><br><small>When set to <b>Pad</b> the charts will be padded with more data, both before and after the visible area, thus giving the impression the whole
              database is loaded. This padding will happen only after the first pan or zoom operation on the chart (initially all charts have only the visible data). When set to <b>Don't Pad</b> only the visible data will be transferred from the
              netdata server, even after the first pan and zoom operation.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="smooth_plot" type="checkbox" checked="checked" data-toggle="toggle" data-on="Smooth" data-off="Rough" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Smooth</label><label class="btn btn-default active toggle-off">Rough</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable Bézier lines on charts?</strong><br><small>When set to <b>Smooth</b>, the chart libraries that support it will plot smooth curves instead of simple straight lines to connect the points.<br>Keep in mind <a href="http://dygraphs.com" target="_blank">dygraphs</a>, the main charting library in netdata dashboards, can only smooth line charts. It cannot smooth area or stacked charts. Setting this to <b>Rough</b> can lower the CPU resources consumed by your browser.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm4" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td colspan="2" align="center"><small><b>These settings are applied gradually, as charts are updated. To force them, refresh the dashboard now</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 38px;"><input id="units_conversion" type="checkbox" checked="checked" data-toggle="toggle" data-on="Scale Units" data-off="Fixed Units"
                data-onstyle="success" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Scale Units</label><label class="btn btn-default active toggle-off">Fixed Units</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable auto-scaling of select units?</strong><br><small>When set to <b>Scale Units</b> the values shown will dynamically be scaled (e.g. 1000 kilobits will be shown as 1 megabit). Netdata can auto-scale these
              original units: <code>kilobits/s</code>, <code>kilobytes/s</code>, <code>KB/s</code>, <code>KB</code>, <code>MB</code>, and <code>GB</code>. When set to <b>Fixed Units</b> all the values will be rendered using the original units
              maintained by the netdata server.</small></td>
        </tr>
        <tr id="settingsLocaleTempRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="units_temp" type="checkbox" checked="checked" data-toggle="toggle" data-on="Celsius" data-off="Fahrenheit" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Celsius</label><label class="btn btn-default active toggle-off">Fahrenheit</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which units to use for temperatures?</strong><br><small>Set the temperature units of the dashboard.</small></td>
        </tr>
        <tr id="settingsLocaleTimeRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="seconds_as_time" type="checkbox" checked="checked" data-toggle="toggle" data-on="Time" data-off="Seconds" data-onstyle="success"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Time</label><label class="btn btn-default active toggle-off">Seconds</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Convert seconds to time?</strong><br><small>When set to <b>Time</b>, charts that present <code>seconds</code> will show <code>DDd:HH:MM:SS</code>. When set to <b>Seconds</b>, the raw number of seconds will be
              presented.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>
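The seconds-to-time conversion described above ("charts that present seconds will show DDd:HH:MM:SS") can be sketched in Python. This is an illustrative helper, not the dashboard's actual code; the function name `seconds_as_time` is my own.

```python
def seconds_as_time(total_seconds: int) -> str:
    """Format a raw number of seconds as DDd:HH:MM:SS."""
    days, rem = divmod(total_seconds, 86_400)      # 86,400 seconds per day
    hours, rem = divmod(rem, 3_600)
    minutes, seconds = divmod(rem, 60)
    return f"{days:02d}d:{hours:02d}:{minutes:02d}:{seconds:02d}"

# 93,784 s = 1 day, 2 hours, 3 minutes, 4 seconds
print(seconds_as_time(93_784))  # 01d:02:03:04
```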


<form action="#"><input class="form-control" id="switchRegistryPersonGUID" placeholder="your personal ID" maxlength="36" autocomplete="off" style="text-align:center;font-size:1.4em"></form>

Text Content

netdata

Real-time performance monitoring, done right!
VISITED NODES: galahad (https://monitoring.p3x.de/), UTC +2 — 27.05.24, 12:58-13:05 (last 7 min)
NETDATA

REAL-TIME PERFORMANCE MONITORING, IN THE GREATEST POSSIBLE DETAIL

Drag charts to pan. Shift + wheel on them to zoom in and out. Double-click on
them to reset. Hover on them, too!
system.cpu



SYSTEM OVERVIEW

Overview of the key system metrics.
[Gauges: Disk Read 0.2 MiB/s · Disk Write 0.1 MiB/s · CPU 13.6% (0-100) · Net
Inbound 0.3 megabits/s · Net Outbound 8.1 megabits/s · Used RAM 70.1%]


CPU


Total CPU utilization (all cores). 100% here means there is no CPU idle time at
all. You can get per core usage at the CPUs section and per application usage at
the Applications Monitoring section.
Keep an eye on iowait (0.3%). If it is constantly high, your disks are a
bottleneck and they slow your system down.
Another metric worth monitoring is softirq (0.17%). A constantly high
percentage of softirq may indicate network driver issues. The individual
metrics can be found in the kernel documentation.
[Chart: Total CPU utilization (system.cpu), percentage, 12:59-13:05. Values at
13:05:40, Mon 27 May 2024: guest_nice 0.0, guest 0.0, steal 0.0, softirq 0.5,
irq 0.0, user 4.0, system 2.5, nice 6.5, iowait 0.0]


CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
[Chart: CPU some pressure (system.cpu_some_pressure), percentage. Values at
13:05:40, Mon 27 May 2024: some 10 → 2.4, some 60 → 22.9, some 300 → 25.7]


The amount of time some processes have been waiting for CPU time.
[Chart: CPU some pressure stall time (system.cpu_some_pressure_stall_time),
ms. Value at 13:05:40, Mon 27 May 2024: time 19.1]




LOAD


Current system load, i.e. the number of processes using CPU or waiting for
system resources (usually CPU and disk). The 3 metrics refer to 1-, 5- and
15-minute averages. The system calculates this once every 5 seconds. For more
information, see the Wikipedia article on load (computing).
system.load
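On Linux, the three load averages described above are exposed in /proc/loadavg. A minimal parsing sketch (the helper name `parse_loadavg` is my own):

```python
def parse_loadavg(text: str) -> tuple[float, float, float]:
    """Parse the 1-, 5- and 15-minute load averages from /proc/loadavg.
    The file also carries running/total task counts and the last PID,
    which are ignored here."""
    one, five, fifteen = text.split()[:3]
    return float(one), float(five), float(fifteen)

# On a live system: parse_loadavg(open("/proc/loadavg").read())
sample = "0.42 0.35 0.30 2/512 12345\n"
print(parse_loadavg(sample))  # (0.42, 0.35, 0.3)
```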



DISK


Total Disk I/O, for all physical disks. You can get detailed information about
each disk at the Disks section and per-application disk usage at the
Applications Monitoring section. Physical disks are all the disks listed in
/sys/block that do not also appear in /sys/devices/virtual/block.
system.io
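The physical-disk filtering described above (listed in /sys/block but not in /sys/devices/virtual/block) can be sketched as a set difference. An illustrative helper, not netdata's implementation:

```python
import os

def physical_disks(sys_block: str = "/sys/block",
                   virtual: str = "/sys/devices/virtual/block") -> list[str]:
    """List block devices that are physical: present in sys_block
    but absent from the virtual-block directory."""
    virtual_devs = set(os.listdir(virtual)) if os.path.isdir(virtual) else set()
    return sorted(d for d in os.listdir(sys_block) if d not in virtual_devs)
```

On a typical system this keeps e.g. `sda` and `nvme0n1` while dropping `loop0` or `dm-0`, which live under the virtual tree.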

Memory paged from/to disk. This is usually the total disk I/O of the system.
system.pgpgio

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
system.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
system.io_full_pressure_stall_time



RAM


System Random Access Memory (i.e. physical memory) usage.
system.ram

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_some_pressure

The amount of time some processes have been waiting due to memory congestion.
system.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
system.memory_full_pressure_stall_time



NETWORK


Total bandwidth of all physical network interfaces. This does not include lo,
VPNs, network bridges, IFB devices, bond interfaces, etc. Only the bandwidth of
physical network interfaces is aggregated. Physical interfaces are all the
network interfaces listed in /proc/net/dev that do not also appear in
/sys/devices/virtual/net.
system.net

Total IP traffic in the system.
system.ip

Total IPv6 Traffic.
system.ipv6



PROCESSES



System processes.

Running - running or ready to run (runnable). Blocked - currently blocked,
waiting for I/O to complete.

system.processes


The number of processes in different states.

Running - Process using the CPU at a particular moment. Sleeping
(uninterruptible) - Process will wake when a waited-upon resource becomes
available or after a time-out occurs during that wait. Mostly used by device
drivers waiting for disk or network I/O. Sleeping (interruptible) - Process is
waiting either for a particular time slot or for a particular event to occur.
Zombie - Process that has completed its execution, released the system
resources, but its entry is not removed from the process table. Usually occurs
in child processes when the parent process still needs to read its child’s exit
status. A process that stays a zombie for a long time is generally an error and
causes a system PID space leak. Stopped - Process is suspended from proceeding
further due to STOP or TSTP signals. In this state, a process will not do
anything (not even terminate) until it receives a CONT signal.

system.processes_state

The number of new processes created.
system.forks

The total number of processes in the system.
system.active_processes

Context switching is the switching of the CPU from one process, task or thread
to another. If many processes or threads are willing to execute and very few
CPU cores are available to handle them, the system performs more context
switches to balance the CPU resources among them. Context switching is
computationally intensive, so the more context switches, the slower the system
gets.
system.ctxt
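On Linux, the cumulative context-switch counter is the `ctxt` line in /proc/stat. A minimal extraction sketch (the helper name `context_switches` is my own):

```python
def context_switches(stat_text: str) -> int:
    """Extract the cumulative context-switch count (the 'ctxt' line)
    from the contents of /proc/stat."""
    for line in stat_text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

# On a live system: context_switches(open("/proc/stat").read())
sample = "cpu  100 0 50 900\nctxt 3430876\nbtime 1716800000\n"
print(context_switches(sample))  # 3430876
```

Since the counter is cumulative, a per-second rate (what the chart shows) is the difference between two successive reads divided by the interval.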



IDLEJITTER


Idle jitter is calculated by netdata. A thread is spawned that requests to sleep
for a few microseconds. When the system wakes it up, it measures how many
microseconds have passed. The difference between the requested and the actual
duration of the sleep is the idle jitter. This number is useful in real-time
environments, where CPU jitter can affect the quality of the service (like VoIP
media gateways).
system.idlejitter
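The measurement described above can be sketched in a few lines. This is an illustration of the technique, not netdata's implementation; the function name and the 100 µs / 50-sample defaults are my own choices:

```python
import time

def measure_idle_jitter(requested_us: int = 100, samples: int = 50) -> float:
    """Sleep for a few microseconds repeatedly and return the average
    difference (in microseconds) between the requested and the actual
    sleep duration - the idle jitter."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested_us / 1_000_000)
        elapsed_us = (time.perf_counter() - start) * 1_000_000
        total += elapsed_us - requested_us
    return total / samples
```

On a loaded or heavily virtualized machine the returned value grows, because the scheduler wakes the thread later than requested.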



INTERRUPTS

Interrupts are signals sent to the CPU by external devices (normally I/O
devices) or programs (running processes). They tell the CPU to stop its current
activities and execute the appropriate part of the operating system. Interrupt
types are hardware (generated by hardware devices to signal that they need some
attention from the OS), software (generated by programs when they want to
request a system call to be performed by the operating system), and traps
(generated by the CPU itself to indicate that some error or condition occurred
for which assistance from the operating system is needed).

Total number of CPU interrupts. Check system.interrupts, which gives more
detail about each interrupt, and also the CPUs section, where interrupts are
analyzed per CPU core.
system.intr

CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU
core. The last column in /proc/interrupts provides an interrupt description or
the device name that registered the handler for that interrupt.
system.interrupts



SOFTIRQS

Software interrupts (or "softirqs") are one of the oldest deferred-execution
mechanisms in the kernel. Several tasks among those executed by the kernel are
not critical: they can be deferred for a long period of time, if necessary. The
deferrable tasks can execute with all interrupts enabled (softirqs are patterned
after hardware interrupts). Taking them out of the interrupt handler helps keep
kernel response time small.


Total number of software interrupts in the system. At the CPUs section, softirqs
are analyzed per CPU core.

HI - high priority tasklets. TIMER - tasklets related to timer interrupts.
NET_TX, NET_RX - used for network transmit and receive processing. BLOCK -
handles block I/O completion events. IRQ_POLL - used by the IO subsystem to
increase performance (a NAPI like approach for block devices). TASKLET - handles
regular tasklets. SCHED - used by the scheduler to perform load-balancing and
other scheduling tasks. HRTIMER - used for high-resolution timers. RCU -
performs read-copy-update (RCU) processing.

system.softirqs



SOFTNET

Statistics for CPU SoftIRQs related to network receive work. A per-CPU-core
breakdown can be found at CPU / softnet statistics. More information about
identifying and troubleshooting network driver related issues can be found in
the Red Hat Enterprise Linux Network Performance Tuning Guide.

Processed - packets processed. Dropped - packets dropped because the network
device backlog was full. Squeezed - number of times the network device budget
was consumed or the time limit was reached, but more work was available.
ReceivedRPS - number of times this CPU has been woken up to process packets via
an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit
has been reached (flow limiting is an optional Receive Packet Steering feature).


system.softnet_stat
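These counters come from /proc/net/softnet_stat: one row of hexadecimal columns per CPU, where (by kernel convention) column 1 is packets processed, column 2 packets dropped, and column 3 the squeezed count. A parsing sketch, with my own helper name and dict keys:

```python
def parse_softnet_stat(text: str) -> list[dict]:
    """Parse /proc/net/softnet_stat: one row of hex counters per CPU.
    Column 1 = processed, column 2 = dropped, column 3 = squeezed."""
    rows = []
    for line in text.splitlines():
        cols = [int(v, 16) for v in line.split()]
        rows.append({"processed": cols[0],
                     "dropped": cols[1],
                     "squeezed": cols[2]})
    return rows

sample = ("0000272d 00000000 00000001 00000000 00000000 00000000 "
          "00000000 00000000 00000000 00000000 00000000\n")
print(parse_softnet_stat(sample))
```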



ENTROPY


Entropy is a pool of random numbers (/dev/random) that is mainly used in
cryptography. If the pool of entropy gets empty, processes requiring random
numbers may run a lot slower (it depends on the interface each program uses),
waiting for the pool to be replenished. Ideally a system with high entropy
demands should have a hardware device for that purpose (a TPM is one such
device). There are also several software-only options you may install, like
haveged, although these are generally useful only in servers.
system.entropy
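The kernel's entropy estimate can be read from /proc/sys/kernel/random/entropy_avail. A minimal sketch; the 200-bit "low" threshold below is an illustrative choice of mine, not a kernel-defined limit:

```python
def read_entropy(path: str = "/proc/sys/kernel/random/entropy_avail") -> int:
    """Read the kernel's estimate of available entropy, in bits."""
    with open(path) as f:
        return int(f.read().strip())

def entropy_is_low(bits: int, threshold: int = 200) -> bool:
    """Heuristic check against an illustrative threshold."""
    return bits < threshold
```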



FILES


system.file_nr_used

system.file_nr_utilization



UPTIME


The amount of time the system has been running, including time spent in suspend.
system.uptime



CLOCK SYNCHRONIZATION

NTP lets you automatically sync your system time with a remote server. This
keeps your machine's time accurate by syncing with servers that are known to
have accurate time.


The system clock synchronization state as provided by the ntp_adjtime() system
call. An unsynchronized clock may be the result of synchronization issues in
the NTP daemon or of a hardware clock fault. It can take several minutes
(usually up to 17) before the NTP daemon selects a server to synchronize with.

State map: 0 - not synchronized, 1 - synchronized.

system.clock_sync_state


The kernel code can operate in various modes and with various features enabled
or disabled, as selected by the ntp_adjtime() system call. The system clock
status shows the value of the time_status variable in the kernel. The bits of
the variable are used to control these functions and record error conditions as
they exist.

UNSYNC - set/cleared by the caller to indicate clock unsynchronized (e.g., when
no peers are reachable). This flag is usually controlled by an application
program, but the operating system may also set it. CLOCKERR - set/cleared by the
external hardware clock driver to indicate hardware fault.

Status map: 0 - bit unset, 1 - bit set.

system.clock_status
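The UNSYNC and CLOCKERR bits above live in the kernel's time_status variable; their masks are defined in <sys/timex.h> (STA_UNSYNC = 0x0040, STA_CLOCKERR = 0x1000). A decoding sketch matching the chart's 0/1 status map:

```python
# Bit masks from <sys/timex.h>.
STA_UNSYNC = 0x0040    # clock unsynchronized
STA_CLOCKERR = 0x1000  # hardware clock fault reported by the driver

def decode_clock_status(time_status: int) -> dict:
    """Decode the UNSYNC and CLOCKERR bits of the kernel time_status
    variable into the 0/1 map used by the chart."""
    return {
        "unsync": 1 if time_status & STA_UNSYNC else 0,
        "clockerr": 1 if time_status & STA_CLOCKERR else 0,
    }
```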

A typical NTP client regularly polls one or more NTP servers. The client must
compute its time offset and round-trip delay. Time offset is the difference in
absolute time between the two clocks.
system.clock_sync_offset



IPC SEMAPHORES

System V semaphores are an inter-process communication (IPC) mechanism that
allows processes, or threads within a process, to synchronize their actions.
They are often used to monitor and control the availability of system resources
such as shared memory segments. For details, see svipc(7). To see the host IPC
semaphore information, run ipcs -us. For limits, run ipcs -ls.

Number of allocated System V IPC semaphores. The system-wide limit on the
number of semaphores in all semaphore sets is specified in the
/proc/sys/kernel/sem file (2nd field).
system.ipc_semaphores

Number of used System V IPC semaphore arrays (sets). Semaphores are allocated
in sets, where each member of a set is a counting semaphore; when an
application requests semaphores, the kernel provides them in sets. The
system-wide limit on the maximum number of semaphore sets is specified in the
/proc/sys/kernel/sem file (4th field).
system.ipc_semaphore_arrays
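The four fields of /proc/sys/kernel/sem referenced above are, in order, SEMMSL, SEMMNS, SEMOPM and SEMMNI; the 2nd and 4th are the limits the two charts compare against. A parsing sketch (helper name and sample values are mine):

```python
def parse_sem_limits(text: str) -> dict:
    """Parse /proc/sys/kernel/sem: SEMMSL SEMMNS SEMOPM SEMMNI.
    SEMMNS (2nd field) is the system-wide semaphore limit;
    SEMMNI (4th field) is the maximum number of semaphore sets."""
    semmsl, semmns, semopm, semmni = (int(v) for v in text.split())
    return {"SEMMSL": semmsl, "SEMMNS": semmns,
            "SEMOPM": semopm, "SEMMNI": semmni}

# On a live system: parse_sem_limits(open("/proc/sys/kernel/sem").read())
sample = "32000\t1024000000\t500\t32000\n"
print(parse_sem_limits(sample))
```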



IPC SHARED MEMORY

System V shared memory is an inter-process communication (IPC) mechanism. It
allows processes to communicate information by sharing a region of memory. It is
the fastest form of inter-process communication available since no kernel
involvement occurs when data is passed between the processes (no copying).
Typically, processes must synchronize their access to a shared memory object,
using, for example, POSIX semaphores. For details, see svipc(7). To see the host
IPC shared memory information, run ipcs -um. For limits, run ipcs -lm.

Number of allocated System V IPC memory segments. The system-wide maximum number
of shared memory segments that can be created is specified in
/proc/sys/kernel/shmmni file.
system.shared_memory_segments

Amount of memory currently used by System V IPC memory segments. The run-time
limit on the maximum shared memory segment size that can be created is specified
in /proc/sys/kernel/shmmax file.
system.shared_memory_bytes


--------------------------------------------------------------------------------


CPUS

Detailed information for each CPU of the system. A summary for all CPUs can be
found in the System Overview section.



UTILIZATION


cpu.cpu0

cpu.cpu1



INTERRUPTS

Total number of interrupts per CPU. To see the total number for the system,
check the interrupts section. The last column in /proc/interrupts provides an
interrupt description or the device name that registered the handler for that
interrupt.
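A quick way to see that description column, assuming a Linux host:

```shell
# Print the interrupt label (first column) and its description/device name
# (last column) for the first few rows of /proc/interrupts
awk 'NR>1 && NR<=6 {print $1, $NF}' /proc/interrupts
```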

cpu.cpu0_interrupts

cpu.cpu1_interrupts



SOFTIRQS

Total number of software interrupts per CPU. To see the total number for the
system, check the softirqs section.

cpu.cpu0_softirqs

cpu.cpu1_softirqs



SOFTNET

Statistics for per-CPU SoftIRQs related to network receive work. Totals for all
CPU cores can be found under System / softnet statistics. More information about
identifying and troubleshooting network driver-related issues can be found in
the Red Hat Enterprise Linux Network Performance Tuning Guide.

Processed - packets processed. Dropped - packets dropped because the network
device backlog was full. Squeezed - number of times the network device budget
was consumed or the time limit was reached, but more work was available.
ReceivedRPS - number of times this CPU has been woken up to process packets via
an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit
has been reached (flow limiting is an optional Receive Packet Steering feature).


cpu.cpu0_softnet_stat

cpu.cpu1_softnet_stat


--------------------------------------------------------------------------------


MEMORY

Detailed information about the memory management of the system.



OVERVIEW


Available Memory is the kernel's estimate of the amount of RAM that can be used
by userspace processes without causing swapping.
mem.available

Committed Memory is the sum of all memory which has been allocated by
processes.
mem.committed
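Both values come from /proc/meminfo and can be checked directly, assuming a
Linux host:

```shell
# The kernel's own estimates behind these two charts:
# MemAvailable = usable RAM without swapping, Committed_AS = total committed
grep -E '^(MemAvailable|Committed_AS):' /proc/meminfo
```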

mem.directmaps



OOM KILLS


The number of processes killed by Out of Memory Killer. The kernel's OOM killer
is summoned when the system runs short of free memory and is unable to proceed
without killing one or more processes. It tries to pick the process whose demise
will free the most memory while causing the least misery for users of the
system. This counter also includes processes within containers that have
exceeded the memory limit.
mem.oom_kill
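The cumulative counter behind this chart can be read from /proc/vmstat on
sufficiently recent kernels; a sketch, assuming a Linux host:

```shell
# Cumulative OOM-kill counter (exposed in /proc/vmstat on kernels >= 4.13)
grep '^oom_kill ' /proc/vmstat || echo "oom_kill counter not exposed by this kernel"
```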



SWAP


System swap memory usage. Swap space is used when physical memory (RAM) is
full. When the system needs more memory resources and RAM is full, inactive
pages in memory are moved to the swap space (usually a disk, a disk partition,
or a file).
mem.swap

mem.swap_cached


System swap I/O.

In - pages the system has swapped in from disk to RAM. Out - pages the system
has swapped out from RAM to disk.
mem.swapio
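The raw counters behind this chart are cumulative page counts in /proc/vmstat;
a quick check, assuming a Linux host:

```shell
# Cumulative pages swapped in (pswpin) and out (pswpout) since boot
grep -E '^(pswpin|pswpout) ' /proc/vmstat
```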



ZSWAP


mem.zswap



PAGE FAULTS



A page fault is a type of interrupt, called a trap, raised by computer hardware
when a running program accesses a memory page that is mapped into the virtual
address space but not actually loaded into main memory.



Minor - the page is loaded in memory at the time the fault is generated, but is
not marked in the memory management unit as being loaded in memory. Major -
generated when the system needs to load the memory page from disk or swap
memory.



mem.pgfaults
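Per-process fault counters are also available in /proc/PID/stat; a sketch,
assuming a Linux host (field positions as documented in proc(5)):

```shell
# Fields 10 (minflt) and 12 (majflt) of /proc/PID/stat are the per-process
# minor and major fault counters; here we inspect the current shell's own
awk '{print "minor faults:", $10, " major faults:", $12}' /proc/self/stat
```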



WRITEBACK


Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
mem.writeback



KERNEL



The total amount of memory being used by the kernel.

Slab - used by the kernel to cache data structures for its own use. KernelStack
- allocated for each task done by the kernel. PageTables - dedicated to the
lowest level of page tables (A page table is used to turn a virtual address into
a physical memory address). VmallocUsed - being used as virtual address space.
Percpu - allocated to the per-CPU allocator used to back per-CPU allocations
(excludes the cost of metadata). When you create a per-CPU variable, each
processor on the system gets its own copy of that variable.

mem.kernel



SLAB



Slab memory statistics.



Reclaimable - amount of memory which the kernel can reuse. Unreclaimable -
memory that cannot be reused even when the kernel is short of memory.

mem.slab



RECLAIMING


mem.reclaiming



CMA


mem.cma



HUGEPAGES

Hugepages is a feature that allows the kernel to utilize the multiple page size
capabilities of modern hardware architectures. The kernel creates multiple pages
of virtual memory, mapped from both physical RAM and swap. There is a mechanism
in the CPU architecture called "Translation Lookaside Buffers" (TLB) to manage
the mapping of virtual memory pages to actual physical memory addresses. The TLB
is a limited hardware resource, so utilizing a large amount of physical memory
with the default page size consumes the TLB and adds processing overhead. By
utilizing Huge Pages, the kernel is able to create pages of much larger sizes,
each page consuming a single resource in the TLB. Huge Pages are pinned to
physical RAM and cannot be swapped/paged out.
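The current huge page pool configuration and usage can be inspected in
/proc/meminfo, assuming a Linux host:

```shell
# Huge page pool size, free/reserved pages, and the configured page size
grep '^Huge' /proc/meminfo
```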

mem.thp_compact



BALLOON


mem.balloon


--------------------------------------------------------------------------------


DISKS

Charts with performance information for all the system disks. Special care has
been given to present disk performance metrics in a way compatible with iostat
-x. By default, Netdata does not render performance charts for individual
partitions and unmounted virtual disks. These disabled charts can still be
enabled by configuring the relevant settings in the Netdata configuration file.



SDA

disk.sda

disk.sda

disk_util.sda

The amount of data transferred to and from disk.
disk.sda

The amount of discarded data that are no longer in use by a mounted file system.
disk_ext.sda

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge operations that are adjacent
to each other (see the merged operations chart).
disk_ops.sda


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sda

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sda

Backlog is an indication of the duration of pending disk operations. On every
I/O event, the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sda

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sda

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sda

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sda

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sda

The average I/O operation size.
disk_avgsz.sda

The average discard operation size.
disk_ext_avgsz.sda

The average service time for completed I/O operations. This metric is calculated
using the total busy time of the disk and the number of completed operations. If
the disk is able to execute multiple parallel operations, the reported average
service time will be misleading.
disk_svctm.sda

The number of merged disk operations. The system is able to merge adjacent I/O
operations, for example two 4KB reads can become one 8KB read before being
issued to the disk.
disk_mops.sda

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sda

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sda

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sda



/


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._
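Inode exhaustion can be spotted from the command line as well, assuming a Linux
host with coreutils:

```shell
# Inode usage for the root filesystem; IUse% near 100% means new files cannot
# be created even if 'df -h' still shows free space
df -i /
```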



/BOOT


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._boot



/DEV


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev



/DEV/SHM


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev_shm

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev_shm



/RUN


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._run

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._run



/RUN/WRAPPERS


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._run_wrappers

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._run_wrappers


--------------------------------------------------------------------------------


NETWORKING STACK

Metrics for the networking stack of the system. These metrics are collected from
/proc/net/netstat or attaching kprobes to kernel functions, apply to both IPv4
and IPv6 traffic and are related to operation of the kernel networking stack.



TCP


ip.tcppackets

ip.tcperrors

ip.tcpopens

ip.tcpsock

ip.tcphandshake


TCP connection aborts.

BadData - happens while the connection is on FIN_WAIT1 and the kernel receives a
packet with a sequence number beyond the last one for this connection - the
kernel responds with RST (closes the connection). UserClosed - happens when the
kernel receives data on an already closed connection and responds with RST.
NoMemory - happens when there are too many orphaned sockets (not attached to an
fd) and the kernel has to drop a connection - sometimes it will send an RST,
sometimes it won't. Timeout - happens when a connection times out. Linger -
happens when the kernel killed a socket that was already closed by the
application and lingered around for long enough. Failed - happens when the
kernel attempted to send an RST but failed because there was no memory
available.

ip.tcpconnaborts


The SYN queue of the kernel tracks TCP handshakes until connections get fully
established. It overflows when too many incoming TCP connection requests hang in
the half-open state and the server is not configured to fall back to SYN
cookies. Overflows are usually caused by SYN flood DoS attacks.

Drops - number of connections dropped because the SYN queue was full and SYN
cookies were disabled. Cookies - number of SYN cookies sent because the SYN
queue was full.

ip.tcp_syn_queue
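Whether SYN cookies are enabled, and the SYN queue bound, are both plain sysctl
files; a quick check, assuming a Linux host:

```shell
# 1 means SYN cookies are sent when the SYN queue overflows;
# tcp_max_syn_backlog bounds the number of half-open connections
cat /proc/sys/net/ipv4/tcp_syncookies
cat /proc/sys/net/ipv4/tcp_max_syn_backlog
```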


The accept queue of the kernel holds the fully established TCP connections,
waiting to be handled by the listening application.

Overflows - the number of established connections that could not be handled
because the receive queue of the listening application was full. Drops - number
of incoming connections that could not be handled, including SYN floods,
overflows, out of memory, security issues, no route to destination, reception of
related ICMP messages, socket is broadcast or multicast.



ip.tcp_accept_queue


TCP prevents out-of-order packets by either sequencing them in the correct order
or by requesting the retransmission of out-of-order packets.

Timestamp - detected re-ordering using the timestamp option. SACK - detected
re-ordering using Selective Acknowledgment algorithm. FACK - detected
re-ordering using Forward Acknowledgment algorithm. Reno - detected re-ordering
using Fast Retransmit algorithm.

ip.tcpreorders


TCP maintains an out-of-order queue to keep the out-of-order packets in the TCP
communication.

InQueue - the TCP layer receives an out-of-order packet and has enough memory to
queue it. Dropped - the TCP layer receives an out-of-order packet but does not
have enough memory, so drops it. Merged - the received out-of-order packet has
an overlay with the previous packet. The overlay part will be dropped. All these
packets will also be counted into InQueue. Pruned - packets dropped from
out-of-order queue because of socket buffer overrun.

ip.tcpofo


SYN cookies are used to mitigate SYN flood.

Received - after sending a SYN cookie, it came back to us and passed the check.
Sent - an application was not able to accept a connection fast enough, so the
kernel could not store an entry in the queue for this connection. Instead of
dropping it, it sent a SYN cookie to the client. Failed - the MSS decoded from
the SYN cookie is invalid. When this counter is incremented, the received packet
won’t be treated as a SYN cookie.

ip.tcpsyncookies



SOCKETS


ip.sockstat_sockets


--------------------------------------------------------------------------------


IPV4 NETWORKING

Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4) is
the fourth version of the Internet Protocol (IP). It is one of the core
protocols of standards-based internetworking methods in the Internet. IPv4 is a
connectionless protocol for use on packet-switched networks. It operates on a
best effort delivery model, in that it does not guarantee delivery, nor does it
assure proper sequencing or avoidance of duplicate delivery. These aspects,
including data integrity, are addressed by an upper layer transport protocol,
such as the Transmission Control Protocol (TCP).



PACKETS



IPv4 packets statistics for this host.

Received - packets received by the IP layer. This counter will be increased even
if the packet is dropped later. Sent - packets sent via the IP layer, for both
unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as IP
Gateways, this counter will include only those packets which were Source-Routed
and the Source-Route option processing was successful. Delivered - packets
delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so on.

ipv4.packets



ERRORS



The number of discarded IPv4 packets.

InDiscards, OutDiscards - inbound and outbound packets which were chosen to be
discarded even though no errors had been detected to prevent their being
deliverable to a higher-layer protocol. InHdrErrors - input packets that have
been discarded due to errors in their IP headers, including bad checksums,
version number mismatch, other format errors, time-to-live exceeded, errors
discovered in processing their IP options, etc. OutNoRoutes - packets that have
been discarded because no route could be found to transmit them to their
destination. This includes any packets which a host cannot route because all of
its default gateways are down. InAddrErrors - input packets that have been
discarded due to invalid IP address or the destination IP address is not a local
address and IP forwarding is not enabled. InUnknownProtos - input packets which
were discarded because of an unknown or unsupported protocol.

ipv4.errors



BROADCAST


ipv4.bcast

ipv4.bcastpkts



MULTICAST


ipv4.mcast

ipv4.mcastpkts



TCP



The number of TCP sockets in the system in certain states.

Alloc - in any TCP state. Orphan - no longer attached to a socket descriptor in
any user processes, but for which the kernel is still required to maintain state
in order to complete the transport protocol. InUse - in any TCP state, excluding
TIME-WAIT and CLOSED. TimeWait - in the TIME-WAIT state.

ipv4.sockstat_tcp_sockets
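The raw counters behind this chart live in /proc/net/sockstat; a quick check,
assuming a Linux host:

```shell
# One line per protocol; the TCP line reports inuse, orphan, tw (TIME-WAIT),
# alloc, and mem (pages of socket memory)
grep '^TCP:' /proc/net/sockstat
```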

The amount of memory used by allocated TCP sockets.
ipv4.sockstat_tcp_mem



ICMP



The number of transferred IPv4 ICMP messages.

Received, Sent - ICMP messages which the host received and attempted to send.
Both these counters include errors.

ipv4.icmp

The number of transferred IPv4 ICMP control messages.
ipv4.icmpmsg


The number of IPv4 ICMP errors.

InErrors - received ICMP messages but determined as having ICMP-specific errors,
e.g. bad ICMP checksums, bad length, etc. OutErrors - ICMP messages which this
host did not send due to problems discovered within ICMP such as a lack of
buffers. This counter does not include errors discovered outside the ICMP layer
such as the inability of IP to route the resultant datagram. InCsumErrors -
received ICMP messages with bad checksum.

ipv4.icmp_errors



UDP


The number of transferred UDP packets.
ipv4.udppackets


The number of errors encountered during transferring UDP packets.

RcvbufErrors - receive buffer is full. SndbufErrors - send buffer is full, no
kernel memory available, or the IP layer reported an error when trying to send
the packet and no error queue has been set up. InErrors - an aggregated counter
for all errors, excluding NoPorts. NoPorts - no application is listening
at the destination port. InCsumErrors - a UDP checksum failure is detected.
IgnoredMulti - ignored multicast packets.
ipv4.udperrors

The number of used UDP sockets.
ipv4.sockstat_udp_sockets

The amount of memory used by allocated UDP sockets.
ipv4.sockstat_udp_mem



ECN


ipv4.ecnpkts



FRAGMENTS



IPv4 reassembly statistics for this system.

OK - packets that have been successfully reassembled. Failed - failures detected
by the IP reassembly algorithm. This is not necessarily a count of discarded IP
fragments since some algorithms can lose track of the number of fragments by
combining them as they are received. All - received IP fragments which needed to
be reassembled.

ipv4.fragsin


IPv4 fragmentation statistics for this system.

OK - packets that have been successfully fragmented. Failed - packets that have
been discarded because they needed to be fragmented but could not be, e.g.
because the Don't Fragment (DF) flag was set. Created - fragments that have been
generated as a result of fragmentation.

ipv4.fragsout


--------------------------------------------------------------------------------


IPV6 NETWORKING

Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6) is
the most recent version of the Internet Protocol (IP), the communications
protocol that provides an identification and location system for computers on
networks and routes traffic across the Internet. IPv6 was developed by the
Internet Engineering Task Force (IETF) to deal with the long-anticipated problem
of IPv4 address exhaustion. IPv6 is intended to replace IPv4.



PACKETS



IPv6 packet statistics for this host.

Received - packets received by the IP layer. This counter will be increased even
if the packet is dropped later. Sent - packets sent via the IP layer, for both
unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as IP
Gateways, this counter will include only those packets which were Source-Routed
and the Source-Route option processing was successful. Delivers - packets
delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so on.

ipv6.packets


Total number of received IPv6 packets with ECN bits set in the system.

CEP - congestion encountered. NoECTP - non ECN-capable transport. ECTP0 and
ECTP1 - ECN capable transport.

ipv6.ect



ERRORS



The number of discarded IPv6 packets.

InDiscards, OutDiscards - packets which were chosen to be discarded even though
no errors had been detected to prevent their being deliverable to a higher-layer
protocol. InHdrErrors - errors in IP headers, including bad checksums, version
number mismatch, other format errors, time-to-live exceeded, etc. InAddrErrors -
invalid IP address or the destination IP address is not a local address and IP
forwarding is not enabled. InUnknownProtos - unknown or unsupported protocol.
InTooBigErrors - the size exceeded the link MTU. InTruncatedPkts - packet frame
did not carry enough data. InNoRoutes - no route could be found while
forwarding. OutNoRoutes - no route could be found for packets generated by this
host.

ipv6.errors



MULTICAST6


Total IPv6 multicast traffic.
ipv6.mcast

Total transferred IPv6 multicast packets.
ipv6.mcastpkts



TCP6


The number of TCP sockets in any state, excluding TIME-WAIT and CLOSED.
ipv6.sockstat6_tcp_sockets



ICMP6



The number of transferred ICMPv6 messages.

Received, Sent - ICMP messages which the host received and attempted to send.
Both these counters include errors.

ipv6.icmp


The number of ICMPv6 errors and error messages.

InErrors, OutErrors - bad ICMP messages (bad ICMP checksums, bad length, etc.).
InCsumErrors - wrong checksum.

ipv6.icmperrors

The number of ICMPv6 Echo messages.
ipv6.icmpechos


The number of transferred ICMPv6 Group Membership messages.

Multicast routers send Group Membership Query messages to learn which groups
have members on each of their attached physical networks. Host computers respond
by sending a Group Membership Report for each multicast group joined by the
host. A host computer can also send a Group Membership Report when it joins a
new multicast group. Group Membership Reduction messages are sent when a host
computer leaves a multicast group.

ipv6.groupmemb


The number of transferred ICMPv6 Router Discovery messages.

A Router Solicitation message is sent from a computer host to any routers on the
local area network to request that they advertise their presence on the network.
A Router Advertisement message is sent by a router on the local area network to
announce its IP address as available for routing.

ipv6.icmprouter


The number of transferred ICMPv6 Neighbour Discovery messages.

Neighbor Solicitations are used by nodes to determine the link layer address of
a neighbor, or to verify that a neighbor is still reachable via a cached link
layer address. Neighbor Advertisements are used by nodes to respond to a
Neighbor Solicitation message.

ipv6.icmpneighbor

The number of transferred ICMPv6 Multicast Listener Discovery (MLD) messages.
ipv6.icmpmldv2

The number of transferred ICMPv6 messages of certain types.
ipv6.icmptypes



UDP6


The number of transferred UDP packets.
ipv6.udppackets


The number of errors encountered during transferring UDP packets.

RcvbufErrors - receive buffer is full. SndbufErrors - send buffer is full, no
kernel memory available, or the IP layer reported an error when trying to send
the packet and no error queue has been set up. InErrors - an aggregated counter
for all errors, excluding NoPorts. NoPorts - no application is listening
at the destination port. InCsumErrors - a UDP checksum failure is detected.
IgnoredMulti - ignored multicast packets.
ipv6.udperrors

The number of used UDP sockets.
ipv6.sockstat6_udp_sockets



FRAGMENTS6



IPv6 fragmentation statistics for this system.

OK - packets that have been successfully fragmented. Failed - packets that have
been discarded because they needed to be fragmented but could not be, e.g.
because the Don't Fragment (DF) flag was set. All - fragments that have been
generated as a result of fragmentation.

ipv6.fragsout



RAW6


The number of used raw sockets.
ipv6.sockstat6_raw_sockets


--------------------------------------------------------------------------------


NETWORK INTERFACES

Performance metrics for network interfaces.

Netdata retrieves this data by reading the /proc/net/dev file and the
/sys/class/net/ directory.
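The per-interface attributes shown in the charts below (MTU, operational state,
carrier) are single-value sysfs files; a sketch, assuming a Linux host and using
the always-present loopback interface "lo" as an example:

```shell
# Read a few per-interface attributes straight from sysfs
for attr in mtu operstate carrier; do
  printf '%-10s %s\n' "$attr:" "$(cat /sys/class/net/lo/$attr 2>/dev/null)"
done
```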




ENS3

net.ens3

net.ens3

The amount of traffic transferred by the network interface.
net.ens3

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.ens3

The interface's latest or current speed, as negotiated by the network adapter
with the device it is connected to. This is not the maximum speed supported by
the NIC.
net_speed.ens3


The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode can not be determined. Half duplex - the communication
is one direction at a time. Full duplex - the interface is able to send and
receive data simultaneously.

net_duplex.ens3


The current operational state of the interface.

Unknown - the state can not be determined. NotPresent - the interface has
missing (typically, hardware) components. Down - the interface is unable to
transfer data on L1, e.g. ethernet is not plugged or interface is
administratively down. LowerLayerDown - the interface is down due to state of
lower-layer interface(s). Testing - the interface is in testing mode, e.g. cable
test. It can’t be used for normal traffic until tests complete. Dormant - the
interface is L1 up, but waiting for an external event, e.g. for a protocol to
establish. Up - the interface is ready to pass packets and can be used.

net_operstate.ens3

The current physical link state of the interface.
net_carrier.ens3

The interface's currently configured Maximum transmission unit (MTU) value. MTU
is the size of the largest protocol data unit that can be communicated in a
single network layer transaction.
net_mtu.ens3



PODMAN0

net.podman0

net.podman0

The amount of traffic transferred by the network interface.
net.podman0

The number of packets transferred by the network interface. Received multicast
counter is commonly calculated at the device level (unlike received) and
therefore may include packets which did not reach the host.
net_packets.podman0


The current operational state of the interface.


net_operstate.podman0

The current physical link state of the interface.
net_carrier.podman0

The interface's currently configured Maximum Transmission Unit (MTU) value.
net_mtu.podman0



VETH0

net.veth0

net.veth0

The amount of traffic transferred by the network interface.
net.veth0

The number of packets transferred by the network interface.
net_packets.veth0


The current operational state of the interface.


net_operstate.veth0

The current physical link state of the interface.
net_carrier.veth0

The interface's currently configured Maximum Transmission Unit (MTU) value.
net_mtu.veth0



WG0

net.wg0

net.wg0

The amount of traffic transferred by the network interface.
net.wg0

The number of packets transferred by the network interface.
net_packets.wg0


The number of errors encountered by the network interface.

Inbound - bad packets received on this interface, including packets dropped
due to invalid length, CRC, frame alignment, and other errors. Outbound -
transmit problems, including frame transmission errors due to loss of
carrier, FIFO underrun/underflow, heartbeat, late collisions, and other
problems.

net_errors.wg0


The number of errors encountered by the network interface.

Frames - an aggregated counter of packets dropped due to invalid length,
FIFO overflow, CRC, and frame-alignment errors. Collisions - collisions
during packet transmission. Carrier - an aggregated counter of frame
transmission errors due to excessive collisions, loss of carrier, device
FIFO underrun/underflow, Heartbeat/SQE Test errors, and late collisions.

net_events.wg0
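The error and event counters above are cumulative totals; on Linux they are
typically exposed under /sys/class/net/<iface>/statistics/, and a chart
turns two consecutive snapshots into per-second rates. A minimal sketch of
both steps (the sysfs parameter and counter selection are assumptions for
illustration):

```python
from pathlib import Path

# Cumulative counters roughly corresponding to the net_errors and
# net_events dimensions described above.
ERROR_COUNTERS = (
    "rx_errors", "tx_errors",         # net_errors: inbound / outbound
    "rx_frame_errors", "collisions",  # net_events: frames / collisions
    "tx_carrier_errors",              # net_events: carrier
)

def read_error_counters(iface: str, sysfs: str = "/sys/class/net") -> dict:
    """Snapshot the raw cumulative error counters for one interface."""
    stats = Path(sysfs) / iface / "statistics"
    return {name: int((stats / name).read_text())
            for name in ERROR_COUNTERS if (stats / name).exists()}

def per_second(prev: dict, curr: dict, dt: float) -> dict:
    """Convert two snapshots taken dt seconds apart into rates, which is
    how a chart renders these monotonically increasing totals."""
    return {k: (curr[k] - prev[k]) / dt for k in curr if k in prev}
```

Skipping counters that are absent keeps the reader usable on virtual
interfaces such as wg0, which do not expose every hardware statistic.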


The current operational state of the interface.


net_operstate.wg0

The current physical link state of the interface.
net_carrier.wg0

The interface's currently configured Maximum Transmission Unit (MTU) value.
net_mtu.wg0


--------------------------------------------------------------------------------


FIREWALL (NETFILTER)

Performance metrics of the netfilter components.



CONNECTION TRACKER

Netfilter Connection Tracker performance metrics. The connection tracker
keeps track of all of the machine's connections, inbound and outbound. It
maintains a database of all open connections and tracks network address
translation (NAT) and connection expectations.

The number of entries in the conntrack table.
netfilter.conntrack_sockets
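On Linux, the conntrack table size can be read directly from procfs: the
current entry count and the configured limit live in
/proc/sys/net/netfilter/nf_conntrack_count and nf_conntrack_max. A minimal
sketch (the proc parameter is an assumption added so the path can be
overridden):

```python
from pathlib import Path

def conntrack_usage(proc: str = "/proc") -> tuple:
    """Return (current entries, table limit, percent used) for the
    conntrack table; the chart above plots the current entry count."""
    base = Path(proc) / "sys" / "net" / "netfilter"
    count = int((base / "nf_conntrack_count").read_text())
    limit = int((base / "nf_conntrack_max").read_text())
    return count, limit, 100.0 * count / limit
```

Watching the count relative to nf_conntrack_max matters because new
connections are dropped once the table is full.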



NETLINK


netfilter.netlink_new

netfilter.netlink_changes

netfilter.netlink_expect

netfilter.netlink_errors

netfilter.netlink_search


--------------------------------------------------------------------------------


SYSTEMD ASTERISK


CPU


systemd_asterisk.cpu



MEM


systemd_asterisk.mem_usage



DISK


systemd_asterisk.io

systemd_asterisk.serviced_ops



PIDS


systemd_asterisk.pids_current


--------------------------------------------------------------------------------


SYSTEMD BEPASTY-SERVER-PASTE-P3X-DE-GUNICORN


CPU


systemd_bepasty-server-paste-p3x-de-gunicorn.cpu



MEM


systemd_bepasty-server-paste-p3x-de-gunicorn.mem_usage



DISK


systemd_bepasty-server-paste-p3x-de-gunicorn.io

systemd_bepasty-server-paste-p3x-de-gunicorn.serviced_ops



PIDS


systemd_bepasty-server-paste-p3x-de-gunicorn.pids_current


--------------------------------------------------------------------------------


SYSTEMD BLOCKY


CPU


systemd_blocky.cpu



MEM


systemd_blocky.mem_usage



DISK


systemd_blocky.io

systemd_blocky.serviced_ops



PIDS


systemd_blocky.pids_current


--------------------------------------------------------------------------------


SYSTEMD CHRONYD


CPU


systemd_chronyd.cpu



MEM


systemd_chronyd.mem_usage



DISK


systemd_chronyd.io

systemd_chronyd.serviced_ops



PIDS


systemd_chronyd.pids_current


--------------------------------------------------------------------------------


SYSTEMD COTURN


CPU


systemd_coturn.cpu



MEM


systemd_coturn.mem_usage



DISK


systemd_coturn.io

systemd_coturn.serviced_ops



PIDS


systemd_coturn.pids_current


--------------------------------------------------------------------------------


SYSTEMD CRON


CPU


systemd_cron.cpu



MEM


systemd_cron.mem_usage



DISK


systemd_cron.io

systemd_cron.serviced_ops



PIDS


systemd_cron.pids_current


--------------------------------------------------------------------------------


SYSTEMD DBUS


CPU


systemd_dbus.cpu



MEM


systemd_dbus.mem_usage



DISK


systemd_dbus.io

systemd_dbus.serviced_ops



PIDS


systemd_dbus.pids_current


--------------------------------------------------------------------------------


SYSTEMD FAIL2BAN


CPU


systemd_fail2ban.cpu



MEM


systemd_fail2ban.mem_usage



DISK


systemd_fail2ban.io

systemd_fail2ban.serviced_ops



PIDS


systemd_fail2ban.pids_current


--------------------------------------------------------------------------------


SYSTEMD GALENE


CPU


systemd_galene.cpu



MEM


systemd_galene.mem_usage



DISK


systemd_galene.io

systemd_galene.serviced_ops



PIDS


systemd_galene.pids_current


--------------------------------------------------------------------------------


SYSTEMD GRAFANA


CPU


systemd_grafana.cpu



MEM


systemd_grafana.mem_usage



DISK


systemd_grafana.io

systemd_grafana.serviced_ops



PIDS


systemd_grafana.pids_current


--------------------------------------------------------------------------------


SYSTEMD ICECAST


CPU


systemd_icecast.cpu



MEM


systemd_icecast.mem_usage



DISK


systemd_icecast.io

systemd_icecast.serviced_ops



PIDS


systemd_icecast.pids_current


--------------------------------------------------------------------------------


SYSTEMD IOTBENSCOMDESOCAT


CPU


systemd_iotbenscomdesocat.cpu



MEM


systemd_iotbenscomdesocat.mem_usage



DISK


systemd_iotbenscomdesocat.io

systemd_iotbenscomdesocat.serviced_ops



PIDS


systemd_iotbenscomdesocat.pids_current


--------------------------------------------------------------------------------


SYSTEMD IPERF3


CPU


systemd_iperf3.cpu



MEM


systemd_iperf3.mem_usage



DISK


systemd_iperf3.io

systemd_iperf3.serviced_ops



PIDS


systemd_iperf3.pids_current


--------------------------------------------------------------------------------


SYSTEMD JICOFO


CPU


systemd_jicofo.cpu



MEM


systemd_jicofo.mem_usage



DISK


systemd_jicofo.io

systemd_jicofo.serviced_ops



PIDS


systemd_jicofo.pids_current


--------------------------------------------------------------------------------


SYSTEMD JITSI-EXCALIDRAW


CPU


systemd_jitsi-excalidraw.cpu



MEM


systemd_jitsi-excalidraw.mem_usage



DISK


systemd_jitsi-excalidraw.io

systemd_jitsi-excalidraw.serviced_ops



PIDS


systemd_jitsi-excalidraw.pids_current


--------------------------------------------------------------------------------


SYSTEMD JITSI-VIDEOBRIDGE2


CPU


systemd_jitsi-videobridge2.cpu



MEM


systemd_jitsi-videobridge2.mem_usage



DISK


systemd_jitsi-videobridge2.io

systemd_jitsi-videobridge2.serviced_ops



PIDS


systemd_jitsi-videobridge2.pids_current


--------------------------------------------------------------------------------


SYSTEMD MOPIDY


CPU


systemd_mopidy.cpu



MEM


systemd_mopidy.mem_usage



DISK


systemd_mopidy.io

systemd_mopidy.serviced_ops



PIDS


systemd_mopidy.pids_current


--------------------------------------------------------------------------------


SYSTEMD MYSQL


CPU


systemd_mysql.cpu



MEM


systemd_mysql.mem_usage



DISK


systemd_mysql.io

systemd_mysql.serviced_ops



PIDS


systemd_mysql.pids_current


--------------------------------------------------------------------------------


SYSTEMD NETDATA


CPU


systemd_netdata.cpu



MEM


systemd_netdata.mem_usage



DISK


systemd_netdata.io

systemd_netdata.serviced_ops



PIDS


systemd_netdata.pids_current


--------------------------------------------------------------------------------


SYSTEMD NGINX


CPU


systemd_nginx.cpu



MEM


systemd_nginx.mem_usage



DISK


systemd_nginx.io

systemd_nginx.serviced_ops



PIDS


systemd_nginx.pids_current


--------------------------------------------------------------------------------


SYSTEMD NSCD


CPU


systemd_nscd.cpu



MEM


systemd_nscd.mem_usage



DISK


systemd_nscd.io

systemd_nscd.serviced_ops



PIDS


systemd_nscd.pids_current


--------------------------------------------------------------------------------


SYSTEMD OPENDKIM


CPU


systemd_opendkim.cpu



MEM


systemd_opendkim.mem_usage



DISK


systemd_opendkim.io

systemd_opendkim.serviced_ops



PIDS


systemd_opendkim.pids_current


--------------------------------------------------------------------------------


SYSTEMD PODMAN-VOSK


CPU


systemd_podman-vosk.cpu



MEM


systemd_podman-vosk.mem_usage



DISK


systemd_podman-vosk.io

systemd_podman-vosk.serviced_ops



PIDS


systemd_podman-vosk.pids_current


--------------------------------------------------------------------------------


SYSTEMD POSTGRESQL


CPU


systemd_postgresql.cpu



MEM


systemd_postgresql.mem_usage



DISK


systemd_postgresql.io

systemd_postgresql.serviced_ops



PIDS


systemd_postgresql.pids_current


--------------------------------------------------------------------------------


SYSTEMD PROMETHEUS


CPU


systemd_prometheus.cpu



MEM


systemd_prometheus.mem_usage



DISK


systemd_prometheus.io

systemd_prometheus.serviced_ops



PIDS


systemd_prometheus.pids_current


--------------------------------------------------------------------------------


SYSTEMD PROMETHEUS-NODE-EXPORTER


CPU


systemd_prometheus-node-exporter.cpu



MEM


systemd_prometheus-node-exporter.mem_usage



DISK


systemd_prometheus-node-exporter.io

systemd_prometheus-node-exporter.serviced_ops



PIDS


systemd_prometheus-node-exporter.pids_current


--------------------------------------------------------------------------------


SYSTEMD PROSODY


CPU


systemd_prosody.cpu



MEM


systemd_prosody.mem_usage



DISK


systemd_prosody.io

systemd_prosody.serviced_ops



PIDS


systemd_prosody.pids_current


--------------------------------------------------------------------------------


SYSTEMD QEMU-GUEST-AGENT


CPU


systemd_qemu-guest-agent.cpu



MEM


systemd_qemu-guest-agent.mem_usage



DISK


systemd_qemu-guest-agent.io

systemd_qemu-guest-agent.serviced_ops



PIDS


systemd_qemu-guest-agent.pids_current


--------------------------------------------------------------------------------


SYSTEMD REDIS-RSPAMD


CPU


systemd_redis-rspamd.cpu



MEM


systemd_redis-rspamd.mem_usage



DISK


systemd_redis-rspamd.io

systemd_redis-rspamd.serviced_ops



PIDS


systemd_redis-rspamd.pids_current


--------------------------------------------------------------------------------


SYSTEMD REDIS-SEARX


CPU


systemd_redis-searx.cpu



MEM


systemd_redis-searx.mem_usage



DISK


systemd_redis-searx.io

systemd_redis-searx.serviced_ops



PIDS


systemd_redis-searx.pids_current


--------------------------------------------------------------------------------


SYSTEMD RSPAMD


CPU


systemd_rspamd.cpu



MEM


systemd_rspamd.mem_usage



DISK


systemd_rspamd.io

systemd_rspamd.serviced_ops



PIDS


systemd_rspamd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SNAPSERVER


CPU


systemd_snapserver.cpu



MEM


systemd_snapserver.mem_usage



DISK


systemd_snapserver.io

systemd_snapserver.serviced_ops



PIDS


systemd_snapserver.pids_current


--------------------------------------------------------------------------------


SYSTEMD SSHD


CPU


systemd_sshd.cpu



MEM


systemd_sshd.mem_usage



DISK


systemd_sshd.io

systemd_sshd.serviced_ops



PIDS


systemd_sshd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-JOURNALD


CPU


systemd_systemd-journald.cpu



MEM


systemd_systemd-journald.mem_usage



DISK


systemd_systemd-journald.io

systemd_systemd-journald.serviced_ops



PIDS


systemd_systemd-journald.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-LOGIND


CPU


systemd_systemd-logind.cpu



MEM


systemd_systemd-logind.mem_usage



DISK


systemd_systemd-logind.io

systemd_systemd-logind.serviced_ops



PIDS


systemd_systemd-logind.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-NETWORKD


CPU


systemd_systemd-networkd.cpu



MEM


systemd_systemd-networkd.mem_usage



DISK


systemd_systemd-networkd.io

systemd_systemd-networkd.serviced_ops



PIDS


systemd_systemd-networkd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-OOMD


CPU


systemd_systemd-oomd.cpu



MEM


systemd_systemd-oomd.mem_usage



DISK


systemd_systemd-oomd.io

systemd_systemd-oomd.serviced_ops



PIDS


systemd_systemd-oomd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-RESOLVED


CPU


systemd_systemd-resolved.cpu



MEM


systemd_systemd-resolved.mem_usage



DISK


systemd_systemd-resolved.io

systemd_systemd-resolved.serviced_ops



PIDS


systemd_systemd-resolved.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-UDEVD


CPU


systemd_systemd-udevd.cpu



MEM


systemd_systemd-udevd.mem_usage



DISK


systemd_systemd-udevd.io

systemd_systemd-udevd.serviced_ops



PIDS


systemd_systemd-udevd.pids_current


--------------------------------------------------------------------------------


SYSTEMD UWSGI


CPU


systemd_uwsgi.cpu



MEM


systemd_uwsgi.mem_usage



DISK


systemd_uwsgi.io

systemd_uwsgi.serviced_ops



PIDS


systemd_uwsgi.pids_current


--------------------------------------------------------------------------------


APP


CPU


app.charts_d_plugin_cpu_utilization

app.chat_cpu_utilization

app.cron_cpu_utilization

app.dns_cpu_utilization

app.freeipmi_plugin_cpu_utilization

app.go_d_plugin_cpu_utilization

app.grafana_cpu_utilization

app.gui_cpu_utilization

app.httpd_cpu_utilization

app.java_cpu_utilization

app.kernel_cpu_utilization

app.khugepaged_cpu_utilization

app.ksmd_cpu_utilization

app.logs_cpu_utilization

app.mta_cpu_utilization

app.netdata_cpu_utilization

app.nfacct_plugin_cpu_utilization

app.node_cpu_utilization

app.nosql_cpu_utilization

app.other_cpu_utilization

app.pbx_cpu_utilization

app.python_d_plugin_cpu_utilization

app.sql_cpu_utilization

app.ssh_cpu_utilization

app.system_cpu_utilization

app.systemd-journal_plugin_cpu_utilization

app.tc-qos-helper_cpu_utilization

app.time_cpu_utilization

app.timedb_cpu_utilization

app.unicorn_cpu_utilization

app.uwsgi_cpu_utilization

app.vms_cpu_utilization

app.charts_d_plugin_cpu_context_switches

app.chat_cpu_context_switches

app.cron_cpu_context_switches

app.dns_cpu_context_switches

app.freeipmi_plugin_cpu_context_switches

app.go_d_plugin_cpu_context_switches

app.grafana_cpu_context_switches

app.gui_cpu_context_switches

app.httpd_cpu_context_switches

app.java_cpu_context_switches

app.kernel_cpu_context_switches

app.khugepaged_cpu_context_switches

app.ksmd_cpu_context_switches

app.logs_cpu_context_switches

app.mta_cpu_context_switches

app.netdata_cpu_context_switches

app.nfacct_plugin_cpu_context_switches

app.node_cpu_context_switches

app.nosql_cpu_context_switches

app.other_cpu_context_switches

app.pbx_cpu_context_switches

app.python_d_plugin_cpu_context_switches

app.sql_cpu_context_switches

app.ssh_cpu_context_switches

app.system_cpu_context_switches

app.systemd-journal_plugin_cpu_context_switches

app.tc-qos-helper_cpu_context_switches

app.time_cpu_context_switches

app.timedb_cpu_context_switches

app.unicorn_cpu_context_switches

app.uwsgi_cpu_context_switches

app.vms_cpu_context_switches



MEM


app.charts_d_plugin_mem_private_usage

app.chat_mem_private_usage

app.cron_mem_private_usage

app.dns_mem_private_usage

app.freeipmi_plugin_mem_private_usage

app.go_d_plugin_mem_private_usage

app.grafana_mem_private_usage

app.gui_mem_private_usage

app.httpd_mem_private_usage

app.java_mem_private_usage

app.kernel_mem_private_usage

app.khugepaged_mem_private_usage

app.ksmd_mem_private_usage

app.logs_mem_private_usage

app.mta_mem_private_usage

app.netdata_mem_private_usage

app.nfacct_plugin_mem_private_usage

app.node_mem_private_usage

app.nosql_mem_private_usage

app.other_mem_private_usage

app.pbx_mem_private_usage

app.python_d_plugin_mem_private_usage

app.sql_mem_private_usage

app.ssh_mem_private_usage

app.system_mem_private_usage

app.systemd-journal_plugin_mem_private_usage

app.tc-qos-helper_mem_private_usage

app.time_mem_private_usage

app.timedb_mem_private_usage

app.unicorn_mem_private_usage

app.uwsgi_mem_private_usage

app.vms_mem_private_usage

app.charts_d_plugin_mem_usage

app.chat_mem_usage

app.cron_mem_usage

app.dns_mem_usage

app.freeipmi_plugin_mem_usage

app.go_d_plugin_mem_usage

app.grafana_mem_usage

app.gui_mem_usage

app.httpd_mem_usage

app.java_mem_usage

app.kernel_mem_usage

app.khugepaged_mem_usage

app.ksmd_mem_usage

app.logs_mem_usage

app.mta_mem_usage

app.netdata_mem_usage

app.nfacct_plugin_mem_usage

app.node_mem_usage

app.nosql_mem_usage

app.other_mem_usage

app.pbx_mem_usage

app.python_d_plugin_mem_usage

app.sql_mem_usage

app.ssh_mem_usage

app.system_mem_usage

app.systemd-journal_plugin_mem_usage

app.tc-qos-helper_mem_usage

app.time_mem_usage

app.timedb_mem_usage

app.unicorn_mem_usage

app.uwsgi_mem_usage

app.vms_mem_usage

app.charts_d_plugin_mem_page_faults

app.chat_mem_page_faults

app.cron_mem_page_faults

app.dns_mem_page_faults

app.freeipmi_plugin_mem_page_faults

app.go_d_plugin_mem_page_faults

app.grafana_mem_page_faults

app.gui_mem_page_faults

app.httpd_mem_page_faults

app.java_mem_page_faults

app.kernel_mem_page_faults

app.khugepaged_mem_page_faults

app.ksmd_mem_page_faults

app.logs_mem_page_faults

app.mta_mem_page_faults

app.netdata_mem_page_faults

app.nfacct_plugin_mem_page_faults

app.node_mem_page_faults

app.nosql_mem_page_faults

app.other_mem_page_faults

app.pbx_mem_page_faults

app.python_d_plugin_mem_page_faults

app.sql_mem_page_faults

app.ssh_mem_page_faults

app.system_mem_page_faults

app.systemd-journal_plugin_mem_page_faults

app.tc-qos-helper_mem_page_faults

app.time_mem_page_faults

app.timedb_mem_page_faults

app.unicorn_mem_page_faults

app.uwsgi_mem_page_faults

app.vms_mem_page_faults

app.charts_d_plugin_swap_usage

app.charts_d_plugin_vmem_usage

app.chat_swap_usage

app.chat_vmem_usage

app.cron_swap_usage

app.cron_vmem_usage

app.dns_swap_usage

app.dns_vmem_usage

app.freeipmi_plugin_swap_usage

app.freeipmi_plugin_vmem_usage

app.go_d_plugin_swap_usage

app.go_d_plugin_vmem_usage

app.grafana_swap_usage

app.grafana_vmem_usage

app.gui_swap_usage

app.gui_vmem_usage

app.httpd_swap_usage

app.httpd_vmem_usage

app.java_swap_usage

app.java_vmem_usage

app.kernel_swap_usage

app.kernel_vmem_usage

app.khugepaged_swap_usage

app.khugepaged_vmem_usage

app.ksmd_swap_usage

app.ksmd_vmem_usage

app.logs_swap_usage

app.logs_vmem_usage

app.mta_swap_usage

app.mta_vmem_usage

app.netdata_swap_usage

app.netdata_vmem_usage

app.nfacct_plugin_swap_usage

app.nfacct_plugin_vmem_usage

app.node_swap_usage

app.node_vmem_usage

app.nosql_swap_usage

app.nosql_vmem_usage

app.other_swap_usage

app.other_vmem_usage

app.pbx_swap_usage

app.pbx_vmem_usage

app.python_d_plugin_swap_usage

app.python_d_plugin_vmem_usage

app.sql_swap_usage

app.sql_vmem_usage

app.ssh_swap_usage

app.ssh_vmem_usage

app.system_swap_usage

app.system_vmem_usage

app.systemd-journal_plugin_swap_usage

app.systemd-journal_plugin_vmem_usage

app.tc-qos-helper_swap_usage

app.tc-qos-helper_vmem_usage

app.time_swap_usage

app.time_vmem_usage

app.timedb_swap_usage

app.timedb_vmem_usage

app.unicorn_swap_usage

app.unicorn_vmem_usage

app.uwsgi_swap_usage

app.uwsgi_vmem_usage

app.vms_swap_usage

app.vms_vmem_usage



DISK


app.charts_d_plugin_disk_physical_io

app.chat_disk_physical_io

app.cron_disk_physical_io

app.dns_disk_physical_io

app.freeipmi_plugin_disk_physical_io

app.go_d_plugin_disk_physical_io

app.grafana_disk_physical_io

app.gui_disk_physical_io

app.httpd_disk_physical_io

app.java_disk_physical_io

app.kernel_disk_physical_io

app.khugepaged_disk_physical_io

app.ksmd_disk_physical_io

app.logs_disk_physical_io

app.mta_disk_physical_io

app.netdata_disk_physical_io

app.nfacct_plugin_disk_physical_io

app.node_disk_physical_io

app.nosql_disk_physical_io

app.other_disk_physical_io

app.pbx_disk_physical_io

app.python_d_plugin_disk_physical_io

app.sql_disk_physical_io

app.ssh_disk_physical_io

app.system_disk_physical_io

app.systemd-journal_plugin_disk_physical_io

app.tc-qos-helper_disk_physical_io

app.time_disk_physical_io

app.timedb_disk_physical_io

app.unicorn_disk_physical_io

app.uwsgi_disk_physical_io

app.vms_disk_physical_io

app.charts_d_plugin_disk_logical_io

app.chat_disk_logical_io

app.cron_disk_logical_io

app.dns_disk_logical_io

app.freeipmi_plugin_disk_logical_io

app.go_d_plugin_disk_logical_io

app.grafana_disk_logical_io

app.gui_disk_logical_io

app.httpd_disk_logical_io

app.java_disk_logical_io

app.kernel_disk_logical_io

app.khugepaged_disk_logical_io

app.ksmd_disk_logical_io

app.logs_disk_logical_io

app.mta_disk_logical_io

app.netdata_disk_logical_io

app.nfacct_plugin_disk_logical_io

app.node_disk_logical_io

app.nosql_disk_logical_io

app.other_disk_logical_io

app.pbx_disk_logical_io

app.python_d_plugin_disk_logical_io

app.sql_disk_logical_io

app.ssh_disk_logical_io

app.system_disk_logical_io

app.systemd-journal_plugin_disk_logical_io

app.tc-qos-helper_disk_logical_io

app.time_disk_logical_io

app.timedb_disk_logical_io

app.unicorn_disk_logical_io

app.uwsgi_disk_logical_io

app.vms_disk_logical_io



PROCESSES


app.charts_d_plugin_processes

app.chat_processes

app.cron_processes

app.dns_processes

app.freeipmi_plugin_processes

app.go_d_plugin_processes

app.grafana_processes

app.gui_processes

app.httpd_processes

app.java_processes

app.kernel_processes

app.khugepaged_processes

app.ksmd_processes

app.logs_processes

app.mta_processes

app.netdata_processes

app.nfacct_plugin_processes

app.node_processes

app.nosql_processes

app.other_processes

app.pbx_processes

app.python_d_plugin_processes

app.sql_processes

app.ssh_processes

app.system_processes

app.systemd-journal_plugin_processes

app.tc-qos-helper_processes

app.time_processes

app.timedb_processes

app.unicorn_processes

app.uwsgi_processes

app.vms_processes

app.charts_d_plugin_threads

app.chat_threads

app.cron_threads

app.dns_threads

app.freeipmi_plugin_threads

app.go_d_plugin_threads

app.grafana_threads

app.gui_threads

app.httpd_threads

app.java_threads

app.kernel_threads

app.khugepaged_threads

app.ksmd_threads

app.logs_threads

app.mta_threads

app.netdata_threads

app.nfacct_plugin_threads

app.node_threads

app.nosql_threads

app.other_threads

app.pbx_threads

app.python_d_plugin_threads

app.sql_threads

app.ssh_threads

app.system_threads

app.systemd-journal_plugin_threads

app.tc-qos-helper_threads

app.time_threads

app.timedb_threads

app.unicorn_threads

app.uwsgi_threads

app.vms_threads



FDS


app.charts_d_plugin_fds_open_limit

app.chat_fds_open_limit

app.cron_fds_open_limit

app.dns_fds_open_limit

app.freeipmi_plugin_fds_open_limit

app.go_d_plugin_fds_open_limit

app.grafana_fds_open_limit

app.gui_fds_open_limit

app.httpd_fds_open_limit

app.java_fds_open_limit

app.kernel_fds_open_limit

app.khugepaged_fds_open_limit

app.ksmd_fds_open_limit

app.logs_fds_open_limit

app.mta_fds_open_limit

app.netdata_fds_open_limit

app.nfacct_plugin_fds_open_limit

app.node_fds_open_limit

app.nosql_fds_open_limit

app.other_fds_open_limit

app.pbx_fds_open_limit

app.python_d_plugin_fds_open_limit

app.sql_fds_open_limit

app.ssh_fds_open_limit

app.system_fds_open_limit

app.systemd-journal_plugin_fds_open_limit

app.tc-qos-helper_fds_open_limit

app.time_fds_open_limit

app.timedb_fds_open_limit

app.unicorn_fds_open_limit

app.uwsgi_fds_open_limit

app.vms_fds_open_limit

app.charts_d_plugin_fds_open

app.chat_fds_open

app.cron_fds_open

app.dns_fds_open

app.freeipmi_plugin_fds_open

app.go_d_plugin_fds_open

app.grafana_fds_open

app.gui_fds_open

app.httpd_fds_open

app.java_fds_open

app.kernel_fds_open

app.khugepaged_fds_open

app.ksmd_fds_open

app.logs_fds_open

app.mta_fds_open

app.netdata_fds_open

app.nfacct_plugin_fds_open

app.node_fds_open

app.nosql_fds_open

app.other_fds_open

app.pbx_fds_open

app.python_d_plugin_fds_open

app.sql_fds_open

app.ssh_fds_open

app.system_fds_open

app.systemd-journal_plugin_fds_open

app.tc-qos-helper_fds_open

app.time_fds_open

app.timedb_fds_open

app.unicorn_fds_open

app.uwsgi_fds_open

app.vms_fds_open



UPTIME


app.charts_d_plugin_uptime

app.chat_uptime

app.cron_uptime

app.dns_uptime

app.freeipmi_plugin_uptime

app.go_d_plugin_uptime

app.grafana_uptime

app.gui_uptime

app.httpd_uptime

app.java_uptime

app.kernel_uptime

app.khugepaged_uptime

app.ksmd_uptime

app.logs_uptime

app.mta_uptime

app.netdata_uptime

app.nfacct_plugin_uptime

app.node_uptime

app.nosql_uptime

app.other_uptime

app.pbx_uptime

app.python_d_plugin_uptime

app.sql_uptime

app.ssh_uptime

app.system_uptime

app.systemd-journal_plugin_uptime

app.tc-qos-helper_uptime

app.time_uptime

app.timedb_uptime

app.unicorn_uptime

app.uwsgi_uptime

app.vms_uptime


--------------------------------------------------------------------------------


USER


CPU


user.acme_cpu_utilization

user.asterisk_cpu_utilization

user.bepasty_cpu_utilization

user.blocky_cpu_utilization

user.chrony_cpu_utilization

user.galene_cpu_utilization

user.grafana_cpu_utilization

user.iperf3_cpu_utilization

user.jicofo_cpu_utilization

user.jitsi-videobridge_cpu_utilization

user.knot-resolver_cpu_utilization

user.messagebus_cpu_utilization

user.mopidy_cpu_utilization

user.mysql_cpu_utilization

user.netdata_cpu_utilization

user.nginx_cpu_utilization

user.nobody_cpu_utilization

user.node-exporter_cpu_utilization

user.nscd_cpu_utilization

user.opendkim_cpu_utilization

user.postgres_cpu_utilization

user.prometheus_cpu_utilization

user.prosody_cpu_utilization

user.redis-rspamd_cpu_utilization

user.revive_cpu_utilization

user.root_cpu_utilization

user.rspamd_cpu_utilization

user.searx_cpu_utilization

user.snappymail_cpu_utilization

user.snapserver_cpu_utilization

user.sshd_cpu_utilization

user.systemd-network_cpu_utilization

user.systemd-oom_cpu_utilization

user.systemd-resolve_cpu_utilization

user.turnserver_cpu_utilization

user.uwsgi_cpu_utilization

user.acme_cpu_context_switches

user.asterisk_cpu_context_switches

user.bepasty_cpu_context_switches

user.blocky_cpu_context_switches

user.chrony_cpu_context_switches

user.galene_cpu_context_switches

user.grafana_cpu_context_switches

user.iperf3_cpu_context_switches

user.jicofo_cpu_context_switches

user.jitsi-videobridge_cpu_context_switches

user.knot-resolver_cpu_context_switches

user.messagebus_cpu_context_switches

user.mopidy_cpu_context_switches

user.mysql_cpu_context_switches

user.netdata_cpu_context_switches

user.nginx_cpu_context_switches

user.nobody_cpu_context_switches

user.node-exporter_cpu_context_switches

user.nscd_cpu_context_switches

user.opendkim_cpu_context_switches

user.postgres_cpu_context_switches

user.prometheus_cpu_context_switches

user.prosody_cpu_context_switches

user.redis-rspamd_cpu_context_switches

user.revive_cpu_context_switches

user.root_cpu_context_switches

user.rspamd_cpu_context_switches

user.searx_cpu_context_switches

user.snappymail_cpu_context_switches

user.snapserver_cpu_context_switches

user.sshd_cpu_context_switches

user.systemd-network_cpu_context_switches

user.systemd-oom_cpu_context_switches

user.systemd-resolve_cpu_context_switches

user.turnserver_cpu_context_switches

user.uwsgi_cpu_context_switches



MEM


user.acme_mem_private_usage

user.asterisk_mem_private_usage

user.bepasty_mem_private_usage

user.blocky_mem_private_usage

user.chrony_mem_private_usage

user.galene_mem_private_usage

user.grafana_mem_private_usage

user.iperf3_mem_private_usage

user.jicofo_mem_private_usage

user.jitsi-videobridge_mem_private_usage

user.knot-resolver_mem_private_usage

user.messagebus_mem_private_usage

user.mopidy_mem_private_usage

user.mysql_mem_private_usage

user.netdata_mem_private_usage

user.nginx_mem_private_usage

user.nobody_mem_private_usage

user.node-exporter_mem_private_usage

user.nscd_mem_private_usage

user.opendkim_mem_private_usage

user.postgres_mem_private_usage

user.prometheus_mem_private_usage

user.prosody_mem_private_usage

user.redis-rspamd_mem_private_usage

user.revive_mem_private_usage

user.root_mem_private_usage

user.rspamd_mem_private_usage

user.searx_mem_private_usage

user.snappymail_mem_private_usage

user.snapserver_mem_private_usage

user.sshd_mem_private_usage

user.systemd-network_mem_private_usage

user.systemd-oom_mem_private_usage

user.systemd-resolve_mem_private_usage

user.turnserver_mem_private_usage

user.uwsgi_mem_private_usage

user.acme_mem_usage

user.asterisk_mem_usage

user.bepasty_mem_usage

user.blocky_mem_usage

user.chrony_mem_usage

user.galene_mem_usage

user.grafana_mem_usage

user.iperf3_mem_usage

user.jicofo_mem_usage

user.jitsi-videobridge_mem_usage

user.knot-resolver_mem_usage

user.messagebus_mem_usage

user.mopidy_mem_usage

user.mysql_mem_usage

user.netdata_mem_usage

user.nginx_mem_usage

user.nobody_mem_usage

user.node-exporter_mem_usage

user.nscd_mem_usage

user.opendkim_mem_usage

user.postgres_mem_usage

user.prometheus_mem_usage

user.prosody_mem_usage

user.redis-rspamd_mem_usage

user.revive_mem_usage

user.root_mem_usage

user.rspamd_mem_usage

user.searx_mem_usage

user.snappymail_mem_usage

user.snapserver_mem_usage

user.sshd_mem_usage

user.systemd-network_mem_usage

user.systemd-oom_mem_usage

user.systemd-resolve_mem_usage

user.turnserver_mem_usage

user.uwsgi_mem_usage

user.acme_mem_page_faults

user.asterisk_mem_page_faults

user.bepasty_mem_page_faults

user.blocky_mem_page_faults

user.chrony_mem_page_faults

user.galene_mem_page_faults

user.grafana_mem_page_faults

user.iperf3_mem_page_faults

user.jicofo_mem_page_faults

user.jitsi-videobridge_mem_page_faults

user.knot-resolver_mem_page_faults

user.messagebus_mem_page_faults

user.mopidy_mem_page_faults

user.mysql_mem_page_faults

user.netdata_mem_page_faults

user.nginx_mem_page_faults

user.nobody_mem_page_faults

user.node-exporter_mem_page_faults

user.nscd_mem_page_faults

user.opendkim_mem_page_faults

user.postgres_mem_page_faults

user.prometheus_mem_page_faults

user.prosody_mem_page_faults

user.redis-rspamd_mem_page_faults

user.revive_mem_page_faults

user.root_mem_page_faults

user.rspamd_mem_page_faults

user.searx_mem_page_faults

user.snappymail_mem_page_faults

user.snapserver_mem_page_faults

user.sshd_mem_page_faults

user.systemd-network_mem_page_faults

user.systemd-oom_mem_page_faults

user.systemd-resolve_mem_page_faults

user.turnserver_mem_page_faults

user.uwsgi_mem_page_faults

user.acme_swap_usage

user.acme_vmem_usage

user.asterisk_swap_usage

user.asterisk_vmem_usage

user.bepasty_swap_usage

user.bepasty_vmem_usage

user.blocky_swap_usage

user.blocky_vmem_usage

user.chrony_swap_usage

user.chrony_vmem_usage

user.galene_swap_usage

user.galene_vmem_usage

user.grafana_swap_usage

user.grafana_vmem_usage

user.iperf3_swap_usage

user.iperf3_vmem_usage

user.jicofo_swap_usage

user.jicofo_vmem_usage

user.jitsi-videobridge_swap_usage

user.jitsi-videobridge_vmem_usage

user.knot-resolver_swap_usage

user.knot-resolver_vmem_usage

user.messagebus_swap_usage

user.messagebus_vmem_usage

user.mopidy_swap_usage

user.mopidy_vmem_usage

user.mysql_swap_usage

user.mysql_vmem_usage

user.netdata_swap_usage

user.netdata_vmem_usage

user.nginx_swap_usage

user.nginx_vmem_usage

user.nobody_swap_usage

user.nobody_vmem_usage

user.node-exporter_swap_usage

user.node-exporter_vmem_usage

user.nscd_swap_usage

user.nscd_vmem_usage

user.opendkim_swap_usage

user.opendkim_vmem_usage

user.postgres_swap_usage

user.postgres_vmem_usage

user.prometheus_swap_usage

user.prometheus_vmem_usage

user.prosody_swap_usage

user.prosody_vmem_usage

user.redis-rspamd_swap_usage

user.redis-rspamd_vmem_usage

user.revive_swap_usage

user.revive_vmem_usage

user.root_swap_usage

user.root_vmem_usage

user.rspamd_swap_usage

user.rspamd_vmem_usage

user.searx_swap_usage

user.searx_vmem_usage

user.snappymail_swap_usage

user.snappymail_vmem_usage

user.snapserver_swap_usage

user.snapserver_vmem_usage

user.sshd_swap_usage

user.sshd_vmem_usage

user.systemd-network_swap_usage

user.systemd-network_vmem_usage

user.systemd-oom_swap_usage

user.systemd-oom_vmem_usage

user.systemd-resolve_swap_usage

user.systemd-resolve_vmem_usage

user.turnserver_swap_usage

user.turnserver_vmem_usage

user.uwsgi_swap_usage

user.uwsgi_vmem_usage



DISK


user.acme_disk_physical_io

user.asterisk_disk_physical_io

user.bepasty_disk_physical_io

user.blocky_disk_physical_io

user.chrony_disk_physical_io

user.galene_disk_physical_io

user.grafana_disk_physical_io

user.iperf3_disk_physical_io

user.jicofo_disk_physical_io

user.jitsi-videobridge_disk_physical_io

user.knot-resolver_disk_physical_io

user.messagebus_disk_physical_io

user.mopidy_disk_physical_io

user.mysql_disk_physical_io

user.netdata_disk_physical_io

user.nginx_disk_physical_io

user.nobody_disk_physical_io

user.node-exporter_disk_physical_io

user.nscd_disk_physical_io

user.opendkim_disk_physical_io

user.postgres_disk_physical_io

user.prometheus_disk_physical_io

user.prosody_disk_physical_io

user.redis-rspamd_disk_physical_io

user.revive_disk_physical_io

user.root_disk_physical_io

user.rspamd_disk_physical_io

user.searx_disk_physical_io

user.snappymail_disk_physical_io

user.snapserver_disk_physical_io

user.sshd_disk_physical_io

user.systemd-network_disk_physical_io

user.systemd-oom_disk_physical_io

user.systemd-resolve_disk_physical_io

user.turnserver_disk_physical_io

user.uwsgi_disk_physical_io

user.acme_disk_logical_io

user.asterisk_disk_logical_io

user.bepasty_disk_logical_io

user.blocky_disk_logical_io

user.chrony_disk_logical_io

user.galene_disk_logical_io

user.grafana_disk_logical_io

user.iperf3_disk_logical_io

user.jicofo_disk_logical_io

user.jitsi-videobridge_disk_logical_io

user.knot-resolver_disk_logical_io

user.messagebus_disk_logical_io

user.mopidy_disk_logical_io

user.mysql_disk_logical_io

user.netdata_disk_logical_io

user.nginx_disk_logical_io

user.nobody_disk_logical_io

user.node-exporter_disk_logical_io

user.nscd_disk_logical_io

user.opendkim_disk_logical_io

user.postgres_disk_logical_io

user.prometheus_disk_logical_io

user.prosody_disk_logical_io

user.redis-rspamd_disk_logical_io

user.revive_disk_logical_io

user.root_disk_logical_io

user.rspamd_disk_logical_io

user.searx_disk_logical_io

user.snappymail_disk_logical_io

user.snapserver_disk_logical_io

user.sshd_disk_logical_io

user.systemd-network_disk_logical_io

user.systemd-oom_disk_logical_io

user.systemd-resolve_disk_logical_io

user.turnserver_disk_logical_io

user.uwsgi_disk_logical_io



PROCESSES


user.acme_processes

user.asterisk_processes

user.bepasty_processes

user.blocky_processes

user.chrony_processes

user.galene_processes

user.grafana_processes

user.iperf3_processes

user.jicofo_processes

user.jitsi-videobridge_processes

user.knot-resolver_processes

user.messagebus_processes

user.mopidy_processes

user.mysql_processes

user.netdata_processes

user.nginx_processes

user.nobody_processes

user.node-exporter_processes

user.nscd_processes

user.opendkim_processes

user.postgres_processes

user.prometheus_processes

user.prosody_processes

user.redis-rspamd_processes

user.revive_processes

user.root_processes

user.rspamd_processes

user.searx_processes

user.snappymail_processes

user.snapserver_processes

user.sshd_processes

user.systemd-network_processes

user.systemd-oom_processes

user.systemd-resolve_processes

user.turnserver_processes

user.uwsgi_processes

user.acme_threads

user.asterisk_threads

user.bepasty_threads

user.blocky_threads

user.chrony_threads

user.galene_threads

user.grafana_threads

user.iperf3_threads

user.jicofo_threads

user.jitsi-videobridge_threads

user.knot-resolver_threads

user.messagebus_threads

user.mopidy_threads

user.mysql_threads

user.netdata_threads

user.nginx_threads

user.nobody_threads

user.node-exporter_threads

user.nscd_threads

user.opendkim_threads

user.postgres_threads

user.prometheus_threads

user.prosody_threads

user.redis-rspamd_threads

user.revive_threads

user.root_threads

user.rspamd_threads

user.searx_threads

user.snappymail_threads

user.snapserver_threads

user.sshd_threads

user.systemd-network_threads

user.systemd-oom_threads

user.systemd-resolve_threads

user.turnserver_threads

user.uwsgi_threads



FDS


user.acme_fds_open_limit

user.asterisk_fds_open_limit

user.bepasty_fds_open_limit

user.blocky_fds_open_limit

user.chrony_fds_open_limit

user.galene_fds_open_limit

user.grafana_fds_open_limit

user.iperf3_fds_open_limit

user.jicofo_fds_open_limit

user.jitsi-videobridge_fds_open_limit

user.knot-resolver_fds_open_limit

user.messagebus_fds_open_limit

user.mopidy_fds_open_limit

user.mysql_fds_open_limit

user.netdata_fds_open_limit

user.nginx_fds_open_limit

user.nobody_fds_open_limit

user.node-exporter_fds_open_limit

user.nscd_fds_open_limit

user.opendkim_fds_open_limit

user.postgres_fds_open_limit

user.prometheus_fds_open_limit

user.prosody_fds_open_limit

user.redis-rspamd_fds_open_limit

user.revive_fds_open_limit

user.root_fds_open_limit

user.rspamd_fds_open_limit

user.searx_fds_open_limit

user.snappymail_fds_open_limit

user.snapserver_fds_open_limit

user.sshd_fds_open_limit

user.systemd-network_fds_open_limit

user.systemd-oom_fds_open_limit

user.systemd-resolve_fds_open_limit

user.turnserver_fds_open_limit

user.uwsgi_fds_open_limit

user.acme_fds_open

user.asterisk_fds_open

user.bepasty_fds_open

user.blocky_fds_open

user.chrony_fds_open

user.galene_fds_open

user.grafana_fds_open

user.iperf3_fds_open

user.jicofo_fds_open

user.jitsi-videobridge_fds_open

user.knot-resolver_fds_open

user.messagebus_fds_open

user.mopidy_fds_open

user.mysql_fds_open

user.netdata_fds_open

user.nginx_fds_open

user.nobody_fds_open

user.node-exporter_fds_open

user.nscd_fds_open

user.opendkim_fds_open

user.postgres_fds_open

user.prometheus_fds_open

user.prosody_fds_open

user.redis-rspamd_fds_open

user.revive_fds_open

user.root_fds_open

user.rspamd_fds_open

user.searx_fds_open

user.snappymail_fds_open

user.snapserver_fds_open

user.sshd_fds_open

user.systemd-network_fds_open

user.systemd-oom_fds_open

user.systemd-resolve_fds_open

user.turnserver_fds_open

user.uwsgi_fds_open



UPTIME


user.acme_uptime

user.asterisk_uptime

user.bepasty_uptime

user.blocky_uptime

user.chrony_uptime

user.galene_uptime

user.grafana_uptime

user.iperf3_uptime

user.jicofo_uptime

user.jitsi-videobridge_uptime

user.knot-resolver_uptime

user.messagebus_uptime

user.mopidy_uptime

user.mysql_uptime

user.netdata_uptime

user.nginx_uptime

user.nobody_uptime

user.node-exporter_uptime

user.nscd_uptime

user.opendkim_uptime

user.postgres_uptime

user.prometheus_uptime

user.prosody_uptime

user.redis-rspamd_uptime

user.revive_uptime

user.root_uptime

user.rspamd_uptime

user.searx_uptime

user.snappymail_uptime

user.snapserver_uptime

user.sshd_uptime

user.systemd-network_uptime

user.systemd-oom_uptime

user.systemd-resolve_uptime

user.turnserver_uptime

user.uwsgi_uptime


--------------------------------------------------------------------------------


USERGROUP


CPU


usergroup.asterisk_cpu_utilization

usergroup.bepasty_cpu_utilization

usergroup.blocky_cpu_utilization

usergroup.chrony_cpu_utilization

usergroup.galene_cpu_utilization

usergroup.grafana_cpu_utilization

usergroup.iperf3_cpu_utilization

usergroup.jitsi-meet_cpu_utilization

usergroup.knot-resolver_cpu_utilization

usergroup.messagebus_cpu_utilization

usergroup.mopidy_cpu_utilization

usergroup.mysql_cpu_utilization

usergroup.netdata_cpu_utilization

usergroup.nginx_cpu_utilization

usergroup.node-exporter_cpu_utilization

usergroup.nogroup_cpu_utilization

usergroup.nscd_cpu_utilization

usergroup.opendkim_cpu_utilization

usergroup.postgres_cpu_utilization

usergroup.prometheus_cpu_utilization

usergroup.prosody_cpu_utilization

usergroup.redis-rspamd_cpu_utilization

usergroup.root_cpu_utilization

usergroup.rspamd_cpu_utilization

usergroup.searx_cpu_utilization

usergroup.snappymail_cpu_utilization

usergroup.snapserver_cpu_utilization

usergroup.sshd_cpu_utilization

usergroup.systemd-network_cpu_utilization

usergroup.systemd-oom_cpu_utilization

usergroup.systemd-resolve_cpu_utilization

usergroup.turnserver_cpu_utilization

usergroup.uwsgi_cpu_utilization

usergroup.asterisk_cpu_context_switches

usergroup.bepasty_cpu_context_switches

usergroup.blocky_cpu_context_switches

usergroup.chrony_cpu_context_switches

usergroup.galene_cpu_context_switches

usergroup.grafana_cpu_context_switches

usergroup.iperf3_cpu_context_switches

usergroup.jitsi-meet_cpu_context_switches

usergroup.knot-resolver_cpu_context_switches

usergroup.messagebus_cpu_context_switches

usergroup.mopidy_cpu_context_switches

usergroup.mysql_cpu_context_switches

usergroup.netdata_cpu_context_switches

usergroup.nginx_cpu_context_switches

usergroup.node-exporter_cpu_context_switches

usergroup.nogroup_cpu_context_switches

usergroup.nscd_cpu_context_switches

usergroup.opendkim_cpu_context_switches

usergroup.postgres_cpu_context_switches

usergroup.prometheus_cpu_context_switches

usergroup.prosody_cpu_context_switches

usergroup.redis-rspamd_cpu_context_switches

usergroup.root_cpu_context_switches

usergroup.rspamd_cpu_context_switches

usergroup.searx_cpu_context_switches

usergroup.snappymail_cpu_context_switches

usergroup.snapserver_cpu_context_switches

usergroup.sshd_cpu_context_switches

usergroup.systemd-network_cpu_context_switches

usergroup.systemd-oom_cpu_context_switches

usergroup.systemd-resolve_cpu_context_switches

usergroup.turnserver_cpu_context_switches

usergroup.uwsgi_cpu_context_switches



MEM


usergroup.asterisk_mem_private_usage

usergroup.bepasty_mem_private_usage

usergroup.blocky_mem_private_usage

usergroup.chrony_mem_private_usage

usergroup.galene_mem_private_usage

usergroup.grafana_mem_private_usage

usergroup.iperf3_mem_private_usage

usergroup.jitsi-meet_mem_private_usage

usergroup.knot-resolver_mem_private_usage

usergroup.messagebus_mem_private_usage

usergroup.mopidy_mem_private_usage

usergroup.mysql_mem_private_usage

usergroup.netdata_mem_private_usage

usergroup.nginx_mem_private_usage

usergroup.node-exporter_mem_private_usage

usergroup.nogroup_mem_private_usage

usergroup.nscd_mem_private_usage

usergroup.opendkim_mem_private_usage

usergroup.postgres_mem_private_usage

usergroup.prometheus_mem_private_usage

usergroup.prosody_mem_private_usage

usergroup.redis-rspamd_mem_private_usage

usergroup.root_mem_private_usage

usergroup.rspamd_mem_private_usage

usergroup.searx_mem_private_usage

usergroup.snappymail_mem_private_usage

usergroup.snapserver_mem_private_usage

usergroup.sshd_mem_private_usage

usergroup.systemd-network_mem_private_usage

usergroup.systemd-oom_mem_private_usage

usergroup.systemd-resolve_mem_private_usage

usergroup.turnserver_mem_private_usage

usergroup.uwsgi_mem_private_usage

usergroup.asterisk_mem_usage

usergroup.bepasty_mem_usage

usergroup.blocky_mem_usage

usergroup.chrony_mem_usage

usergroup.galene_mem_usage

usergroup.grafana_mem_usage

usergroup.iperf3_mem_usage

usergroup.jitsi-meet_mem_usage

usergroup.knot-resolver_mem_usage

usergroup.messagebus_mem_usage

usergroup.mopidy_mem_usage

usergroup.mysql_mem_usage

usergroup.netdata_mem_usage

usergroup.nginx_mem_usage

usergroup.node-exporter_mem_usage

usergroup.nogroup_mem_usage

usergroup.nscd_mem_usage

usergroup.opendkim_mem_usage

usergroup.postgres_mem_usage

usergroup.prometheus_mem_usage

usergroup.prosody_mem_usage

usergroup.redis-rspamd_mem_usage

usergroup.root_mem_usage

usergroup.rspamd_mem_usage

usergroup.searx_mem_usage

usergroup.snappymail_mem_usage

usergroup.snapserver_mem_usage

usergroup.sshd_mem_usage

usergroup.systemd-network_mem_usage

usergroup.systemd-oom_mem_usage

usergroup.systemd-resolve_mem_usage

usergroup.turnserver_mem_usage

usergroup.uwsgi_mem_usage

usergroup.asterisk_mem_page_faults

usergroup.bepasty_mem_page_faults

usergroup.blocky_mem_page_faults

usergroup.chrony_mem_page_faults

usergroup.galene_mem_page_faults

usergroup.grafana_mem_page_faults

usergroup.iperf3_mem_page_faults

usergroup.jitsi-meet_mem_page_faults

usergroup.knot-resolver_mem_page_faults

usergroup.messagebus_mem_page_faults

usergroup.mopidy_mem_page_faults

usergroup.mysql_mem_page_faults

usergroup.netdata_mem_page_faults

usergroup.nginx_mem_page_faults

usergroup.node-exporter_mem_page_faults

usergroup.nogroup_mem_page_faults

usergroup.nscd_mem_page_faults

usergroup.opendkim_mem_page_faults

usergroup.postgres_mem_page_faults

usergroup.prometheus_mem_page_faults

usergroup.prosody_mem_page_faults

usergroup.redis-rspamd_mem_page_faults

usergroup.root_mem_page_faults

usergroup.rspamd_mem_page_faults

usergroup.searx_mem_page_faults

usergroup.snappymail_mem_page_faults

usergroup.snapserver_mem_page_faults

usergroup.sshd_mem_page_faults

usergroup.systemd-network_mem_page_faults

usergroup.systemd-oom_mem_page_faults

usergroup.systemd-resolve_mem_page_faults

usergroup.turnserver_mem_page_faults

usergroup.uwsgi_mem_page_faults

usergroup.asterisk_swap_usage

usergroup.asterisk_vmem_usage

usergroup.bepasty_swap_usage

usergroup.bepasty_vmem_usage

usergroup.blocky_swap_usage

usergroup.blocky_vmem_usage

usergroup.chrony_swap_usage

usergroup.chrony_vmem_usage

usergroup.galene_swap_usage

usergroup.galene_vmem_usage

usergroup.grafana_swap_usage

usergroup.grafana_vmem_usage

usergroup.iperf3_swap_usage

usergroup.iperf3_vmem_usage

usergroup.jitsi-meet_swap_usage

usergroup.jitsi-meet_vmem_usage

usergroup.knot-resolver_swap_usage

usergroup.knot-resolver_vmem_usage

usergroup.messagebus_swap_usage

usergroup.messagebus_vmem_usage

usergroup.mopidy_swap_usage

usergroup.mopidy_vmem_usage

usergroup.mysql_swap_usage

usergroup.mysql_vmem_usage

usergroup.netdata_swap_usage

usergroup.netdata_vmem_usage

usergroup.nginx_swap_usage

usergroup.nginx_vmem_usage

usergroup.node-exporter_swap_usage

usergroup.node-exporter_vmem_usage

usergroup.nogroup_swap_usage

usergroup.nogroup_vmem_usage

usergroup.nscd_swap_usage

usergroup.nscd_vmem_usage

usergroup.opendkim_swap_usage

usergroup.opendkim_vmem_usage

usergroup.postgres_swap_usage

usergroup.postgres_vmem_usage

usergroup.prometheus_swap_usage

usergroup.prometheus_vmem_usage

usergroup.prosody_swap_usage

usergroup.prosody_vmem_usage

usergroup.redis-rspamd_swap_usage

usergroup.redis-rspamd_vmem_usage

usergroup.root_swap_usage

usergroup.root_vmem_usage

usergroup.rspamd_swap_usage

usergroup.rspamd_vmem_usage

usergroup.searx_swap_usage

usergroup.searx_vmem_usage

usergroup.snappymail_swap_usage

usergroup.snappymail_vmem_usage

usergroup.snapserver_swap_usage

usergroup.snapserver_vmem_usage

usergroup.sshd_swap_usage

usergroup.sshd_vmem_usage

usergroup.systemd-network_swap_usage

usergroup.systemd-network_vmem_usage

usergroup.systemd-oom_swap_usage

usergroup.systemd-oom_vmem_usage

usergroup.systemd-resolve_swap_usage

usergroup.systemd-resolve_vmem_usage

usergroup.turnserver_swap_usage

usergroup.turnserver_vmem_usage

usergroup.uwsgi_swap_usage

usergroup.uwsgi_vmem_usage



DISK


usergroup.asterisk_disk_physical_io

usergroup.bepasty_disk_physical_io

usergroup.blocky_disk_physical_io

usergroup.chrony_disk_physical_io

usergroup.galene_disk_physical_io

usergroup.grafana_disk_physical_io

usergroup.iperf3_disk_physical_io

usergroup.jitsi-meet_disk_physical_io

usergroup.knot-resolver_disk_physical_io

usergroup.messagebus_disk_physical_io

usergroup.mopidy_disk_physical_io

usergroup.mysql_disk_physical_io

usergroup.netdata_disk_physical_io

usergroup.nginx_disk_physical_io

usergroup.node-exporter_disk_physical_io

usergroup.nogroup_disk_physical_io

usergroup.nscd_disk_physical_io

usergroup.opendkim_disk_physical_io

usergroup.postgres_disk_physical_io

usergroup.prometheus_disk_physical_io

usergroup.prosody_disk_physical_io

usergroup.redis-rspamd_disk_physical_io

usergroup.root_disk_physical_io

usergroup.rspamd_disk_physical_io

usergroup.searx_disk_physical_io

usergroup.snappymail_disk_physical_io

usergroup.snapserver_disk_physical_io

usergroup.sshd_disk_physical_io

usergroup.systemd-network_disk_physical_io

usergroup.systemd-oom_disk_physical_io

usergroup.systemd-resolve_disk_physical_io

usergroup.turnserver_disk_physical_io

usergroup.uwsgi_disk_physical_io

usergroup.asterisk_disk_logical_io

usergroup.bepasty_disk_logical_io

usergroup.blocky_disk_logical_io

usergroup.chrony_disk_logical_io

usergroup.galene_disk_logical_io

usergroup.grafana_disk_logical_io

usergroup.iperf3_disk_logical_io

usergroup.jitsi-meet_disk_logical_io

usergroup.knot-resolver_disk_logical_io

usergroup.messagebus_disk_logical_io

usergroup.mopidy_disk_logical_io

usergroup.mysql_disk_logical_io

usergroup.netdata_disk_logical_io

usergroup.nginx_disk_logical_io

usergroup.node-exporter_disk_logical_io

usergroup.nogroup_disk_logical_io

usergroup.nscd_disk_logical_io

usergroup.opendkim_disk_logical_io

usergroup.postgres_disk_logical_io

usergroup.prometheus_disk_logical_io

usergroup.prosody_disk_logical_io

usergroup.redis-rspamd_disk_logical_io

usergroup.root_disk_logical_io

usergroup.rspamd_disk_logical_io

usergroup.searx_disk_logical_io

usergroup.snappymail_disk_logical_io

usergroup.snapserver_disk_logical_io

usergroup.sshd_disk_logical_io

usergroup.systemd-network_disk_logical_io

usergroup.systemd-oom_disk_logical_io

usergroup.systemd-resolve_disk_logical_io

usergroup.turnserver_disk_logical_io

usergroup.uwsgi_disk_logical_io



PROCESSES


usergroup.asterisk_processes

usergroup.bepasty_processes

usergroup.blocky_processes

usergroup.chrony_processes

usergroup.galene_processes

usergroup.grafana_processes

usergroup.iperf3_processes

usergroup.jitsi-meet_processes

usergroup.knot-resolver_processes

usergroup.messagebus_processes

usergroup.mopidy_processes

usergroup.mysql_processes

usergroup.netdata_processes

usergroup.nginx_processes

usergroup.node-exporter_processes

usergroup.nogroup_processes

usergroup.nscd_processes

usergroup.opendkim_processes

usergroup.postgres_processes

usergroup.prometheus_processes

usergroup.prosody_processes

usergroup.redis-rspamd_processes

usergroup.root_processes

usergroup.rspamd_processes

usergroup.searx_processes

usergroup.snappymail_processes

usergroup.snapserver_processes

usergroup.sshd_processes

usergroup.systemd-network_processes

usergroup.systemd-oom_processes

usergroup.systemd-resolve_processes

usergroup.turnserver_processes

usergroup.uwsgi_processes

usergroup.asterisk_threads

usergroup.bepasty_threads

usergroup.blocky_threads

usergroup.chrony_threads

usergroup.galene_threads

usergroup.grafana_threads

usergroup.iperf3_threads

usergroup.jitsi-meet_threads

usergroup.knot-resolver_threads

usergroup.messagebus_threads

usergroup.mopidy_threads

usergroup.mysql_threads

usergroup.netdata_threads

usergroup.nginx_threads

usergroup.node-exporter_threads

usergroup.nogroup_threads

usergroup.nscd_threads

usergroup.opendkim_threads

usergroup.postgres_threads

usergroup.prometheus_threads

usergroup.prosody_threads

usergroup.redis-rspamd_threads

usergroup.root_threads

usergroup.rspamd_threads

usergroup.searx_threads

usergroup.snappymail_threads

usergroup.snapserver_threads

usergroup.sshd_threads

usergroup.systemd-network_threads

usergroup.systemd-oom_threads

usergroup.systemd-resolve_threads

usergroup.turnserver_threads

usergroup.uwsgi_threads



FDS


usergroup.asterisk_fds_open_limit

usergroup.bepasty_fds_open_limit

usergroup.blocky_fds_open_limit

usergroup.chrony_fds_open_limit

usergroup.galene_fds_open_limit

usergroup.grafana_fds_open_limit

usergroup.iperf3_fds_open_limit

usergroup.jitsi-meet_fds_open_limit

usergroup.knot-resolver_fds_open_limit

usergroup.messagebus_fds_open_limit

usergroup.mopidy_fds_open_limit

usergroup.mysql_fds_open_limit

usergroup.netdata_fds_open_limit

usergroup.nginx_fds_open_limit

usergroup.node-exporter_fds_open_limit

usergroup.nogroup_fds_open_limit

usergroup.nscd_fds_open_limit

usergroup.opendkim_fds_open_limit

usergroup.postgres_fds_open_limit

usergroup.prometheus_fds_open_limit

usergroup.prosody_fds_open_limit

usergroup.redis-rspamd_fds_open_limit

usergroup.root_fds_open_limit

usergroup.rspamd_fds_open_limit

usergroup.searx_fds_open_limit

usergroup.snappymail_fds_open_limit

usergroup.snapserver_fds_open_limit

usergroup.sshd_fds_open_limit

usergroup.systemd-network_fds_open_limit

usergroup.systemd-oom_fds_open_limit

usergroup.systemd-resolve_fds_open_limit

usergroup.turnserver_fds_open_limit

usergroup.uwsgi_fds_open_limit

usergroup.asterisk_fds_open

usergroup.bepasty_fds_open

usergroup.blocky_fds_open

usergroup.chrony_fds_open

usergroup.galene_fds_open

usergroup.grafana_fds_open

usergroup.iperf3_fds_open

usergroup.jitsi-meet_fds_open

usergroup.knot-resolver_fds_open

usergroup.messagebus_fds_open

usergroup.mopidy_fds_open

usergroup.mysql_fds_open

usergroup.netdata_fds_open

usergroup.nginx_fds_open

usergroup.node-exporter_fds_open

usergroup.nogroup_fds_open

usergroup.nscd_fds_open

usergroup.opendkim_fds_open

usergroup.postgres_fds_open

usergroup.prometheus_fds_open

usergroup.prosody_fds_open

usergroup.redis-rspamd_fds_open

usergroup.root_fds_open

usergroup.rspamd_fds_open

usergroup.searx_fds_open

usergroup.snappymail_fds_open

usergroup.snapserver_fds_open

usergroup.sshd_fds_open

usergroup.systemd-network_fds_open

usergroup.systemd-oom_fds_open

usergroup.systemd-resolve_fds_open

usergroup.turnserver_fds_open

usergroup.uwsgi_fds_open



UPTIME


usergroup.asterisk_uptime

usergroup.bepasty_uptime

usergroup.blocky_uptime

usergroup.chrony_uptime

usergroup.galene_uptime

usergroup.grafana_uptime

usergroup.iperf3_uptime

usergroup.jitsi-meet_uptime

usergroup.knot-resolver_uptime

usergroup.messagebus_uptime

usergroup.mopidy_uptime

usergroup.mysql_uptime

usergroup.netdata_uptime

usergroup.nginx_uptime

usergroup.node-exporter_uptime

usergroup.nogroup_uptime

usergroup.nscd_uptime

usergroup.opendkim_uptime

usergroup.postgres_uptime

usergroup.prometheus_uptime

usergroup.prosody_uptime

usergroup.redis-rspamd_uptime

usergroup.root_uptime

usergroup.rspamd_uptime

usergroup.searx_uptime

usergroup.snappymail_uptime

usergroup.snapserver_uptime

usergroup.sshd_uptime

usergroup.systemd-network_uptime

usergroup.systemd-oom_uptime

usergroup.systemd-resolve_uptime

usergroup.turnserver_uptime

usergroup.uwsgi_uptime


--------------------------------------------------------------------------------


ANOMALY DETECTION

Charts relating to anomaly detection. Increased anomalous dimensions or a
higher-than-usual anomaly_rate could be signs of abnormal behaviour. Read our
anomaly detection guide for more details.



DIMENSIONS


Total count of dimensions considered anomalous or normal.
anomaly_detection.dimensions_on_0d2519b6-b986-11ee-a8eb-ca9ea63e7530



ANOMALY RATE


Percentage of anomalous dimensions.
anomaly_detection.anomaly_rate_on_0d2519b6-b986-11ee-a8eb-ca9ea63e7530

anomaly_detection.type_anomaly_rate_on_0d2519b6-b986-11ee-a8eb-ca9ea63e7530
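The anomaly rate charted above is simply the share of dimensions currently flagged anomalous out of all dimensions considered. A minimal sketch of that calculation (the function name and sample counts are illustrative, not part of the dashboard):

```python
def anomaly_rate(anomalous: int, normal: int) -> float:
    """Percentage of dimensions flagged anomalous out of all
    dimensions considered (anomalous + normal)."""
    total = anomalous + normal
    return 100.0 * anomalous / total if total else 0.0

# 3 anomalous dimensions out of 100 total yields a 3% anomaly rate.
print(anomaly_rate(3, 97))  # → 3.0
```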



ANOMALY DETECTION


Flags (0 or 1) to show when an anomaly event has been triggered by the detector.
anomaly_detection.anomaly_detection_on_0d2519b6-b986-11ee-a8eb-ca9ea63e7530

anomaly_detection.ml_running_on_0d2519b6-b986-11ee-a8eb-ca9ea63e7530


--------------------------------------------------------------------------------


 7E1882DC96FA

Container resource utilization metrics. Netdata reads this information from
cgroups (short for control groups), a Linux kernel feature that limits and
accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a
collection of processes. cgroups, together with namespaces (which provide
isolation between processes), are what we usually call containers.
cgroup_7e1882dc96fa.cpu_limit

cgroup_7e1882dc96fa.mem_usage_limit



CPU


Total CPU utilization within the configured or system-wide (if not set) limits.
When the CPU utilization of a cgroup exceeds the limit for the configured
period, the tasks belonging to its hierarchy are throttled and not allowed to
run again until the next period.
cgroup_7e1882dc96fa.cpu_limit

Total CPU utilization within the system-wide CPU resources (all cores). The
amount of time spent by tasks of the cgroup in user and kernel modes.
cgroup_7e1882dc96fa.cpu

The percentage of runnable periods when tasks in a cgroup have been throttled.
The tasks have not been allowed to run because they have exhausted all of the
available time as specified by their CPU quota.
cgroup_7e1882dc96fa.throttled

The total time duration for which tasks in a cgroup have been throttled. When an
application has used its allotted CPU quota for a given period, it gets
throttled until the next period.
cgroup_7e1882dc96fa.throttled_duration
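The throttling percentage above comes from the cumulative counters the kernel exposes in the cgroup's `cpu.stat` file (`nr_periods`, `nr_throttled`, `throttled_usec`). A minimal sketch of the ratio, assuming that cgroup v2 file format (the sample values are made up):

```python
def throttled_ratio(cpu_stat: str) -> float:
    """Percentage of enforcement periods in which the cgroup was
    throttled, from the counters in a cgroup v2 cpu.stat file."""
    stats = dict(line.split() for line in cpu_stat.strip().splitlines())
    periods = int(stats.get("nr_periods", 0))
    if periods == 0:
        return 0.0
    return 100.0 * int(stats["nr_throttled"]) / periods

sample = """usage_usec 250000
nr_periods 400
nr_throttled 25
throttled_usec 120000"""
print(throttled_ratio(sample))  # → 6.25
```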

CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
cgroup_7e1882dc96fa.cpu_some_pressure

The amount of time some processes have been waiting for CPU time.
cgroup_7e1882dc96fa.cpu_some_pressure_stall_time
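The pressure charts in this section (CPU, memory, and I/O alike) are read from PSI files, whose lines follow the form `some avg10=... avg60=... avg300=... total=...`. A small parser sketch under that assumption (the function name is illustrative):

```python
def parse_psi_line(line: str) -> dict:
    """Parse one line of a PSI file (e.g. cpu.pressure) into a dict:
    the 'some'/'full' kind, the avg10/avg60/avg300 trend ratios in
    percent, and the cumulative stall time in microseconds."""
    kind, *fields = line.split()
    out = {"kind": kind}
    for field in fields:
        key, value = field.split("=")
        out[key] = int(value) if key == "total" else float(value)
    return out

line = "some avg10=1.25 avg60=0.80 avg300=0.10 total=456789"
print(parse_psi_line(line))
```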



MEM


RAM utilization within the configured or system-wide (if not set) limits. When
the RAM utilization of a cgroup exceeds the limit, the OOM killer starts
killing the tasks belonging to the cgroup.
cgroup_7e1882dc96fa.mem_utilization

RAM usage within the configured or system-wide (if not set) limits. When the
RAM usage of a cgroup exceeds the limit, the OOM killer starts killing the
tasks belonging to the cgroup.
cgroup_7e1882dc96fa.mem_usage_limit

The amount of used RAM and swap memory.
cgroup_7e1882dc96fa.mem_usage

Memory usage statistics. The individual metrics are described in the memory.stat
section for cgroup-v1 and cgroup-v2.
cgroup_7e1882dc96fa.mem

Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
cgroup_7e1882dc96fa.writeback


Memory page fault statistics.

Pgfault - all page faults. Swap - major page faults.

cgroup_7e1882dc96fa.pgfaults

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_7e1882dc96fa.mem_some_pressure

The amount of time some processes have been waiting due to memory congestion.
cgroup_7e1882dc96fa.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which
all non-idle tasks are stalled on memory simultaneously. In this state actual
CPU cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_7e1882dc96fa.mem_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
cgroup_7e1882dc96fa.memory_full_pressure_stall_time



DISK


The amount of data transferred to and from specific devices as seen by the CFQ
scheduler. It is not updated when the CFQ scheduler is operating on a request
queue.
cgroup_7e1882dc96fa.io

The number of I/O operations performed on specific devices as seen by the CFQ
scheduler.
cgroup_7e1882dc96fa.serviced_ops

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
cgroup_7e1882dc96fa.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
cgroup_7e1882dc96fa.io_some_pressure_stall_time

I/O Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on I/O simultaneously. In this state actual CPU
cycles are going to waste, and a workload that spends extended time in this
state is considered to be thrashing. This has a severe impact on performance.
The ratios are tracked as recent trends over 10-, 60-, and 300-second windows.
cgroup_7e1882dc96fa.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
cgroup_7e1882dc96fa.io_full_pressure_stall_time



PIDS


cgroup_7e1882dc96fa.pids_current


--------------------------------------------------------------------------------


LOGIND

Keeps track of user logins and sessions by querying the systemd-logind API.



SESSIONS


Local and remote sessions.
logind.sessions


Sessions of each session type.

Graphical - sessions are running under one of X11, Mir, or Wayland. Console -
sessions are usually regular text mode local logins, but depending on how the
system is configured may have an associated GUI. Other - sessions are those that
do not fall into the above categories (such as sessions for cron jobs or systemd
timer units).

logind.sessions_type


Sessions in each session state.

Online - logged in and running in the background. Closing - nominally logged
out, but some processes belonging to it are still around. Active - logged in and
running in the foreground.

logind.sessions_state



USERS



Users in each user state.

Offline - users are not logged in. Closing - users are in the process of logging
out without lingering. Online - users are logged in, but have no active
sessions. Lingering - users are not logged in, but have one or more services
still running. Active - users are logged in, and have at least one active
session.

logind.users_state


--------------------------------------------------------------------------------


MYSQL LOCAL

Performance metrics for mysql, the open-source relational database management
system (RDBMS).



BANDWIDTH


The amount of data sent to mysql clients (out) and received from mysql clients
(in).
mysql_local.net



QUERIES


The number of statements executed by the server.
 * queries counts the statements executed within stored SQL programs.
 * questions counts the statements sent to the mysql server by mysql clients.
 * slow queries counts the number of statements that took more than
   long_query_time seconds to be executed. For more information about slow
   queries check the mysql slow query log.

mysql_local.queries

mysql_local.queries_type
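Queries, questions, and slow queries are monotonically increasing server
counters, so a charting agent derives the per-second rates shown here from two
successive samples. A sketch of that derivation (the sample values are
hypothetical):

```python
# Turn two samples of a monotonic counter (e.g. MySQL's Questions status
# variable) into a per-second rate, as a metrics collector would.
def rate_per_second(prev_value, prev_ts, curr_value, curr_ts):
    dt = curr_ts - prev_ts
    if dt <= 0 or curr_value < prev_value:
        return None  # clock went backwards or the counter was reset
    return (curr_value - prev_value) / dt

# Two samples taken 10 seconds apart:
print(rate_per_second(1_000_000, 100.0, 1_000_500, 110.0))  # 50.0 queries/s
```

Handling the counter-reset case matters in practice: a server restart zeroes
these counters, and a naive subtraction would chart a huge negative spike.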



HANDLERS


Usage of the internal handlers of mysql. This chart provides very good insight
into what the mysql server is actually doing. (If the chart is not showing all
of these dimensions, it is because they are zero; set Which dimensions to show?
to All in the dashboard settings to render even the zero values.)
 * commit, the number of internal COMMIT statements.
 * delete, the number of times that rows have been deleted from tables.
 * prepare, a counter for the prepare phase of two-phase commit operations.
 * read first, the number of times the first entry in an index was read. A high
   value suggests that the server is doing a lot of full index scans; e.g.
   SELECT col1 FROM foo, with col1 indexed.
 * read key, the number of requests to read a row based on a key. If this value
   is high, it is a good indication that your tables are properly indexed for
   your queries.
 * read next, the number of requests to read the next row in key order. This
   value is incremented if you are querying an index column with a range
   constraint or if you are doing an index scan.
 * read prev, the number of requests to read the previous row in key order. This
   read method is mainly used to optimize ORDER BY ... DESC.
 * read rnd, the number of requests to read a row based on a fixed position. A
   high value indicates you are doing a lot of queries that require sorting of
   the result. You probably have a lot of queries that require MySQL to scan
   entire tables or you have joins that do not use keys properly.
 * read rnd next, the number of requests to read the next row in the data file.
   This value is high if you are doing a lot of table scans. Generally this
   suggests that your tables are not properly indexed or that your queries are
   not written to take advantage of the indexes you have.
 * rollback, the number of requests for a storage engine to perform a rollback
   operation.
 * savepoint, the number of requests for a storage engine to place a savepoint.
 * savepoint rollback, the number of requests for a storage engine to roll back
   to a savepoint.
 * update, the number of requests to update a row in a table.
 * write, the number of requests to insert a row in a table.

mysql_local.handlers
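A common way to act on these counters is to compare sequential row reads
(read rnd next) against keyed reads (read key): a large ratio points at the
full-table-scan pattern described above. A rough sketch of that check; the
threshold is illustrative, not a MySQL-documented value:

```python
def full_scan_indicator(handlers, threshold=100):
    """Flag a workload that reads far more rows sequentially
    (Handler_read_rnd_next) than via indexes (Handler_read_key).
    The default threshold of 100 is purely illustrative."""
    keyed = handlers.get("read_key", 0)
    sequential = handlers.get("read_rnd_next", 0)
    if keyed == 0:
        return sequential > 0          # only sequential reads happened
    return sequential / keyed > threshold

print(full_scan_indicator({"read_key": 10, "read_rnd_next": 5000}))  # True
```

In a real setup you would apply this to rates (deltas over a window), not to
the raw lifetime counters, so that old traffic does not mask a new problem.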



OPEN CACHE


mysql_local.table_open_cache_overflows



LOCKS


MySQL table locks counters:
 * immediate, the number of times that a request for a table lock could be
   granted immediately.
 * waited, the number of times that a request for a table lock could not be
   granted immediately and a wait was needed. If this is high and you have
   performance problems, you should first optimize your queries, and then either
   split your table or tables or use replication.

mysql_local.table_locks
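The health signal in this chart is the fraction of lock requests that had to
wait; a minimal sketch of that ratio, using hypothetical counter values:

```python
def lock_wait_ratio(immediate, waited):
    """Fraction of table-lock requests that could not be granted
    immediately. Values near 0 are healthy; a persistently high ratio
    suggests optimizing queries, splitting tables, or using replication,
    per the description above."""
    total = immediate + waited
    return waited / total if total else 0.0

print(lock_wait_ratio(9900, 100))  # 0.01
```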



ISSUES


mysql_local.join_issues

mysql_local.sort_issues



TEMPORARIES


mysql_local.tmp



CONNECTIONS


mysql_local.connections

mysql_local.connections_active

mysql_local.connection_errors



THREADS


mysql_local.threads

mysql_local.threads_creation_rate

mysql_local.thread_cache_misses



INNODB


mysql_local.innodb_io

mysql_local.innodb_io_ops

mysql_local.innodb_io_pending_ops

mysql_local.innodb_log

mysql_local.innodb_cur_row_lock

mysql_local.innodb_rows

mysql_local.innodb_buffer_pool_pages

mysql_local.innodb_buffer_pool_flush_pages_requests

mysql_local.innodb_buffer_pool_bytes

mysql_local.innodb_buffer_pool_read_ahead

mysql_local.innodb_buffer_pool_read_ahead_rnd

mysql_local.innodb_buffer_pool_ops

A deadlock happens when two or more transactions mutually hold and request
locks, creating a cycle of dependencies. See the MySQL documentation for more
information about how to minimize and handle deadlocks.
mysql_local.innodb_deadlocks



MYISAM


mysql_local.key_blocks

mysql_local.key_requests

mysql_local.key_disk_ops



FILES


mysql_local.files

mysql_local.files_rate



OPEN TABLES


mysql_local.opened_tables

mysql_local.open_tables



PROCESS LIST


mysql_local.process_list_fetch_duration

mysql_local.process_list_queries_count

mysql_local.process_list_longest_query_duration



QCACHE


mysql_local.qcache_ops

mysql_local.qcache

mysql_local.qcache_freemem

mysql_local.qcache_memblocks


--------------------------------------------------------------------------------


PROMETHEUS FAWKES LOCAL


ASTERISK BRIDGES


prometheus_fawkes_local.asterisk_bridges_count-eid=ca:9e:a6:3e:75:30



ASTERISK CALLS


prometheus_fawkes_local.asterisk_calls_count-eid=ca:9e:a6:3e:75:30

prometheus_fawkes_local.asterisk_calls_sum-eid=ca:9e:a6:3e:75:30



ASTERISK CHANNELS


prometheus_fawkes_local.asterisk_channels_count-eid=ca:9e:a6:3e:75:30



ASTERISK CORE


prometheus_fawkes_local.asterisk_core_last_reload_seconds-eid=ca:9e:a6:3e:75:30

prometheus_fawkes_local.asterisk_core_properties-build_date=2024-04-29_16:44:03_UTC-build_host=localhost-build_kernel=6.6.23-build_options=BUILD_NATIVE,_OPTIONAL_API-build_os=Linux-eid=ca:9e:a6:3e:75:30-version=20.5.2

prometheus_fawkes_local.asterisk_core_scrape_time_ms-eid=ca:9e:a6:3e:75:30

prometheus_fawkes_local.asterisk_core_uptime_seconds-eid=ca:9e:a6:3e:75:30



ASTERISK ENDPOINTS


prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/3305-resource=3305-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/3309-resource=3309-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/gigaset-resource=gigaset-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/grandstream1-resource=grandstream1-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/grandstream2-resource=grandstream2-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/hendrik-resource=hendrik-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/homeassistant-resource=homeassistant-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/sipgate1-trunk-resource=sipgate1-trunk-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/sipgate2-trunk-resource=sipgate2-trunk-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/snom1-resource=snom1-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/snom2-resource=snom2-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_channels_count-eid=ca:9e:a6:3e:75:30-id=PJSIP/steve-resource=steve-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_count-eid=ca:9e:a6:3e:75:30

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/3305-resource=3305-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/3309-resource=3309-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/gigaset-resource=gigaset-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/grandstream1-resource=grandstream1-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/grandstream2-resource=grandstream2-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/hendrik-resource=hendrik-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/homeassistant-resource=homeassistant-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/sipgate1-trunk-resource=sipgate1-trunk-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/sipgate2-trunk-resource=sipgate2-trunk-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/snom1-resource=snom1-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/snom2-resource=snom2-tech=PJSIP

prometheus_fawkes_local.asterisk_endpoints_state-eid=ca:9e:a6:3e:75:30-id=PJSIP/steve-resource=steve-tech=PJSIP



ASTERISK PJSIP


prometheus_fawkes_local.asterisk_pjsip_outbound_registration_status-channel_type=PJSIP-domain=sip:sip.sipgate.de-eid=ca:9e:a6:3e:75:30-username=sip:2077834e0@sip.sipgate.de

prometheus_fawkes_local.asterisk_pjsip_outbound_registration_status-channel_type=PJSIP-domain=sip:sip.sipgate.de-eid=ca:9e:a6:3e:75:30-username=sip:8723833e0@sip.sipgate.de


--------------------------------------------------------------------------------


SYSTEMD UNITS SERVICE-UNITS

systemd provides a dependency system between various entities called "units" of
11 different types. Units encapsulate various objects that are relevant for
system boot-up and maintenance. Units may be active (meaning started, bound,
plugged in, depending on the unit type), or inactive (meaning stopped, unbound,
unplugged), as well as in the process of being activated or deactivated, i.e.
between the two states (these states are called activating, deactivating). A
special failed state is available as well, which is very similar to inactive and
is entered when the service failed in some way (process returned error code on
exit, or crashed, an operation timed out, or after too many restarts). For
details, see systemd(1).
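The per-unit charts below expose one dimension per state. A sketch of how a
collector might fold a unit's reported state into those dimensions, using the
five state names from systemd(1); the one-hot encoding is an assumption about
the chart layout, not taken from Netdata's source:

```python
# The unit states described in systemd(1).
UNIT_STATES = ("active", "inactive", "activating", "deactivating", "failed")

def state_dimensions(reported_state):
    """Return a {state: 0/1} mapping with exactly one state set,
    mirroring a one-hot per-unit state chart."""
    if reported_state not in UNIT_STATES:
        raise ValueError(f"unknown unit state: {reported_state}")
    return {s: int(s == reported_state) for s in UNIT_STATES}

print(state_dimensions("failed"))
```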



SERVICE UNITS


systemdunits_service-units.unit_acme-ads.p3x.de_service_state

systemdunits_service-units.unit_acme-fixperms_service_state

systemdunits_service-units.unit_acme-galene.p3x.de_service_state

systemdunits_service-units.unit_acme-grafana.p3x.de_service_state

systemdunits_service-units.unit_acme-lockfiles_service_state

systemdunits_service-units.unit_acme-meet.p3x.de_service_state

systemdunits_service-units.unit_acme-meet.xd0.de_service_state

systemdunits_service-units.unit_acme-monitoring.p3x.de_service_state

systemdunits_service-units.unit_acme-p3x.de_service_state

systemdunits_service-units.unit_acme-paste.p3x.de_service_state

systemdunits_service-units.unit_acme-search.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-ads.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-ca_service_state

systemdunits_service-units.unit_acme-selfsigned-galene.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-grafana.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-meet.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-meet.xd0.de_service_state

systemdunits_service-units.unit_acme-selfsigned-monitoring.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-paste.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-search.p3x.de_service_state

systemdunits_service-units.unit_acme-selfsigned-turn.p3x.de_service_state

systemdunits_service-units.unit_acme-turn.p3x.de_service_state

systemdunits_service-units.unit_activate-virtual-mail-users_service_state

systemdunits_service-units.unit_asterisk_service_state

systemdunits_service-units.unit_audit_service_state

systemdunits_service-units.unit_bepasty-server-paste.p3x.de-gunicorn_service_state

systemdunits_service-units.unit_blocky_service_state

systemdunits_service-units.unit_chronyd_service_state

systemdunits_service-units.unit_coturn_service_state

systemdunits_service-units.unit_cron_service_state

systemdunits_service-units.unit_dbus_service_state

systemdunits_service-units.unit_dhparams-gen-dovecot2_service_state

systemdunits_service-units.unit_dhparams-init_service_state

systemdunits_service-units.unit_domainname_service_state

systemdunits_service-units.unit_dovecot2_service_state

systemdunits_service-units.unit_emergency_service_state

systemdunits_service-units.unit_fail2ban_service_state

systemdunits_service-units.unit_galene_service_state

systemdunits_service-units.unit_generate-shutdown-ramfs_service_state

systemdunits_service-units.unit_getty@tty1_service_state

systemdunits_service-units.unit_grafana_service_state

systemdunits_service-units.unit_icecast_service_state

systemdunits_service-units.unit_iotBensComDeSocat_service_state

systemdunits_service-units.unit_iperf3_service_state

systemdunits_service-units.unit_jicofo_service_state

systemdunits_service-units.unit_jitsi-excalidraw_service_state

systemdunits_service-units.unit_jitsi-meet-init-secrets_service_state

systemdunits_service-units.unit_jitsi-videobridge2_service_state

systemdunits_service-units.unit_kmod-static-nodes_service_state

systemdunits_service-units.unit_kres-cache-gc_service_state

systemdunits_service-units.unit_kresd@1_service_state

systemdunits_service-units.unit_logrotate-checkconf_service_state

systemdunits_service-units.unit_logrotate_service_state

systemdunits_service-units.unit_mkswap-swapfile_service_state

systemdunits_service-units.unit_modprobe@configfs_service_state

systemdunits_service-units.unit_modprobe@drm_service_state

systemdunits_service-units.unit_modprobe@efi_pstore_service_state

systemdunits_service-units.unit_modprobe@fuse_service_state

systemdunits_service-units.unit_mopidy_service_state

systemdunits_service-units.unit_mount-pstore_service_state

systemdunits_service-units.unit_mysql_service_state

systemdunits_service-units.unit_netdata_service_state

systemdunits_service-units.unit_network-local-commands_service_state

systemdunits_service-units.unit_nftables_service_state

systemdunits_service-units.unit_nginx-config-reload_service_state

systemdunits_service-units.unit_nginx_service_state

systemdunits_service-units.unit_nix-daemon_service_state

systemdunits_service-units.unit_nix-gc_service_state

systemdunits_service-units.unit_nix-optimise_service_state

systemdunits_service-units.unit_nixos-rebuild-switch-to-configuration_service_state

systemdunits_service-units.unit_nixos-upgrade_service_state

systemdunits_service-units.unit_nscd_service_state

systemdunits_service-units.unit_opendkim_service_state

systemdunits_service-units.unit_phpfpm-mypool_service_state

systemdunits_service-units.unit_phpfpm-revive_service_state

systemdunits_service-units.unit_phpfpm-snappymail_service_state

systemdunits_service-units.unit_podman-vosk_service_state

systemdunits_service-units.unit_podman_service_state

systemdunits_service-units.unit_postfix-setup_service_state

systemdunits_service-units.unit_postfix_service_state

systemdunits_service-units.unit_postgresql_service_state

systemdunits_service-units.unit_postgresqlBackup_service_state

systemdunits_service-units.unit_prometheus-node-exporter_service_state

systemdunits_service-units.unit_prometheus_service_state

systemdunits_service-units.unit_prosody_service_state

systemdunits_service-units.unit_qemu-guest-agent_service_state

systemdunits_service-units.unit_redis-rspamd_service_state

systemdunits_service-units.unit_redis-searx_service_state

systemdunits_service-units.unit_reload-systemd-vconsole-setup_service_state

systemdunits_service-units.unit_rescue_service_state

systemdunits_service-units.unit_revive-setup_service_state

systemdunits_service-units.unit_rspamd_service_state

systemdunits_service-units.unit_searx-init_service_state

systemdunits_service-units.unit_snapserver_service_state

systemdunits_service-units.unit_sshd_service_state

systemdunits_service-units.unit_suid-sgid-wrappers_service_state

systemdunits_service-units.unit_systemd-ask-password-console_service_state

systemdunits_service-units.unit_systemd-ask-password-wall_service_state

systemdunits_service-units.unit_systemd-boot-random-seed_service_state

systemdunits_service-units.unit_systemd-fsck-root_service_state

systemdunits_service-units.unit_systemd-fsck@dev-disk-by-uuid-9649-6573_service_state

systemdunits_service-units.unit_systemd-journal-catalog-update_service_state

systemdunits_service-units.unit_systemd-journal-flush_service_state

systemdunits_service-units.unit_systemd-journald_service_state

systemdunits_service-units.unit_systemd-logind_service_state

systemdunits_service-units.unit_systemd-modules-load_service_state

systemdunits_service-units.unit_systemd-networkd-wait-online_service_state

systemdunits_service-units.unit_systemd-networkd_service_state

systemdunits_service-units.unit_systemd-oomd_service_state

systemdunits_service-units.unit_systemd-pstore_service_state

systemdunits_service-units.unit_systemd-random-seed_service_state

systemdunits_service-units.unit_systemd-remount-fs_service_state

systemdunits_service-units.unit_systemd-resolved_service_state

systemdunits_service-units.unit_systemd-rfkill_service_state

systemdunits_service-units.unit_systemd-sysctl_service_state

systemdunits_service-units.unit_systemd-tmpfiles-clean_service_state

systemdunits_service-units.unit_systemd-tmpfiles-resetup_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup-dev-early_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup-dev_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup_service_state

systemdunits_service-units.unit_systemd-udev-trigger_service_state

systemdunits_service-units.unit_systemd-udevd_service_state

systemdunits_service-units.unit_systemd-update-done_service_state

systemdunits_service-units.unit_systemd-update-utmp_service_state

systemdunits_service-units.unit_systemd-user-sessions_service_state

systemdunits_service-units.unit_systemd-vconsole-setup_service_state

systemdunits_service-units.unit_user-runtime-dir@0_service_state

systemdunits_service-units.unit_user@0_service_state

systemdunits_service-units.unit_uwsgi_service_state


--------------------------------------------------------------------------------

 * System Overview
   * cpu
   * load
   * disk
   * ram
   * network
   * processes
   * idlejitter
   * interrupts
   * softirqs
   * softnet
   * entropy
   * files
   * uptime
   * clock synchronization
   * ipc semaphores
   * ipc shared memory
 * CPUs
   * utilization
   * interrupts
   * softirqs
   * softnet
 * Memory
   * overview
   * OOM kills
   * swap
   * zswap
   * page faults
   * writeback
   * kernel
   * slab
   * reclaiming
   * cma
   * hugepages
   * balloon
 * Disks
   * sda
   * /
   * /boot
   * /dev
   * /dev/shm
   * /run
   * /run/wrappers
 * Networking Stack
   * tcp
   * sockets
 * IPv4 Networking
   * packets
   * errors
   * broadcast
   * multicast
   * tcp
   * icmp
   * udp
   * ecn
   * fragments
 * IPv6 Networking
   * packets
   * errors
   * multicast6
   * tcp6
   * icmp6
   * udp6
   * fragments6
   * raw6
 * Network Interfaces
   * ens3
   * podman0
   * veth0
   * wg0
 * Firewall (netfilter)
   * connection tracker
   * netlink
 * systemd asterisk
   * cpu
   * mem
   * disk
   * pids
 * systemd bepasty-server-paste-p3x-de-gunicorn
   * cpu
   * mem
   * disk
   * pids
 * systemd blocky
   * cpu
   * mem
   * disk
   * pids
 * systemd chronyd
   * cpu
   * mem
   * disk
   * pids
 * systemd coturn
   * cpu
   * mem
   * disk
   * pids
 * systemd cron
   * cpu
   * mem
   * disk
   * pids
 * systemd dbus
   * cpu
   * mem
   * disk
   * pids
 * systemd fail2ban
   * cpu
   * mem
   * disk
   * pids
 * systemd galene
   * cpu
   * mem
   * disk
   * pids
 * systemd grafana
   * cpu
   * mem
   * disk
   * pids
 * systemd icecast
   * cpu
   * mem
   * disk
   * pids
 * systemd iotbenscomdesocat
   * cpu
   * mem
   * disk
   * pids
 * systemd iperf3
   * cpu
   * mem
   * disk
   * pids
 * systemd jicofo
   * cpu
   * mem
   * disk
   * pids
 * systemd jitsi-excalidraw
   * cpu
   * mem
   * disk
   * pids
 * systemd jitsi-videobridge2
   * cpu
   * mem
   * disk
   * pids
 * systemd mopidy
   * cpu
   * mem
   * disk
   * pids
 * systemd mysql
   * cpu
   * mem
   * disk
   * pids
 * systemd netdata
   * cpu
   * mem
   * disk
   * pids
 * systemd nginx
   * cpu
   * mem
   * disk
   * pids
 * systemd nscd
   * cpu
   * mem
   * disk
   * pids
 * systemd opendkim
   * cpu
   * mem
   * disk
   * pids
 * systemd podman-vosk
   * cpu
   * mem
   * disk
   * pids
 * systemd postgresql
   * cpu
   * mem
   * disk
   * pids
 * systemd prometheus
   * cpu
   * mem
   * disk
   * pids
 * systemd prometheus-node-exporter
   * cpu
   * mem
   * disk
   * pids
 * systemd prosody
   * cpu
   * mem
   * disk
   * pids
 * systemd qemu-guest-agent
   * cpu
   * mem
   * disk
   * pids
 * systemd redis-rspamd
   * cpu
   * mem
   * disk
   * pids
 * systemd redis-searx
   * cpu
   * mem
   * disk
   * pids
 * systemd rspamd
   * cpu
   * mem
   * disk
   * pids
 * systemd snapserver
   * cpu
   * mem
   * disk
   * pids
 * systemd sshd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-journald
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-logind
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-networkd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-oomd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-resolved
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-udevd
   * cpu
   * mem
   * disk
   * pids
 * systemd uwsgi
   * cpu
   * mem
   * disk
   * pids
 * app
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * user
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * usergroup
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * Anomaly Detection
   * dimensions
   * anomaly rate
   * anomaly detection
 *  7e1882dc96fa
   * cpu
   * mem
   * disk
   * pids
 * Logind
   * sessions
   * users
 * MySQL local
   * bandwidth
   * queries
   * handlers
   * open cache
   * locks
   * issues
   * temporaries
   * connections
   * threads
   * innodb
   * myisam
   * files
   * open tables
   * process list
   * qcache
 * prometheus fawkes local
   * asterisk bridges
   * asterisk calls
   * asterisk channels
   * asterisk core
   * asterisk endpoints
   * asterisk pjsip
 * systemd units service-units
   * service units
 * Add more charts
 * Add more alarms
 * Every second, Netdata collects 4,582 metrics on galahad, presents them in
   2,023 charts and monitors them with 97 alarms.
    
   netdata
   v1.44.3
 * Do you like Netdata?
   Give us a star!
   
   And share the word!



Netdata

Copyright 2020, Netdata, Inc.


Terms and conditions Privacy Policy
Released under GPL v3 or later. Netdata uses third party tools.



XSS PROTECTION

This dashboard is about to render data from server:



To protect your privacy, the dashboard will check all transferred data for
cross-site scripting (XSS).
This is CPU intensive, so your browser might be a bit slower.

If you trust the remote server, you can disable XSS protection.
In this case, any remote dashboard decoration code (javascript) will also run.

If you don't trust the remote server, you should keep the protection on.
The dashboard will run slower and remote dashboard decoration code will not run,
but better be safe than sorry...
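Conceptually, this protection amounts to neutralizing markup in transferred
data before it can reach the DOM. A minimal illustration of the neutralizing
step, not Netdata's actual implementation:

```python
import html

def neutralize(value: str) -> str:
    """Escape the characters that would let transferred data
    inject markup or script into the page."""
    return html.escape(value, quote=True)

print(neutralize("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```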

Keep protecting me I don't need this, the server is mine
×

PRINT THIS NETDATA DASHBOARD

netdata dashboards cannot be captured, since we are lazy loading and hiding all
but the visible charts.
To capture the whole page with all the charts rendered, a new browser window
will pop-up that will render all the charts at once. The new browser window will
maintain the current pan and zoom settings of the charts. So, align the charts
before proceeding.

This process will put some CPU and memory pressure on your browser.
For the netdata server, we will sequentially download all the charts, to avoid
congesting network and server resources.
Please, do not print netdata dashboards on paper!

Print Close
×

IMPORT A NETDATA SNAPSHOT

netdata can export and import dashboard snapshots. Any netdata can import the
snapshot of any other netdata. The snapshots are not uploaded to a server. They
are handled entirely by your web browser, on your computer.

Click here to select the netdata snapshot file to import

Browse for a snapshot file (or drag it and drop it here), then click Import to
render it.



FilenameHostnameOrigin URLCharts InfoSnapshot InfoTime RangeComments



Snapshot files contain both data and javascript code. Make sure you trust the
files you import! Import Close
×

EXPORT A SNAPSHOT

Please wait while we collect all the dashboard data...

Select the desired resolution of the snapshot. This is the seconds of data per
point.
 
 

 

Filename
Compression
 * Select Compression
 * 
 * uncompressed
 * 
 * pako.deflate (gzip, binary)
 * pako.deflate.base64 (gzip, ascii)
 * 
 * lzstring.uri (LZ, ascii)
 * lzstring.utf16 (LZ, utf16)
 * lzstring.base64 (LZ, ascii)
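The pako.deflate.base64 option corresponds to zlib compression followed by
base64 text encoding (pako is a JavaScript port of zlib, so the streams are
interchangeable). The same transform sketched in Python, assuming pako's
default zlib framing:

```python
import base64
import zlib

def compress_snapshot(data: bytes) -> str:
    """zlib-deflate then base64-encode, analogous to pako.deflate.base64."""
    return base64.b64encode(zlib.compress(data)).decode("ascii")

def decompress_snapshot(encoded: str) -> bytes:
    """Invert compress_snapshot: base64-decode then zlib-inflate."""
    return zlib.decompress(base64.b64decode(encoded))

payload = b'{"charts": {}}'          # stand-in for real snapshot JSON
encoded = compress_snapshot(payload)
assert decompress_snapshot(encoded) == payload
```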

Comments
 
Select the snapshot resolution. This controls the size of the snapshot file.

The generated snapshot will include all charts of this dashboard, for the
visible timeframe, so align, pan and zoom the charts as needed. The scroll
position of the dashboard will also be saved. The snapshot will be downloaded as
a file, to your computer, that can be imported back into any netdata dashboard
(no need to import it back on this server).

Snapshot files include all the information of the dashboard, including the URL
of the origin server, its netdata unique ID, etc. So, if you share the snapshot
file with third parties, they will be able to access the origin server, if this
server is exposed on the internet.
Snapshots are handled entirely by the web browser. The netdata servers are not
aware of them.

Export Cancel
×

NETDATA ALARMS

 * Active
 * All
 * Log

loading...
Close
×

NETDATA DASHBOARD OPTIONS

These are browser settings. Each viewer has its own. They do not affect the
operation of your netdata server.
Settings take effect immediately and are saved permanently to browser local
storage (except the refresh on focus / always option).
To reset all options (including charts sizes) to their defaults, click here.

 * Performance
 * Synchronization
 * Visual
 * Locale

On FocusAlways
When to refresh the charts?
When set to On Focus, the charts will stop being updated if the page / tab does
not have the focus of the user. When set to Always, the charts will always be
refreshed. Set it to On Focus to lower the CPU requirements of the browser (and
extend the battery of laptops and tablets) when this page does not have your
focus. Set it to Always to work on another window (i.e. change the settings of
something) and have the charts auto-refresh in this window.
Non ZeroAll
Which dimensions to show?
When set to Non Zero, dimensions that have all their values (within the current
view) set to zero will not be transferred from the netdata server (except if all
dimensions of the chart are zero, in which case this setting does nothing - all
dimensions are transferred and shown). When set to All, all dimensions will
always be shown. Set it to Non Zero to lower the data transferred between
netdata and your browser, lower the CPU requirements of your browser (fewer
lines to draw) and increase the focus on the legends (fewer entries at the
legends).
DestroyHide
How to handle hidden charts?
When set to Destroy, charts that are not in the current viewport of the browser
(are above, or below the visible area of the page), will be destroyed and
re-created if and when they become visible again. When set to Hide, the
not-visible charts will be just hidden, to simplify the DOM and speed up your
browser. Set it to Destroy, to lower the memory requirements of your browser.
Set it to Hide for faster restoration of charts on page scrolling.
AsyncSync
Page scroll handling?
When set to Sync, charts will be examined for their visibility immediately after
scrolling. On slow computers this may impact the smoothness of page scrolling.
To update the page when scrolling ends, set it to Async. Set it to Sync for
immediate chart updates when scrolling. Set it to Async for smoother page
scrolling on slower computers.

ParallelSequential
Which chart refresh policy to use?
When set to Parallel, visible charts are refreshed in parallel (all queries are
sent to the netdata server at once) and are rendered asynchronously. When set to
Sequential, charts are refreshed one after another. Set it to Parallel if your
browser can cope with it (most modern browsers do); set it to Sequential if you
work on an older/slower computer.
ResyncBest Effort
Shall we re-sync chart refreshes?
When set to Resync, the dashboard will attempt to re-synchronize all the charts
so that they are refreshed concurrently. When set to Best Effort, each chart may
be refreshed with a little time difference to the others. Normally, the
dashboard starts refreshing them in parallel, but depending on the speed of your
computer and the network latencies, charts start having a slight time
difference. Setting this to Resync will attempt to re-synchronize the charts on
every update. Setting it to Best Effort may lower the pressure on your browser
and the network.
SyncDon't Sync
Sync hover selection on all charts?
When enabled, a selection on one chart will automatically select the same time
on all other visible charts, and the legends of all visible charts will be
updated to show the selected values. When disabled, only the chart under the
user's cursor will be selected. Enable it for better insight into the data;
disable it if you are on a computer too slow to handle it.

Right / Below
Where do you want to see the legend?
Netdata can place the legend in two positions: Below charts (the default) or to
the Right of charts.
Switching this will reload the dashboard.
Dark / White
Which theme to use?
Netdata comes with two themes: Dark (the default) and White.
Switching this will reload the dashboard.
Help Me / No Help
Do you need help?
Netdata can show help balloons in several areas of the dashboard. If these
balloons bother you, disable them using this switch.
Switching this will reload the dashboard.
Pad / Don't Pad
Enable data padding when panning and zooming?
When set to Pad, the charts will be padded with extra data both before and after
the visible area, giving the impression that the whole database is loaded. This
padding happens only after the first pan or zoom operation on the chart
(initially all charts have only the visible data). When set to Don't Pad, only
the visible data will be transferred from the netdata server, even after the
first pan or zoom operation.
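The effect of padding on a data query can be sketched as follows; the pad factor of one full visible span on each side is an assumption for illustration, not netdata's actual value:

```javascript
// Given a visible time window [after, before] (in seconds), compute the
// padded window actually requested from the server.
function paddedRange(after, before, padFactor = 1) {
  const span = before - after;
  return {
    after: after - span * padFactor,   // extra data before the visible area
    before: before + span * padFactor, // extra data after the visible area
  };
}
```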
Smooth / Rough
Enable Bézier lines on charts?
When set to Smooth, charting libraries that support it will plot smooth curves
instead of simple straight lines to connect the points.
Keep in mind that dygraphs, the main charting library in netdata dashboards, can
only smooth line charts; it cannot smooth area or stacked charts. Setting this
to Rough can lower the CPU resources consumed by your browser.

These settings are applied gradually, as charts are updated. To force them,
refresh the dashboard now.
Scale Units / Fixed Units
Enable auto-scaling of select units?
When set to Scale Units, the values shown will be scaled dynamically (e.g. 1000
kilobits will be shown as 1 megabit). Netdata can auto-scale these original
units: kilobits/s, kilobytes/s, KB/s, KB, MB, and GB. When set to Fixed Units,
all values will be rendered using the original units maintained by the netdata
server.
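A minimal sketch of this kind of auto-scaling, assuming a simplified ladder of bit-rate units (the real dashboard handles more unit families than shown here):

```javascript
// Divide by 1000 and step up the unit ladder until the value is < 1000.
function autoScale(value, unit) {
  const ladder = ['kilobits/s', 'megabits/s', 'gigabits/s'];
  let i = ladder.indexOf(unit);
  if (i < 0) return { value, unit }; // unknown unit: leave as-is (Fixed Units)
  while (value >= 1000 && i < ladder.length - 1) {
    value /= 1000;
    i += 1;
  }
  return { value, unit: ladder[i] };
}
```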
Celsius / Fahrenheit
Which units to use for temperatures?
Set the temperature units of the dashboard.
Time / Seconds
Convert seconds to time?
When set to Time, charts that present seconds will show DDd:HH:MM:SS. When set
to Seconds, the raw number of seconds will be presented.
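The DDd:HH:MM:SS conversion is plain integer arithmetic — a sketch, where the exact zero-padding used by the dashboard is an assumption:

```javascript
// Break a duration in seconds into days, hours, minutes and seconds,
// formatted as DDd:HH:MM:SS.
function formatSeconds(totalSeconds) {
  const pad = (n) => String(n).padStart(2, '0');
  const days = Math.floor(totalSeconds / 86400);
  const hours = Math.floor((totalSeconds % 86400) / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  return `${pad(days)}d:${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}
```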


UPDATE CHECK

Your netdata version: v1.44.3




New version of netdata available!

Latest version: v1.45.5

Click here for the change log and
click here for directions on updating your netdata installation.

We suggest reviewing the change log for new features you may be interested in,
or important bug fixes you may need.
Keeping your netdata updated is generally a good idea.

--------------------------------------------------------------------------------

For progress reports and key netdata updates: Join the Netdata Community
You can also follow netdata on twitter, follow netdata on facebook, or watch
netdata on github.

SIGN IN

Signing in to netdata.cloud will synchronize the list of your netdata-monitored
nodes known at this registry. This may include server hostnames, URLs, and
identification GUIDs.

After you upgrade all your netdata servers, your private registry will not be
needed any more.

Are you sure you want to proceed?


DELETE ?

You are about to delete, from your personal list of netdata servers, the
following server:




Are you sure you want to do this?


Keep in mind, this server will be added back if and when you visit it again.



SWITCH NETDATA REGISTRY IDENTITY

You can copy and paste the following ID to all your browsers (e.g. work and
home).
All the browsers with the same ID will identify you, so please don't share this
with others.

Either copy this ID and paste it to another browser, or paste here the ID you
have taken from another browser.
Keep in mind that:
 * when you switch ID, your previous ID will be lost forever - this is
   irreversible.
 * both IDs (your old and the new one) must list this netdata in their personal
   lists.
 * both IDs have to be known by the registry.
 * to get a new ID, just clear your browser cookies.





Checking known URLs for this server...



Checks may fail if you are viewing an HTTPS page and the server to be checked is
HTTP only.

