monitoring.roro.digital Open in urlscan Pro
151.231.108.3  Public Scan

URL: https://monitoring.roro.digital/
Submission Tags: phishingrod
Submission: On September 20 via api from DE — Scanned from GB

Form analysis 5 forms found in the DOM

<form id="optionsForm1" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="stop_updates_when_focus_is_lost" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="On Focus" data-off="Always" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">On Focus</label><label class="btn btn-danger active toggle-off">Always</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>When to refresh the charts?</strong><br><small>When set to <b>On Focus</b>, the charts will stop being updated if the page / tab does not have the user's focus. When set to <b>Always</b>, the charts will
              always be refreshed. Set it to <b>On Focus</b> to lower the CPU requirements of the browser (and extend the battery life of laptops and tablets) when this page does not have your focus. Set it to <b>Always</b> to work in another window (e.g.
              change the settings of something) and have the charts auto-refresh in this window.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="eliminate_zero_dimensions" type="checkbox" checked="checked" data-toggle="toggle" data-on="Non Zero" data-off="All"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Non Zero</label><label class="btn btn-default active toggle-off">All</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>Which dimensions to show?</strong><br><small>When set to <b>Non Zero</b>, dimensions that have all their values (within the current view) set to zero will not be transferred from the netdata server (unless
              all dimensions of the chart are zero, in which case this setting does nothing and all dimensions are transferred and shown). When set to <b>All</b>, all dimensions will always be shown. Set it to <b>Non Zero</b> to lower the data
              transferred between netdata and your browser, lower the CPU requirements of your browser (fewer lines to draw), and keep the legends focused (fewer entries in the legends).</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="destroy_on_hide" type="checkbox" data-toggle="toggle" data-on="Destroy" data-off="Hide" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Destroy</label><label class="btn btn-default active toggle-off">Hide</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>How to handle hidden charts?</strong><br><small>When set to <b>Destroy</b>, charts that are not in the current viewport of the browser (above or below the visible area of the page) will be destroyed and
              re-created if and when they become visible again. When set to <b>Hide</b>, the non-visible charts will simply be hidden, to simplify the DOM and speed up your browser. Set it to <b>Destroy</b> to lower the memory requirements of your
              browser. Set it to <b>Hide</b> for faster restoration of charts on page scrolling.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="async_on_scroll" type="checkbox" data-toggle="toggle" data-on="Async" data-off="Sync" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Async</label><label class="btn btn-default active toggle-off">Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Page scroll handling?</strong><br><small>When set to <b>Sync</b>, charts will be examined for their visibility immediately after scrolling. On slow computers this may impact the smoothness of page scrolling.
              To update the page when scrolling ends, set it to <b>Async</b>. Set it to <b>Sync</b> for immediate chart updates when scrolling. Set it to <b>Async</b> for smoother page scrolling on slower computers.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm2" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="parallel_refresher" type="checkbox" checked="checked" data-toggle="toggle" data-on="Parallel" data-off="Sequential"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Parallel</label><label class="btn btn-default active toggle-off">Sequential</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>Which chart refresh policy to use?</strong><br><small>When set to <b>parallel</b>, visible charts are refreshed in parallel (all queries are sent to the netdata server at once) and are rendered
              asynchronously. When set to <b>sequential</b>, charts are refreshed one after another. Set it to parallel if your browser can cope with it (most modern browsers can); set it to sequential if you work on an older or slower computer.</small>
          </td>
        </tr>
        <tr class="option-row" id="concurrent_refreshes_row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="concurrent_refreshes" type="checkbox" checked="checked" data-toggle="toggle" data-on="Resync" data-off="Best Effort"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Resync</label><label class="btn btn-default active toggle-off">Best Effort</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Shall we re-sync chart refreshes?</strong><br><small>When set to <b>Resync</b>, the dashboard will attempt to re-synchronize all the charts so that they are refreshed concurrently. When set to
              <b>Best Effort</b>, each chart may be refreshed with a little time difference to the others. Normally, the dashboard starts refreshing them in parallel, but depending on the speed of your computer and the network latencies, charts start
              having a slight time difference. Setting this to <b>Resync</b> will attempt to re-synchronize the charts on every update. Setting it to <b>Best Effort</b> may lower the pressure on your browser and the network.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="sync_selection" type="checkbox" checked="checked" data-toggle="toggle" data-on="Sync" data-off="Don't Sync" data-onstyle="success"
                data-offstyle="danger" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Sync</label><label class="btn btn-danger active toggle-off">Don't Sync</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>Sync hover selection on all charts?</strong><br><small>When enabled, a selection on one chart will automatically select the same time on all other visible charts, and the legends of all visible charts will be
              updated to show the selected values. When disabled, only the chart receiving the user's attention will be selected. Enable it to get better insights into the data. Disable it if you are on a very slow computer that cannot keep
              up.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm3" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="legend_right" type="checkbox" checked="checked" data-toggle="toggle" data-on="Right" data-off="Below" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Right</label><label class="btn btn-default active toggle-off">Below</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Where do you want to see the legend?</strong><br><small>Netdata can place the legend in two positions: <b>Below</b> charts (the default) or to the <b>Right</b> of
              charts.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="netdata_theme_control" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
                data-on="Dark" data-off="White" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Dark</label><label class="btn btn-danger active toggle-off">White</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which theme to use?</strong><br><small>Netdata comes with two themes: <b>Dark</b> (the default) and <b>White</b>.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="show_help" type="checkbox" checked="checked" data-toggle="toggle" data-on="Help Me" data-off="No Help" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Help Me</label><label class="btn btn-default active toggle-off">No Help</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>Do you need help?</strong><br><small>Netdata can show help balloons in several areas of the dashboard. If these balloons bother you, disable them using this
              switch.<br><b>Switching this will reload the dashboard</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="pan_and_zoom_data_padding" type="checkbox" checked="checked" data-toggle="toggle" data-on="Pad" data-off="Don't Pad"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Pad</label><label class="btn btn-default active toggle-off">Don't Pad</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable data padding when panning and zooming?</strong><br><small>When set to <b>Pad</b> the charts will be padded with more data, both before and after the visible area, thus giving the impression the whole
              database is loaded. This padding will happen only after the first pan or zoom operation on the chart (initially all charts have only the visible data). When set to <b>Don't Pad</b> only the visible data will be transferred from the
              netdata server, even after the first pan and zoom operation.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="smooth_plot" type="checkbox" checked="checked" data-toggle="toggle" data-on="Smooth" data-off="Rough" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Smooth</label><label class="btn btn-default active toggle-off">Rough</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
<td class="option-info"><strong>Enable Bézier lines on charts?</strong><br><small>When set to <b>Smooth</b>, the chart libraries that support it will plot smooth curves instead of straight line segments to connect the points.<br>Keep in
              mind <a href="http://dygraphs.com" target="_blank">dygraphs</a>, the main charting library in netdata dashboards, can only smooth line charts; it cannot smooth area or stacked charts. Setting this to <b>Rough</b> can lower the
              CPU resources consumed by your browser.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>

<form id="optionsForm4" class="form-horizontal">
  <div class="form-group">
    <table>
      <tbody>
        <tr class="option-row">
          <td colspan="2" align="center"><small><b>These settings are applied gradually, as charts are updated. To force them, refresh the dashboard now</b>.</small></td>
        </tr>
        <tr class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 38px;"><input id="units_conversion" type="checkbox" checked="checked" data-toggle="toggle" data-on="Scale Units" data-off="Fixed Units"
                data-onstyle="success" data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Scale Units</label><label class="btn btn-default active toggle-off">Fixed Units</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Enable auto-scaling of select units?</strong><br><small>When set to <b>Scale Units</b> the values shown will dynamically be scaled (e.g. 1000 kilobits will be shown as 1 megabit). Netdata can auto-scale these
              original units: <code>kilobits/s</code>, <code>kilobytes/s</code>, <code>KB/s</code>, <code>KB</code>, <code>MB</code>, and <code>GB</code>. When set to <b>Fixed Units</b> all the values will be rendered using the original units
              maintained by the netdata server.</small></td>
        </tr>
        <tr id="settingsLocaleTempRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="units_temp" type="checkbox" checked="checked" data-toggle="toggle" data-on="Celsius" data-off="Fahrenheit" data-width="110px">
              <div class="toggle-group"><label class="btn btn-primary toggle-on">Celsius</label><label class="btn btn-default active toggle-off">Fahrenheit</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Which units to use for temperatures?</strong><br><small>Set the temperature units of the dashboard.</small></td>
        </tr>
        <tr id="settingsLocaleTimeRow" class="option-row">
          <td class="option-control">
            <div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 19px;"><input id="seconds_as_time" type="checkbox" checked="checked" data-toggle="toggle" data-on="Time" data-off="Seconds" data-onstyle="success"
                data-width="110px">
              <div class="toggle-group"><label class="btn btn-success toggle-on">Time</label><label class="btn btn-default active toggle-off">Seconds</label><span class="toggle-handle btn btn-default"></span></div>
            </div>
          </td>
          <td class="option-info"><strong>Convert seconds to time?</strong><br><small>When set to <b>Time</b>, charts that present <code>seconds</code> will show <code>DDd:HH:MM:SS</code>. When set to <b>Seconds</b>, the raw number of seconds will be
              presented.</small></td>
        </tr>
      </tbody>
    </table>
  </div>
</form>


<form action="#"><input class="form-control" id="switchRegistryPersonGUID" placeholder="your personal ID" maxlength="36" autocomplete="off" style="text-align:center;font-size:1.4em"></form>

Text Content

netdata

Real-time performance monitoring, done right!
VISITED NODES

maria
https://monitoring.roro.digital/
UTC +1
Playing

20/09/2024 • 05:58 - 06:05 • last 7min


NETDATA

REAL-TIME PERFORMANCE MONITORING, IN THE GREATEST POSSIBLE DETAIL

Drag charts to pan. Shift + wheel on them, to zoom in and out. Double-click on
them, to reset. Hover on them too!
system.cpu



SYSTEM OVERVIEW

Overview of the key system metrics.
[Overview gauges: Disk Read 0.0 KiB/s, Disk Write 439.0 KiB/s, CPU 16.8%,
Net Inbound 24.3 kilobits/s, Net Outbound 0.17 megabits/s, Used RAM 23.2%]


CPU


Total CPU utilization (all cores). 100% here means there is no CPU idle time at
all. You can get per-core usage in the CPUs section and per-application usage in
the Applications Monitoring section.
Keep an eye on iowait (0.0000%). If it is constantly high, your disks are a
bottleneck and they slow your system down.
Another metric worth monitoring is softirq (0.50%). A constantly high percentage
of softirq may indicate network driver issues. The individual metrics are
described in the kernel documentation.
[Chart: Total CPU utilization (system.cpu), stacked percentage over time.
Values at Fri, 20 Sept 2024 06:06:04: system 13.1, nice 1.5, irq 1.0,
softirq 0.5, user 0.5, iowait 0.3, steal 0.0, guest 0.0, guest_nice 0.0]


CPU Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on CPU. The ratios are tracked as recent trends
over 10-, 60-, and 300-second windows.
[Chart: CPU some pressure (system.cpu_some_pressure), percentage trends over
10-, 60- and 300-second windows. Values at Fri, 20 Sept 2024 06:06:04:
some 10: 0.51, some 60: 0.19, some 300: 0.09]


The amount of time some processes have been waiting for CPU time.
[Chart: CPU some pressure stall time (system.cpu_some_pressure_stall_time),
in ms. Value at Fri, 20 Sept 2024 06:06:04: time 9.3]




LOAD


Current system load, i.e. the number of processes using the CPU or waiting for
system resources (usually CPU and disk). The three metrics are 1-, 5- and
15-minute averages, recalculated by the system every 5 seconds. For more
information, see the Wikipedia article on load averages.
system.load
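On Linux these three averages are exposed as the first three fields of /proc/loadavg. A minimal sketch of reading them (parse_loadavg is a hypothetical helper, not part of netdata):

```python
def parse_loadavg(text):
    """Parse the first three fields of /proc/loadavg into the
    1-, 5- and 15-minute load averages."""
    one, five, fifteen = text.split()[:3]
    return float(one), float(five), float(fifteen)

# On a live Linux host:
# with open("/proc/loadavg") as f:
#     print(parse_loadavg(f.read()))
```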



DISK


Total Disk I/O for all physical disks. You can get detailed information about
each disk in the Disks section, and per-application disk usage in the
Applications Monitoring section. Physical disks are all the disks listed in
/sys/block that do not exist in /sys/devices/virtual/block.
system.io
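The /sys/block vs. /sys/devices/virtual/block filter described above can be sketched in a few lines. This is an illustrative helper (physical_disks is a hypothetical name, not netdata's collector code), written over name sets so it works off-host too:

```python
def physical_disks(sys_block_names, virtual_block_names):
    """Disks present in /sys/block but absent from
    /sys/devices/virtual/block, i.e. the 'physical' filter above."""
    return sorted(set(sys_block_names) - set(virtual_block_names))

# On a live Linux host:
# import os
# print(physical_disks(os.listdir("/sys/block"),
#                      os.listdir("/sys/devices/virtual/block")))
```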

Memory paged from/to disk. This is usually the total disk I/O of the system.
system.pgpgio

I/O Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on I/O. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_some_pressure

The amount of time some processes have been waiting due to I/O congestion.
system.io_some_pressure_stall_time

I/O Pressure Stall Information. Full line indicates the share of time in which
all non-idle tasks are stalled on I/O resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.io_full_pressure

The amount of time all non-idle processes have been stalled due to I/O
congestion.
system.io_full_pressure_stall_time
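The kernel exposes these ratios in /proc/pressure/cpu, /proc/pressure/io and /proc/pressure/memory: one "some" and (except for CPU) one "full" line, each carrying avg10/avg60/avg300 percentages plus a cumulative total in microseconds. A minimal parser sketch (parse_psi is a hypothetical name):

```python
def parse_psi(text):
    """Parse a /proc/pressure/* file into a nested dict, e.g.
    {'some': {'avg10': 0.51, ..., 'total': 9300000.0}, 'full': {...}}."""
    out = {}
    for line in text.splitlines():
        kind, *fields = line.split()
        out[kind] = {k: float(v) for k, v in (f.split("=") for f in fields)}
    return out

# On a live Linux host:
# with open("/proc/pressure/io") as f:
#     print(parse_psi(f.read()))
```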



RAM


System Random Access Memory (i.e. physical memory) usage.
system.ram

Memory Pressure Stall Information. Some indicates the share of time in which at
least some tasks are stalled on memory. In this state the CPU is still doing
productive work. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_some_pressure

The amount of time some processes have been waiting due to memory congestion.
system.memory_some_pressure_stall_time

Memory Pressure Stall Information. Full indicates the share of time in which all
non-idle tasks are stalled on memory resource simultaneously. In this state
actual CPU cycles are going to waste, and a workload that spends extended time
in this state is considered to be thrashing. This has severe impact on
performance. The ratios are tracked as recent trends over 10-, 60-, and
300-second windows.
system.memory_full_pressure

The amount of time all non-idle processes have been stalled due to memory
congestion.
system.memory_full_pressure_stall_time



NETWORK


Total bandwidth of all physical network interfaces. This does not include lo,
VPNs, network bridges, IFB devices, bond interfaces, etc. Only the bandwidth of
physical network interfaces is aggregated. Physical interfaces are all the
network interfaces listed in /proc/net/dev that do not exist in
/sys/devices/virtual/net.
system.net
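The same listed-but-not-virtual filter applies here, except the names come from /proc/net/dev (the first two lines of that file are headers). A hypothetical sketch, taking the virtual names as a set so it is testable off-host:

```python
def physical_interfaces(proc_net_dev_text, virtual_names):
    """Interface names from /proc/net/dev, minus lo and any name that
    appears under /sys/devices/virtual/net (passed as virtual_names)."""
    names = [line.split(":", 1)[0].strip()
             for line in proc_net_dev_text.splitlines()[2:] if ":" in line]
    return [n for n in names if n != "lo" and n not in virtual_names]

# On a live Linux host:
# import os
# text = open("/proc/net/dev").read()
# print(physical_interfaces(text, set(os.listdir("/sys/devices/virtual/net"))))
```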

Total IP traffic in the system.
system.ip

Total IPv6 Traffic.
system.ipv6



PROCESSES



System processes.

Running - running or ready to run (runnable). Blocked - currently blocked,
waiting for I/O to complete.

system.processes


The number of processes in different states.

Running - Process using the CPU at a particular moment. Sleeping
(uninterruptible) - Process will wake when a waited-upon resource becomes
available or after a time-out occurs during that wait. Mostly used by device
drivers waiting for disk or network I/O. Sleeping (interruptible) - Process is
waiting either for a particular time slot or for a particular event to occur.
Zombie - Process that has completed its execution, released the system
resources, but its entry is not removed from the process table. Usually occurs
in child processes when the parent process still needs to read its child’s exit
status. A process that stays a zombie for a long time is generally an error and
causes system PID space leak. Stopped - Process is suspended from proceeding
further due to STOP or TSTP signals. In this state, a process will not do
anything (not even terminate) until it receives a CONT signal.

system.processes_state
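On Linux these states correspond to the single-letter codes in the third field of /proc/&lt;pid&gt;/stat: R, S, D, Z and T. A hypothetical tallying sketch (count_states is not netdata's collector code):

```python
# State letters from /proc/<pid>/stat, matching the states described above.
STATE_NAMES = {
    "R": "running",
    "S": "sleeping (interruptible)",
    "D": "sleeping (uninterruptible)",
    "Z": "zombie",
    "T": "stopped",
}

def count_states(state_letters):
    """Tally an iterable of single-letter process state codes."""
    counts = {name: 0 for name in STATE_NAMES.values()}
    for letter in state_letters:
        if letter in STATE_NAMES:
            counts[STATE_NAMES[letter]] += 1
    return counts
```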

The number of new processes created.
system.forks

The total number of processes in the system.
system.active_processes

Context switching is the switching of the CPU from one process, task, or thread
to another. If many processes or threads are ready to run but few CPU cores are
available to handle them, the system performs more context switches to balance
the CPU resources among them. Context switching itself is computationally
expensive: the more context switches, the slower the system gets.
system.ctxt
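The kernel keeps a cumulative counter on the "ctxt" line of /proc/stat; the per-second rate is the difference between two reads. A hypothetical extraction sketch:

```python
def context_switches(proc_stat_text):
    """Return the cumulative context-switch count from the 'ctxt' line
    of /proc/stat; the rate is the delta between two reads."""
    for line in proc_stat_text.splitlines():
        if line.startswith("ctxt"):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

# On a live Linux host:
# with open("/proc/stat") as f:
#     print(context_switches(f.read()))
```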



IDLEJITTER


Idle jitter is calculated by netdata. A thread is spawned that requests to sleep
for a few microseconds. When the system wakes it up, it measures how many
microseconds have actually passed. The difference between the requested and the
actual duration of the sleep is the idle jitter. This number is useful in
real-time environments, where CPU jitter can affect the quality of service (such
as VoIP media gateways).
system.idlejitter
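The measurement described above can be reproduced in a few lines. This sketch uses a millisecond-scale sleep rather than netdata's actual interval, and idle_jitter_us is a hypothetical name:

```python
import time

def idle_jitter_us(requested_us=20_000, samples=5):
    """Request a sleep of `requested_us` microseconds `samples` times and
    return the average overshoot (actual minus requested) in microseconds,
    i.e. the idle jitter as described above."""
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(requested_us / 1_000_000)
        elapsed_us = (time.perf_counter() - start) * 1_000_000
        overshoots.append(elapsed_us - requested_us)
    return sum(overshoots) / len(overshoots)
```

Since the OS suspends the thread for at least the requested duration, the overshoot is non-negative; a loaded or high-latency scheduler inflates it.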



INTERRUPTS

Interrupts are signals sent to the CPU by external devices (normally I/O
devices) or programs (running processes). They tell the CPU to stop its current
activities and execute the appropriate part of the operating system. Interrupt
types are hardware (generated by hardware devices to signal that they need some
attention from the OS), software (generated by programs when they want to
request a system call to be performed by the operating system), and traps
(generated by the CPU itself to indicate that some error or condition occurred
for which assistance from the operating system is needed).

Total number of CPU interrupts. Check system.interrupts, which gives more detail
about each interrupt, and the CPUs section, where interrupts are analyzed per
CPU core.
system.intr

CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU
core. The last column in /proc/interrupts provides an interrupt description or
the device name that registered the handler for that interrupt.
system.interrupts

system.irq_full_pressure

system.irq_full_pressure_stall_time



SOFTIRQS

Software interrupts (or "softirqs") are one of the oldest deferred-execution
mechanisms in the kernel. Several tasks among those executed by the kernel are
not critical: they can be deferred for a long period of time, if necessary. The
deferrable tasks can execute with all interrupts enabled (softirqs are patterned
after hardware interrupts). Taking them out of the interrupt handler helps keep
kernel response time small.


Total number of software interrupts in the system. At the CPUs section, softirqs
are analyzed per CPU core.

HI - high priority tasklets. TIMER - tasklets related to timer interrupts.
NET_TX, NET_RX - used for network transmit and receive processing. BLOCK -
handles block I/O completion events. IRQ_POLL - used by the IO subsystem to
increase performance (a NAPI like approach for block devices). TASKLET - handles
regular tasklets. SCHED - used by the scheduler to perform load-balancing and
other scheduling tasks. HRTIMER - used for high-resolution timers. RCU -
performs read-copy-update (RCU) processing.

system.softirqs



SOFTNET

Statistics for CPU SoftIRQs related to network receive work. A breakdown per CPU
core can be found at CPU / softnet statistics. More information about
identifying and troubleshooting network-driver-related issues can be found in
the Red Hat Enterprise Linux Network Performance Tuning Guide.

Processed - packets processed. Dropped - packets dropped because the network
device backlog was full. Squeezed - number of times the network device budget
was consumed or the time limit was reached, but more work was available.
ReceivedRPS - number of times this CPU has been woken up to process packets via
an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit
has been reached (flow limiting is an optional Receive Packet Steering feature).


system.softnet_stat



ENTROPY


Entropy is a pool of random numbers (/dev/random) that is mainly used in
cryptography. If the entropy pool runs empty, processes requiring random numbers
may run a lot slower (depending on the interface each program uses) while
waiting for the pool to be replenished. Ideally, a system with high entropy
demands should have a hardware device for that purpose (a TPM is one such
device). There are also several software-only options you may install, such as
haveged, although these are generally useful only on servers.
system.entropy
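The kernel's estimate of the pool is a single integer in /proc/sys/kernel/random/entropy_avail. A hypothetical reader sketch that degrades gracefully on non-Linux hosts:

```python
def read_entropy(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's available-entropy estimate in bits, or None
    when the file is missing or unreadable (e.g. a non-Linux host)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None
```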



FILES


system.file_nr_used

system.file_nr_utilization



UPTIME


The amount of time the system has been running, including time spent in suspend.
system.uptime
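On Linux the first field of /proc/uptime is the uptime in seconds. The sketch below also shows the DDd:HH:MM:SS rendering that the dashboard's "Convert seconds to time?" option describes (format_uptime is a hypothetical helper):

```python
def format_uptime(seconds):
    """Render a seconds counter as DDd:HH:MM:SS."""
    seconds = int(seconds)
    days, rem = divmod(seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{days:02d}d:{hours:02d}:{minutes:02d}:{secs:02d}"

# On a live Linux host:
# with open("/proc/uptime") as f:
#     print(format_uptime(float(f.read().split()[0])))
```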



CLOCK SYNCHRONIZATION

NTP lets you automatically sync your system time with a remote server. This
keeps your machine’s time accurate by syncing with servers that are known to
have accurate times.


The system clock synchronization state as provided by the ntp_adjtime() system
call. An unsynchronized clock may be the result of synchronization issues with
the NTP daemon or a hardware clock fault. It can take several minutes (usually
up to 17) before the NTP daemon selects a server to synchronize with.

State map: 0 - not synchronized, 1 - synchronized.

system.clock_sync_state


The kernel code can operate in various modes and with various features enabled
or disabled, as selected by the ntp_adjtime() system call. The system clock
status shows the value of the time_status variable in the kernel. The bits of
the variable are used to control these functions and record error conditions as
they exist.

UNSYNC - set/cleared by the caller to indicate clock unsynchronized (e.g., when
no peers are reachable). This flag is usually controlled by an application
program, but the operating system may also set it. CLOCKERR - set/cleared by the
external hardware clock driver to indicate hardware fault.

Status map: 0 - bit unset, 1 - bit set.

system.clock_status

A typical NTP client regularly polls one or more NTP servers. The client must
compute its time offset and round-trip delay. Time offset is the difference in
absolute time between the two clocks.
system.clock_sync_offset



IPC SEMAPHORES

System V semaphores are an inter-process communication (IPC) mechanism. They
allow processes, or threads within a process, to synchronize their actions. They
are often used to monitor and control the availability of system resources such
as shared memory segments. For details, see svipc(7). To see the host's IPC
semaphore information, run ipcs -us. For limits, run ipcs -ls.

Number of allocated System V IPC semaphores. The system-wide limit on the number
of semaphores in all semaphore sets is specified in /proc/sys/kernel/sem file
(2nd field).
system.ipc_semaphores

Number of used System V IPC semaphore arrays (sets). Semaphores support
semaphore sets where each one is a counting semaphore. So when an application
requests semaphores, the kernel releases them in sets. The system-wide limit on
the maximum number of semaphore sets is specified in /proc/sys/kernel/sem file
(4th field).
system.ipc_semaphore_arrays
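Both limits referenced above live in the single four-field file /proc/sys/kernel/sem, whose fields are SEMMSL, SEMMNS (the 2nd field), SEMOPM and SEMMNI (the 4th field). A hypothetical parser sketch:

```python
def parse_sem_limits(text):
    """Interpret the four fields of /proc/sys/kernel/sem:
    SEMMSL SEMMNS SEMOPM SEMMNI."""
    semmsl, semmns, semopm, semmni = (int(x) for x in text.split()[:4])
    return {
        "max_per_set": semmsl,       # SEMMSL: semaphores per set
        "max_total": semmns,         # SEMMNS: semaphores system-wide (2nd field)
        "max_ops_per_call": semopm,  # SEMOPM: operations per semop() call
        "max_sets": semmni,          # SEMMNI: number of sets (4th field)
    }

# On a live Linux host:
# with open("/proc/sys/kernel/sem") as f:
#     print(parse_sem_limits(f.read()))
```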



IPC SHARED MEMORY

System V shared memory is an inter-process communication (IPC) mechanism. It
allows processes to communicate information by sharing a region of memory. It is
the fastest form of inter-process communication available since no kernel
involvement occurs when data is passed between the processes (no copying).
Typically, processes must synchronize their access to a shared memory object,
using, for example, POSIX semaphores. For details, see svipc(7). To see the host
IPC shared memory information, run ipcs -um. For limits, run ipcs -lm.

Number of allocated System V IPC memory segments. The system-wide maximum
number of shared memory segments that can be created is specified in the
/proc/sys/kernel/shmmni file.
system.shared_memory_segments

Amount of memory currently used by System V IPC memory segments. The run-time
limit on the maximum shared memory segment size that can be created is
specified in the /proc/sys/kernel/shmmax file.
system.shared_memory_bytes
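The per-segment details behind these charts are exposed in /proc/sysvipc/shm,
one segment per line after a header. A hedged sketch that sums the size column
(4th field, per proc(5)) to approximate system.shared_memory_bytes; the sample
lines below are illustrative, not real output:

```python
def sum_shm_bytes(text: str) -> int:
    """Total bytes across all SysV shared memory segments."""
    lines = text.strip().splitlines()[1:]  # skip the header line
    return sum(int(line.split()[3]) for line in lines)

sample = (
    "key shmid perms size cpid lpid nattch uid gid cuid cgid "
    "atime dtime ctime rss swap\n"
    "0 3 1600 4096 1201 1201 2 1000 1000 1000 1000 0 0 0 4096 0\n"
    "0 7 1600 65536 1544 1544 1 1000 1000 1000 1000 0 0 0 65536 0\n"
)
total = sum_shm_bytes(sample)
```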


--------------------------------------------------------------------------------


CPUS

Detailed information for each CPU of the system. A summary of the system for all
CPUs can be found at the System Overview section.



CPUFREQ


The frequency measures the number of cycles your CPU executes per second.
cpu.cpufreq



THROTTLING

CPU throttling is commonly used to automatically slow down the computer when
possible, in order to use less energy and conserve battery life.

The number of adjustments made to the clock speed of the CPU based on its core
temperature.
cpu.core_throttling



POWERCAP


cpu.powercap_intel_rapl_zone_package-0

cpu.powercap_intel_rapl_subzones_package-0


--------------------------------------------------------------------------------


MEMORY

Detailed information about the memory management of the system.



OVERVIEW


Available Memory is the kernel's estimate of the amount of RAM that can be
used by userspace processes without causing swapping.
mem.available

Committed Memory is the sum of all memory which has been allocated by
processes.
mem.committed
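Both values come straight from /proc/meminfo. A minimal sketch, assuming the
standard "Name: value kB" line format; the sample text is illustrative:

```python
def meminfo_kb(text: str, keys: tuple) -> dict:
    """Extract selected /proc/meminfo counters (values are in kB)."""
    out = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        if name in keys:
            out[name] = int(rest.split()[0])
    return out

sample = "MemTotal: 16318480 kB\nMemAvailable: 9456220 kB\nCommitted_AS: 5123456 kB\n"
vals = meminfo_kb(sample, ("MemAvailable", "Committed_AS"))
```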

mem.directmaps



OOM KILLS


The number of processes killed by Out of Memory Killer. The kernel's OOM killer
is summoned when the system runs short of free memory and is unable to proceed
without killing one or more processes. It tries to pick the process whose demise
will free the most memory while causing the least misery for users of the
system. This counter also includes processes within containers that have
exceeded the memory limit.
mem.oom_kill



ZSWAP


mem.zswapio



SWAP



System swap I/O.

In - pages the system has swapped in from disk to RAM. Out - pages the system
has swapped out from RAM to disk.
mem.swapio



PAGE FAULTS



A page fault is a type of interrupt, called a trap, raised by computer
hardware when a running program accesses a memory page that is mapped into the
virtual address space but not actually loaded into main memory.



Minor - the page is loaded in memory at the time the fault is generated, but is
not marked in the memory management unit as being loaded in memory. Major -
generated when the system needs to load the memory page from disk or swap
memory.



mem.pgfaults
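A sketch of how the Minor/Major split can be derived from /proc/vmstat,
assuming the usual pgfault (all faults) and pgmajfault (major faults only)
counters, so minor = pgfault - pgmajfault; the sample values are illustrative:

```python
def fault_split(vmstat_text: str) -> dict:
    """Split total page faults into minor and major from vmstat counters."""
    counters = {}
    for line in vmstat_text.strip().splitlines():
        name, value = line.split()
        counters[name] = int(value)
    major = counters["pgmajfault"]
    return {"minor": counters["pgfault"] - major, "major": major}

split = fault_split("pgfault 1000000\npgmajfault 1200\n")
```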



WRITEBACK


Dirty is the amount of memory waiting to be written to disk. Writeback is how
much memory is actively being written to disk.
mem.writeback



KERNEL



The total amount of memory being used by the kernel.

Slab - used by the kernel to cache data structures for its own use. KernelStack
- allocated for each task done by the kernel. PageTables - dedicated to the
lowest level of page tables (A page table is used to turn a virtual address into
a physical memory address). VmallocUsed - being used as virtual address space.
Percpu - allocated to the per-CPU allocator used to back per-CPU allocations
(excludes the cost of metadata). When you create a per-CPU variable, each
processor on the system gets its own copy of that variable.

mem.kernel



SLAB



Slab memory statistics.



Reclaimable - amount of memory which the kernel can reuse. Unreclaimable -
memory that cannot be reused even when the kernel is lacking memory.

mem.slab



RECLAIMING


mem.reclaiming



CMA


mem.cma



HUGEPAGES

Hugepages is a feature that allows the kernel to utilize the multiple page
size capabilities of modern hardware architectures. The kernel creates
multiple pages of virtual memory, mapped from both physical RAM and swap.
There is a mechanism in the CPU architecture called the "Translation Lookaside
Buffer" (TLB) that manages the mapping of virtual memory pages to actual
physical memory addresses. The TLB is a limited hardware resource, so
utilizing a large amount of physical memory with the default page size
consumes the TLB and adds processing overhead. By utilizing Huge Pages, the
kernel is able to create pages of much larger sizes, with each page consuming
a single resource in the TLB. Huge Pages are pinned to physical RAM and cannot
be swapped/paged out.

mem.thp

mem.thp_details

mem.thp_faults

mem.thp_file

mem.thp_zero

mem.thp_collapse

mem.thp_split

mem.thp_swapout

mem.thp_compact



DEDUPER (KSM)

Kernel Same-page Merging (KSM) performance monitoring, read from several files
in /sys/kernel/mm/ksm/. KSM is a memory-saving de-duplication feature in the
Linux kernel. The KSM daemon ksmd periodically scans those areas of user memory
which have been registered with it, looking for pages of identical content which
can be replaced by a single write-protected page.

mem.ksm_cow



BALLOON


mem.balloon



ECC

ECC memory is a type of computer data storage that uses an error correction code
(ECC) to detect and correct n-bit data corruption which occurs in memory.
Typically, ECC memory maintains a memory system immune to single-bit errors: the
data that is read from each word is always the same as the data that had been
written to it, even if one of the bits actually stored has been flipped to the
wrong state.

Memory errors can be classified into two types: Soft errors randomly corrupt
bits but do not leave physical damage; they are transient, not repeatable, and
can be caused by electrical or magnetic interference. Hard errors corrupt bits
in a repeatable manner because of a physical/hardware defect or an
environmental problem.


The amount of memory with physical corruption problems, identified by ECC and
set aside by the kernel so it does not get used.
mem.hwcorrupt



FRAGMENTATION

These charts show whether the kernel will compact memory or direct reclaim to
satisfy a high-order allocation. The extfrag/extfrag_index file in debugfs
shows what the fragmentation index for each order is in each zone in the
system. Values tending towards 0 imply allocations would fail due to lack of
memory, values towards 1000 imply failures are due to fragmentation, and -1
implies that the allocation will succeed as long as watermarks are met.

mem.fragmentation_index_node_0_dma

mem.fragmentation_index_node_0_dma32

mem.fragmentation_index_node_0_normal
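The interpretation above can be restated as code. The 500 cut-off separating
"lack of memory" from "fragmentation" is an arbitrary midpoint chosen for
illustration, not kernel policy:

```python
def interpret_fragmentation_index(index: float) -> str:
    """Explain a fragmentation index value on the chart's -1..1000 scale."""
    if index < 0:
        return "allocation will succeed as long as watermarks are met"
    if index <= 500:  # arbitrary midpoint, for illustration only
        return "failures would be due to lack of memory"
    return "failures would be due to fragmentation"
```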


--------------------------------------------------------------------------------


DISKS

Charts with performance information for all the system disks. Special care has
been taken to present disk performance metrics in a way compatible with
iostat -x. By default, netdata does not render performance charts for
individual partitions and unmounted virtual disks. Disabled charts can still
be enabled by configuring the relevant settings in the netdata configuration
file.



IO

The amount of data transferred to and from disk.
disk.sda



SDA

disk_util.sda

The amount of discarded data that is no longer in use by a mounted file system.
disk_ext.sda

Completed disk I/O operations. Keep in mind the number of operations requested
might be higher, since the system is able to merge requests that are adjacent
to each other (see the merged operations chart).
disk_ops.sda


The number (after merges) of completed discard/flush requests.

Discard commands inform disks which blocks of data are no longer considered to
be in use and therefore can be erased internally. They are useful for
solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming
enables the SSD to handle garbage collection more efficiently, which would
otherwise slow down future write operations to the involved blocks.

Flush operations transfer all modified in-core data (i.e., modified buffer cache
pages) to the disk device so that all changed information can be retrieved even
if the system crashes or is rebooted. Flush requests are executed by disks.
Flush requests are not tracked for partitions. Before being merged, flush
operations are counted as writes.

disk_ext_ops.sda

I/O operations currently in progress. This metric is a snapshot - it is not an
average over the last interval.
disk_qops.sda

Backlog is an indication of the duration of pending disk operations. On every
I/O event the system multiplies the time spent doing I/O since the last update
of this field by the number of pending operations. While not accurate, this
metric can provide an indication of the expected completion time of the
operations in progress.
disk_backlog.sda

Disk Busy Time measures the amount of time the disk was busy with something.
disk_busy.sda

Disk Utilization measures the amount of time the disk was busy with something.
This is not related to its performance. 100% means that the system always had an
outstanding operation on the disk. Keep in mind that depending on the underlying
technology of the disk, 100% here may or may not be an indication of congestion.
disk_util.sda

The average time for I/O requests issued to the device to be served. This
includes the time spent by the requests in queue and the time spent servicing
them.
disk_await.sda

The average time for discard/flush requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent
servicing them.
disk_ext_await.sda

The average I/O operation size.
disk_avgsz.sda

The average discard operation size.
disk_ext_avgsz.sda

The average service time for completed I/O operations. This metric is
calculated using the total busy time of the disk and the number of completed
operations. If the disk is able to execute multiple operations in parallel,
the reported average service time will be misleading.
disk_svctm.sda
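The calculation described above amounts to dividing the disk's total busy time
over an interval by the operations completed in it. A trivial sketch with
illustrative values:

```python
def svctm_ms(busy_ms: float, completed_ops: int) -> float:
    """Average service time per completed I/O operation, in milliseconds."""
    return busy_ms / completed_ops if completed_ops else 0.0

# e.g. 500 ms of busy time spread over 250 completed operations
avg = svctm_ms(500.0, 250)
```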

The number of merged disk operations. The system is able to merge adjacent I/O
operations, for example two 4KB reads can become one 8KB read before being
given to the disk.
disk_mops.sda

The number of merged discard disk operations. Discard operations which are
adjacent to each other may be merged for efficiency.
disk_ext_mops.sda

The sum of the duration of all completed I/O operations. This number can exceed
the interval if the disk is able to execute I/O operations in parallel.
disk_iotime.sda

The sum of the duration of all completed discard/flush operations. This number
can exceed the interval if the disk is able to execute discard/flush operations
in parallel.
disk_ext_iotime.sda



/


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._
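A quick way to check inode usage for a mount point is os.statvfs, where
f_files is the total number of inodes and f_ffree the number still free. A
minimal sketch (some filesystems report f_files as 0, meaning no fixed inode
limit):

```python
import os

def inode_usage_percent(path: str) -> float:
    """Percentage of inodes in use on the filesystem containing path."""
    st = os.statvfs(path)
    if st.f_files == 0:  # no fixed inode limit (e.g. some virtual filesystems)
        return 0.0
    return 100.0 * (st.f_files - st.f_ffree) / st.f_files

usage = inode_usage_percent("/")
```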



/BOOT


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._boot



/DEV


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev



/DEV/HUGEPAGES


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev_hugepages

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev_hugepages



/DEV/SHM


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._dev_shm

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._dev_shm



/RUN


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._run

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._run



/RUN/KEYS


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._run_keys

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._run_keys



/RUN/WRAPPERS


Disk space utilization. reserved for root is automatically reserved by the
system to prevent the root user from running out of space.
disk_space._run_wrappers

Inodes (or index nodes) are filesystem objects (e.g. files and directories). On
many types of file system implementations, the maximum number of inodes is fixed
at filesystem creation, limiting the maximum number of files the filesystem can
hold. It is possible for a device to run out of inodes. When this happens, new
files cannot be created on the device, even though there may be free space
available.
disk_inodes._run_wrappers


--------------------------------------------------------------------------------


NETWORKING STACK

Metrics for the networking stack of the system. These metrics are collected
from /proc/net/netstat or by attaching kprobes to kernel functions. They apply
to both IPv4 and IPv6 traffic and relate to the operation of the kernel
networking stack.



TCP


ip.tcppackets

ip.tcperrors

ip.tcpopens

ip.tcpsock

ip.tcphandshake


TCP connection aborts.

BadData - happens while the connection is on FIN_WAIT1 and the kernel receives a
packet with a sequence number beyond the last one for this connection - the
kernel responds with RST (closes the connection). UserClosed - happens when the
kernel receives data on an already closed connection and responds with RST.
NoMemory - happens when there are too many orphaned sockets (not attached to an
fd) and the kernel has to drop a connection - sometimes it will send an RST,
sometimes it won't. Timeout - happens when a connection times out. Linger -
happens when the kernel killed a socket that was already closed by the
application and lingered around for long enough. Failed - happens when the
kernel attempted to send an RST but failed because there was no memory
available.

ip.tcpconnaborts


The SYN queue of the kernel tracks TCP handshakes until connections get fully
established. It overflows when too many incoming TCP connection requests hang in
the half-open state and the server is not configured to fall back to SYN
cookies. Overflows are usually caused by SYN flood DoS attacks.

Drops - number of connections dropped because the SYN queue was full and SYN
cookies were disabled. Cookies - number of SYN cookies sent because the SYN
queue was full.

ip.tcp_syn_queue


The accept queue of the kernel holds the fully established TCP connections,
waiting to be handled by the listening application.

Overflows - the number of established connections that could not be handled
because the receive queue of the listening application was full. Drops - number
of incoming connections that could not be handled, including SYN floods,
overflows, out of memory, security issues, no route to destination, reception of
related ICMP messages, socket is broadcast or multicast.



ip.tcp_accept_queue
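Both queue charts are built from counters on the TcpExt lines of
/proc/net/netstat, which appear as a header line of names followed by a line
of values. A hedged parsing sketch; the counter names and sample values are
illustrative:

```python
def parse_tcpext(text: str) -> dict:
    """Pair the TcpExt header names with the TcpExt values line."""
    lines = text.splitlines()
    for names, values in zip(lines, lines[1:]):
        if names.startswith("TcpExt:") and values.startswith("TcpExt:"):
            return dict(zip(names.split()[1:], map(int, values.split()[1:])))
    return {}

sample = (
    "TcpExt: ListenOverflows ListenDrops TCPReqQFullDoCookies TCPReqQFullDrop\n"
    "TcpExt: 3 5 12 0\n"
)
counters = parse_tcpext(sample)
```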


TCP prevents out-of-order packets by either sequencing them in the correct order
or by requesting the retransmission of out-of-order packets.

Timestamp - detected re-ordering using the timestamp option. SACK - detected
re-ordering using Selective Acknowledgment algorithm. FACK - detected
re-ordering using Forward Acknowledgment algorithm. Reno - detected re-ordering
using Fast Retransmit algorithm.

ip.tcpreorders


TCP maintains an out-of-order queue to keep the out-of-order packets in the TCP
communication.

InQueue - the TCP layer receives an out-of-order packet and has enough memory
to queue it. Dropped - the TCP layer receives an out-of-order packet but does
not have enough memory, so drops it. Merged - the received out-of-order packet
has an overlap with the previous packet; the overlapping part will be dropped.
All these packets are also counted in InQueue. Pruned - packets dropped from
the out-of-order queue because of a socket buffer overrun.

ip.tcpofo


SYN cookies are used to mitigate SYN flood.

Received - after sending a SYN cookie, it came back to us and passed the check.
Sent - an application was not able to accept a connection fast enough, so the
kernel could not store an entry in the queue for this connection. Instead of
dropping it, it sent a SYN cookie to the client. Failed - the MSS decoded from
the SYN cookie is invalid. When this counter is incremented, the received packet
won’t be treated as a SYN cookie.

ip.tcpsyncookies

The number of times a socket was put in memory pressure due to a non-fatal
memory allocation failure (the kernel attempts to work around this situation
by reducing the send buffers, etc.).
ip.tcpmemorypressures



SOCKETS


ip.sockstat_sockets


--------------------------------------------------------------------------------


IPV4 NETWORKING

Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4) is
the fourth version of the Internet Protocol (IP). It is one of the core
protocols of standards-based internetworking methods in the Internet. IPv4 is a
connectionless protocol for use on packet-switched networks. It operates on a
best effort delivery model, in that it does not guarantee delivery, nor does it
assure proper sequencing or avoidance of duplicate delivery. These aspects,
including data integrity, are addressed by an upper layer transport protocol,
such as the Transmission Control Protocol (TCP).



PACKETS



IPv4 packets statistics for this host.

Received - packets received by the IP layer. This counter will be increased
even if the packet is dropped later. Sent - packets sent via the IP layer, for
both unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as
IP Gateways, this counter will include only those packets which were
Source-Routed and the Source-Route option processing was successful. Delivered
- packets delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so
on.

ipv4.packets



ERRORS




The number of discarded IPv4 packets.

InDiscards, OutDiscards - inbound and outbound packets which were chosen to be
discarded even though no errors had been detected to prevent their being
deliverable to a higher-layer protocol. InHdrErrors - input packets that have
been discarded due to errors in their IP headers, including bad checksums,
version number mismatch, other format errors, time-to-live exceeded, errors
discovered in processing their IP options, etc. OutNoRoutes - packets that have
been discarded because no route could be found to transmit them to their
destination. This includes any packets which a host cannot route because all of
its default gateways are down. InAddrErrors - input packets that have been
discarded due to invalid IP address or the destination IP address is not a local
address and IP forwarding is not enabled. InUnknownProtos - input packets which
were discarded because of an unknown or unsupported protocol.

ipv4.errors



BROADCAST


ipv4.bcast

ipv4.bcastpkts



MULTICAST


ipv4.mcast

ipv4.mcastpkts



TCP



The number of TCP sockets in the system in certain states.

Alloc - in any TCP state. Orphan - no longer attached to a socket descriptor in
any user processes, but for which the kernel is still required to maintain state
in order to complete the transport protocol. InUse - in any TCP state, excluding
TIME-WAIT and CLOSED. TimeWait - in the TIME-WAIT state.

ipv4.sockstat_tcp_sockets
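These counts are exposed on the "TCP:" line of /proc/net/sockstat as
alternating name/value tokens. A minimal parsing sketch; the sample line is
illustrative:

```python
def parse_sockstat_tcp(text: str) -> dict:
    """Turn the 'TCP:' line of /proc/net/sockstat into a dict."""
    for line in text.splitlines():
        if line.startswith("TCP:"):
            tokens = line.split()[1:]
            return {k: int(v) for k, v in zip(tokens[::2], tokens[1::2])}
    return {}

stats = parse_sockstat_tcp("sockets: used 291\nTCP: inuse 7 orphan 0 tw 4 alloc 12 mem 2\n")
```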

The amount of memory used by allocated TCP sockets.
ipv4.sockstat_tcp_mem



ICMP



The number of transferred IPv4 ICMP messages.

Received, Sent - ICMP messages which the host received and attempted to send.
Both these counters include errors.

ipv4.icmp

The number of transferred IPv4 ICMP control messages.
ipv4.icmpmsg


The number of IPv4 ICMP errors.

InErrors - received ICMP messages determined to have ICMP-specific errors,
e.g. bad ICMP checksums, bad length, etc. OutErrors - ICMP messages which this
host did not send due to problems discovered within ICMP, such as a lack of
buffers. This counter does not include errors discovered outside the ICMP
layer, such as the inability of IP to route the resultant datagram.
InCsumErrors - received ICMP messages with a bad checksum.

ipv4.icmp_errors



UDP


The number of transferred UDP packets.
ipv4.udppackets


The number of errors encountered during transferring UDP packets.

RcvbufErrors - receive buffer is full. SndbufErrors - send buffer is full, no
kernel memory is available, or the IP layer reported an error when trying to
send the packet and no error queue has been set up. InErrors - an aggregated
counter for all errors, excluding NoPorts. NoPorts - no application is
listening at the destination port. InCsumErrors - a UDP checksum failure was
detected. IgnoredMulti - ignored multicast packets.
ipv4.udperrors

The number of used UDP sockets.
ipv4.sockstat_udp_sockets

The amount of memory used by allocated UDP sockets.
ipv4.sockstat_udp_mem



UDPLITE


The number of transferred UDP-Lite packets.
ipv4.udplite


The number of errors encountered during transferring UDP-Lite packets.

RcvbufErrors - receive buffer is full. SndbufErrors - send buffer is full, no
kernel memory is available, or the IP layer reported an error when trying to
send the packet and no error queue has been set up. InErrors - an aggregated
counter for all errors, excluding NoPorts. NoPorts - no application is
listening at the destination port. InCsumErrors - a UDP checksum failure was
detected. IgnoredMulti - ignored multicast packets.
ipv4.udplite_errors

The number of used UDP-Lite sockets.
ipv4.sockstat_udplite_sockets



ECN


ipv4.ecnpkts



FRAGMENTS



IPv4 reassembly statistics for this system.

OK - packets that have been successfully reassembled. Failed - failures detected
by the IP reassembly algorithm. This is not necessarily a count of discarded IP
fragments since some algorithms can lose track of the number of fragments by
combining them as they are received. All - received IP fragments which needed to
be reassembled.

ipv4.fragsin


IPv4 fragmentation statistics for this system.

OK - packets that have been successfully fragmented. Failed - packets that
have been discarded because they needed to be fragmented but could not be,
e.g. because the Don't Fragment (DF) flag was set. Created - fragments that
have been generated as a result of fragmentation.

ipv4.fragsout

The number of entries in hash tables that are used for packet reassembly.
ipv4.sockstat_frag_sockets

The amount of memory used for packet reassembly.
ipv4.sockstat_frag_mem



RAW


The number of used raw sockets.
ipv4.sockstat_raw_sockets


--------------------------------------------------------------------------------


IPV6 NETWORKING

Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6) is
the most recent version of the Internet Protocol (IP), the communications
protocol that provides an identification and location system for computers on
networks and routes traffic across the Internet. IPv6 was developed by the
Internet Engineering Task Force (IETF) to deal with the long-anticipated problem
of IPv4 address exhaustion. IPv6 is intended to replace IPv4.



PACKETS



IPv6 packet statistics for this host.

Received - packets received by the IP layer. This counter will be increased
even if the packet is dropped later. Sent - packets sent via the IP layer, for
both unicast and multicast packets. This counter does not include any packets
counted in Forwarded. Forwarded - input packets for which this host was not
their final IP destination, as a result of which an attempt was made to find a
route to forward them to that final destination. In hosts which do not act as
IP Gateways, this counter will include only those packets which were
Source-Routed and the Source-Route option processing was successful. Delivered
- packets delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so
on.

ipv6.packets


Total number of received IPv6 packets with ECN bits set in the system.

CEP - congestion encountered. NoECTP - non-ECN-capable transport. ECTP0 and
ECTP1 - ECN-capable transport.

ipv6.ect



ERRORS



The number of discarded IPv6 packets.

InDiscards, OutDiscards - packets which were chosen to be discarded even though
no errors had been detected to prevent their being deliverable to a higher-layer
protocol. InHdrErrors - errors in IP headers, including bad checksums, version
number mismatch, other format errors, time-to-live exceeded, etc. InAddrErrors -
invalid IP address or the destination IP address is not a local address and IP
forwarding is not enabled. InUnknownProtos - unknown or unsupported protocol.
InTooBigErrors - the size exceeded the link MTU. InTruncatedPkts - packet frame
did not carry enough data. InNoRoutes - no route could be found while
forwarding. OutNoRoutes - no route could be found for packets generated by this
host.

ipv6.errors



BROADCAST6


Total IPv6 broadcast traffic.
ipv6.bcast



MULTICAST6


Total IPv6 multicast traffic.
ipv6.mcast

Total transferred IPv6 multicast packets.
ipv6.mcastpkts



TCP6


The number of TCP sockets in any state, excluding TIME-WAIT and CLOSED.
ipv6.sockstat6_tcp_sockets



ICMP6



The number of transferred ICMPv6 messages.

Received, Sent - ICMP messages which the host received and attempted to send.
Both these counters include errors.

ipv6.icmp

The number of transferred ICMPv6 Redirect messages. These messages inform a host
to update its routing information (to send packets on an alternative route).
ipv6.icmpredir


The number of ICMPv6 errors and error messages.

InErrors, OutErrors - bad ICMP messages (bad ICMP checksums, bad length, etc.).
InCsumErrors - wrong checksum.

ipv6.icmperrors

The number of ICMPv6 Echo messages.
ipv6.icmpechos


The number of transferred ICMPv6 Group Membership messages.

Multicast routers send Group Membership Query messages to learn which groups
have members on each of their attached physical networks. Host computers respond
by sending a Group Membership Report for each multicast group joined by the
host. A host computer can also send a Group Membership Report when it joins a
new multicast group. Group Membership Reduction messages are sent when a host
computer leaves a multicast group.

ipv6.groupmemb


The number of transferred ICMPv6 Router Discovery messages.

A Router Solicitation message is sent from a computer host to any routers on
the local area network, requesting that they advertise their presence on the
network. A Router Advertisement message is sent by a router on the local area
network to announce its IP address as available for routing.

ipv6.icmprouter


The number of transferred ICMPv6 Neighbour Discovery messages.

Neighbor Solicitations are used by nodes to determine the link layer address of
a neighbor, or to verify that a neighbor is still reachable via a cached link
layer address. Neighbor Advertisements are used by nodes to respond to a
Neighbor Solicitation message.

ipv6.icmpneighbor

The number of transferred ICMPv6 Multicast Listener Discovery (MLD) messages.
ipv6.icmpmldv2

The number of transferred ICMPv6 messages, broken down by message type.
ipv6.icmptypes



UDP6


The number of transferred UDP packets.
ipv6.udppackets


The number of errors encountered during transferring UDP packets.

RcvbufErrors - the receive buffer is full. SndbufErrors - the send buffer is
full, no kernel memory is available, or the IP layer reported an error when
trying to send the packet and no error queue has been set up. InErrors - an
aggregated counter for all errors, excluding NoPorts. NoPorts - no application
is listening at the destination port. InCsumErrors - a UDP checksum failure was
detected. IgnoredMulti - ignored multicast packets.
ipv6.udperrors

The number of used UDP sockets.
ipv6.sockstat6_udp_sockets



UDPLITE6


The number of transferred UDP-Lite packets.
ipv6.udplitepackets


The number of errors encountered during transferring UDP-Lite packets.

RcvbufErrors - the receive buffer is full. SndbufErrors - the send buffer is
full, no kernel memory is available, or the IP layer reported an error when
trying to send the packet and no error queue has been set up. InErrors - an
aggregated counter for all errors, excluding NoPorts. NoPorts - no application
is listening at the destination port. InCsumErrors - a UDP checksum failure was
detected.

ipv6.udpliteerrors

The number of used UDP-Lite sockets.
ipv6.sockstat6_udplite_sockets



FRAGMENTS6



IPv6 reassembly statistics for this system.

OK - packets that have been successfully reassembled. Failed - failures detected
by the IP reassembly algorithm. This is not necessarily a count of discarded IP
fragments since some algorithms can lose track of the number of fragments by
combining them as they are received. Timeout - reassembly timeouts detected. All
- received IP fragments which needed to be reassembled.

ipv6.fragsin


IPv6 fragmentation statistics for this system.

OK - packets that have been successfully fragmented. Failed - packets that have
been discarded because they needed to be fragmented but could not be, e.g.
because the Don't Fragment (DF) flag was set. All - fragments that have been
generated as a result of fragmentation.

ipv6.fragsout
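The fragment counts above follow directly from the link MTU. A worked sketch of the arithmetic, assuming the RFC 8200 rules (a 40-byte IPv6 header plus an 8-byte Fragment header per fragment, and every fragment's data length except the last a multiple of 8 bytes):

```python
# Estimate how many fragments an IPv6 payload needs for a given
# link MTU. Each fragment carries the 40-byte IPv6 header plus an
# 8-byte Fragment header, and all fragments except the last must
# carry a multiple of 8 data bytes.
def ipv6_fragment_count(payload_len, mtu):
    per_fragment = mtu - 40 - 8           # data bytes per fragment
    per_fragment -= per_fragment % 8      # round down to a multiple of 8
    return -(-payload_len // per_fragment)  # ceiling division

print(ipv6_fragment_count(3000, 1500))  # -> 3
```

With a standard Ethernet MTU of 1500, each fragment carries 1448 data bytes, so a 3000-byte payload is split into three fragments.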

The number of entries in hash tables that are used for packet reassembly.
ipv6.sockstat6_frag_sockets



RAW6


The number of used raw sockets.
ipv6.sockstat6_raw_sockets


--------------------------------------------------------------------------------


NETWORK INTERFACES

Performance metrics for network interfaces.

Netdata retrieves this data by reading the /proc/net/dev file and the
/sys/class/net/ directory.
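Each data line of /proc/net/dev names an interface and then lists eight receive counters followed by eight transmit counters. A minimal parsing sketch, with a sample line (hypothetical values) standing in for the real file:

```python
# Parse one data line of /proc/net/dev: "iface: <8 rx counters>
# <8 tx counters>". Field order follows the kernel's header line.
RX_FIELDS = ("bytes", "packets", "errs", "drop", "fifo", "frame",
             "compressed", "multicast")
TX_FIELDS = ("bytes", "packets", "errs", "drop", "fifo", "colls",
             "carrier", "compressed")

def parse_net_dev_line(line):
    """Return (interface name, rx counter dict, tx counter dict)."""
    name, _, rest = line.partition(":")
    values = [int(v) for v in rest.split()]
    rx = dict(zip(RX_FIELDS, values[:8]))
    tx = dict(zip(TX_FIELDS, values[8:16]))
    return name.strip(), rx, tx

sample = ("enp2s0: 1234567 8910 0 2 0 0 0 15 "
          "7654321 6543 0 0 0 0 0 0")
name, rx, tx = parse_net_dev_line(sample)
print(name, rx["bytes"], tx["packets"])  # -> enp2s0 1234567 6543
```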




ENP2S0

net.enp2s0

net.enp2s0

The amount of traffic transferred by the network interface.
net.enp2s0

The number of packets transferred by the network interface. The received
multicast counter is commonly calculated at the device level (unlike the
received counter) and therefore may include packets which did not reach the
host.
net_packets.enp2s0


The number of errors encountered by the network interface.

Inbound - bad packets received on this interface, including packets dropped due
to invalid length, CRC, frame alignment, and other errors. Outbound - transmit
problems, including frame transmission errors due to loss of carrier, FIFO
underrun/underflow, heartbeat, late collisions, and other problems.

net_errors.enp2s0


The number of packets that have been dropped at the network interface level.

Inbound - packets received but not processed, e.g. due to softnet backlog
overflow, bad/unintended VLAN tags, unknown or unregistered protocols, IPv6
frames when the server is not configured for IPv6. Outbound - packets dropped on
their way to transmission, e.g. due to lack of resources.

net_drops.enp2s0


The number of FIFO errors encountered by the network interface.

Inbound - packets dropped because they did not fit into buffers provided by the
host, e.g. packets larger than the MTU, or because the next buffer in the ring
was not available for a scatter transfer. Outbound - frame transmission errors
due to device FIFO underrun/underflow. This condition occurs when the device
begins transmission of a frame but cannot deliver the entire frame to the
transmitter in time.

net_fifo.enp2s0


The number of errors encountered by the network interface.

Frames - aggregated counter for dropped packets due to invalid length, FIFO
overflow, CRC, and frame alignment errors. Collisions - collisions during packet
transmissions. Carrier - aggregated counter for frame transmission errors due to
excessive collisions, loss of carrier, device FIFO underrun/underflow,
Heartbeat/SQE Test errors, and late collisions.

net_events.enp2s0

The interface's latest negotiated speed, as agreed between the network adapter
and the device it is connected to. This is not the maximum speed supported by
the NIC.
net_speed.enp2s0


The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode cannot be determined. Half duplex - communication is
possible in only one direction at a time. Full duplex - the interface can send
and receive data simultaneously.

net_duplex.enp2s0


The current operational state of the interface.

Unknown - the state cannot be determined. NotPresent - the interface has
missing (typically hardware) components. Down - the interface is unable to
transfer data on L1, e.g. the Ethernet cable is not plugged in or the interface
is administratively down. LowerLayerDown - the interface is down due to the
state of its lower-layer interface(s). Testing - the interface is in testing
mode, e.g. a cable test; it cannot be used for normal traffic until tests
complete. Dormant - the interface is up at L1 but is waiting for an external
event, e.g. for a protocol to establish. Up - the interface is ready to pass
packets and can be used.

net_operstate.enp2s0

The current physical link state of the interface.
net_carrier.enp2s0

The interface's currently configured Maximum Transmission Unit (MTU). The MTU
is the size of the largest protocol data unit that can be communicated in a
single network-layer transaction.
net_mtu.enp2s0



WLP1S0

net.wlp1s0

net.wlp1s0

The amount of traffic transferred by the network interface.
net.wlp1s0

The number of packets transferred by the network interface. The received
multicast counter is commonly calculated at the device level (unlike the
received counter) and therefore may include packets which did not reach the
host.
net_packets.wlp1s0


The number of errors encountered by the network interface.

Inbound - bad packets received on this interface, including packets dropped due
to invalid length, CRC, frame alignment, and other errors. Outbound - transmit
problems, including frame transmission errors due to loss of carrier, FIFO
underrun/underflow, heartbeat, late collisions, and other problems.

net_errors.wlp1s0


The number of packets that have been dropped at the network interface level.

Inbound - packets received but not processed, e.g. due to softnet backlog
overflow, bad/unintended VLAN tags, unknown or unregistered protocols, IPv6
frames when the server is not configured for IPv6. Outbound - packets dropped on
their way to transmission, e.g. due to lack of resources.

net_drops.wlp1s0


The number of FIFO errors encountered by the network interface.

Inbound - packets dropped because they did not fit into buffers provided by the
host, e.g. packets larger than the MTU, or because the next buffer in the ring
was not available for a scatter transfer. Outbound - frame transmission errors
due to device FIFO underrun/underflow. This condition occurs when the device
begins transmission of a frame but cannot deliver the entire frame to the
transmitter in time.

net_fifo.wlp1s0


The number of errors encountered by the network interface.

Frames - aggregated counter for dropped packets due to invalid length, FIFO
overflow, CRC, and frame alignment errors. Collisions - collisions during packet
transmissions. Carrier - aggregated counter for frame transmission errors due to
excessive collisions, loss of carrier, device FIFO underrun/underflow,
Heartbeat/SQE Test errors, and late collisions.

net_events.wlp1s0

The interface's latest negotiated speed, as agreed between the network adapter
and the device it is connected to. This is not the maximum speed supported by
the NIC.
net_speed.wlp1s0


The interface's latest or current duplex that the network adapter negotiated
with the device it is connected to.

Unknown - the duplex mode cannot be determined. Half duplex - communication is
possible in only one direction at a time. Full duplex - the interface can send
and receive data simultaneously.

net_duplex.wlp1s0


The current operational state of the interface.

Unknown - the state cannot be determined. NotPresent - the interface has
missing (typically hardware) components. Down - the interface is unable to
transfer data on L1, e.g. the Ethernet cable is not plugged in or the interface
is administratively down. LowerLayerDown - the interface is down due to the
state of its lower-layer interface(s). Testing - the interface is in testing
mode, e.g. a cable test; it cannot be used for normal traffic until tests
complete. Dormant - the interface is up at L1 but is waiting for an external
event, e.g. for a protocol to establish. Up - the interface is ready to pass
packets and can be used.

net_operstate.wlp1s0

The current physical link state of the interface.
net_carrier.wlp1s0

The interface's currently configured Maximum Transmission Unit (MTU). The MTU
is the size of the largest protocol data unit that can be communicated in a
single network-layer transaction.
net_mtu.wlp1s0


--------------------------------------------------------------------------------


WIRELESS INTERFACES

Performance metrics for wireless interfaces.



WLP1S0


wireless.wlp1s0_status

Overall quality of the link. May be based on the level of contention or
interference, the bit or frame error rate, how good the received signal is, some
timing synchronisation, or other hardware metric.
wireless.wlp1s0_link_quality

Received signal strength (RSSI).
wireless.wlp1s0_signal_level

Background noise level (when no packet is transmitted).
wireless.wlp1s0_noise_level


The number of discarded packets.


NWID - received packets with a different NWID or ESSID, used to detect
configuration problems or the existence of an adjacent network on the same
frequency. Crypt - received packets that the hardware was unable to decrypt.
This can be used to detect invalid encryption settings. Frag - received packets
for which the hardware was not able to properly re-assemble the link-layer
fragments (most likely one was missing). Retry - packets that the hardware
failed to deliver; most MAC protocols retry a packet a number of times before
giving up. Misc - other packets lost in relation to specific wireless
operations.

wireless.wlp1s0_discarded_packets

The number of periodic beacons from the cell or access point that have been
missed. Beacons are sent at regular intervals to maintain cell coordination;
failure to receive them usually indicates that the card is out of range.
wireless.wlp1s0_missed_beacon


--------------------------------------------------------------------------------


FIREWALL (NETFILTER)

Performance metrics of the netfilter components.



CONNECTION TRACKER

Netfilter Connection Tracker performance metrics. The connection tracker keeps
track of all connections of the machine, inbound and outbound. It works by
keeping a database with all open connections, tracking network and address
translation and connection expectations.

The number of entries in the conntrack table.
netfilter.conntrack_sockets
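The conntrack table has a fixed maximum size, so the entry count is most useful as a fraction of that limit. On Linux the two values come from /proc/sys/net/netfilter/nf_conntrack_count and nf_conntrack_max; a minimal sketch with literal sample values standing in for those files:

```python
# Conntrack table utilization: compare the current entry count
# against the table's configured maximum. The sample values stand
# in for /proc/sys/net/netfilter/nf_conntrack_count and
# nf_conntrack_max so the sketch is self-contained.
def conntrack_utilization(count, maximum):
    """Return table utilization as a percentage."""
    return 100.0 * count / maximum

pct = conntrack_utilization(count=16384, maximum=65536)
print(f"{pct:.1f}%")  # -> 25.0%
```

When this percentage approaches 100, new connections are dropped (or old entries are early-dropped), so it is worth alerting on.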


Packet tracking statistics. New (since v4.9) and Ignore (since v5.10) are
hardcoded to zero in recent kernels.

New - conntrack entries added which were not expected before. Ignore - packets
seen which are already connected to a conntrack entry. Invalid - packets seen
which can not be tracked.

netfilter.conntrack_new


The number of changes in conntrack tables.

Inserted, Deleted - conntrack entries which were inserted or removed.
Delete-list - conntrack entries which were put on the dying list.

netfilter.conntrack_changes


The number of events in the "expect" table. Connection tracking expectations are
the mechanism used to "expect" RELATED connections to existing ones. An
expectation is a connection that is expected to happen in a period of time.

Created, Deleted - conntrack entries which were inserted or removed. New -
conntrack entries added after an expectation for them was already present.

netfilter.conntrack_expect


Conntrack errors.

IcmpError - packets which could not be tracked due to an error condition.
InsertFailed - entries for which list insertion was attempted but failed
(happens if the same entry is already present). Drop - packets dropped due to
conntrack failure: either new conntrack entry allocation failed, or a protocol
helper dropped the packet. EarlyDrop - conntrack entries dropped to make room
for new ones when the maximum table size was reached.

netfilter.conntrack_errors


Conntrack table lookup statistics.

Searched - conntrack table lookups performed. Restarted - conntrack table
lookups which had to be restarted due to hashtable resizes. Found - conntrack
table lookups which were successful.

netfilter.conntrack_search
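The per-CPU counters behind these conntrack charts come from /proc/net/stat/nf_conntrack on Linux: a header line of field names followed by one line of hexadecimal values per CPU. A minimal sketch that sums the columns across CPUs, using sample text with a typical subset of field names (the exact set varies by kernel version):

```python
# Sum per-CPU counters in the /proc/net/stat/nf_conntrack format:
# a header line of field names, then one line of hexadecimal
# values per CPU. Sample text stands in for the real file.
SAMPLE = """\
entries searched found drop
000000a1 00000010 0000000c 00000001
000000a1 00000020 00000014 00000000
"""

def parse_conntrack_stat(text):
    """Return a dict of field name -> counter summed over all CPUs."""
    lines = text.splitlines()
    fields = lines[0].split()
    totals = dict.fromkeys(fields, 0)
    for line in lines[1:]:
        for name, value in zip(fields, line.split()):
            totals[name] += int(value, 16)  # values are hexadecimal
    return totals

totals = parse_conntrack_stat(SAMPLE)
print(totals["searched"])  # -> 48
```

Note that the kernel repeats the entries field identically on every CPU line, so for that one field the real collectors take a single line's value rather than the sum.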



NETLINK


netfilter.netlink_new

netfilter.netlink_changes

netfilter.netlink_expect

netfilter.netlink_errors

netfilter.netlink_search


--------------------------------------------------------------------------------


SYSTEMD CADDY


CPU


systemd_caddy.cpu



MEM


systemd_caddy.mem_usage

systemd_caddy.mem

systemd_caddy.writeback

systemd_caddy.pgfaults



DISK


systemd_caddy.io

systemd_caddy.serviced_ops



PIDS


systemd_caddy.pids_current


--------------------------------------------------------------------------------


SYSTEMD DBUS


CPU


systemd_dbus.cpu



MEM


systemd_dbus.mem_usage

systemd_dbus.mem

systemd_dbus.writeback

systemd_dbus.pgfaults



DISK


systemd_dbus.io

systemd_dbus.serviced_ops



PIDS


systemd_dbus.pids_current


--------------------------------------------------------------------------------


SYSTEMD MEALIE


CPU


systemd_mealie.cpu



MEM


systemd_mealie.mem_usage

systemd_mealie.mem

systemd_mealie.writeback

systemd_mealie.pgfaults



DISK


systemd_mealie.io

systemd_mealie.serviced_ops



PIDS


systemd_mealie.pids_current


--------------------------------------------------------------------------------


SYSTEMD NETDATA


CPU


systemd_netdata.cpu



MEM


systemd_netdata.mem_usage

systemd_netdata.mem

systemd_netdata.writeback

systemd_netdata.pgfaults



DISK


systemd_netdata.io

systemd_netdata.serviced_ops



PIDS


systemd_netdata.pids_current


--------------------------------------------------------------------------------


SYSTEMD NETWORKMANAGER


CPU


systemd_networkmanager.cpu



MEM


systemd_networkmanager.mem_usage

systemd_networkmanager.mem

systemd_networkmanager.writeback

systemd_networkmanager.pgfaults



DISK


systemd_networkmanager.io

systemd_networkmanager.serviced_ops



PIDS


systemd_networkmanager.pids_current


--------------------------------------------------------------------------------


SYSTEMD NIX-DAEMON


CPU


systemd_nix-daemon.cpu



MEM


systemd_nix-daemon.mem_usage

systemd_nix-daemon.mem

systemd_nix-daemon.writeback

systemd_nix-daemon.pgfaults



DISK


systemd_nix-daemon.io

systemd_nix-daemon.serviced_ops



PIDS


systemd_nix-daemon.pids_current


--------------------------------------------------------------------------------


SYSTEMD NSCD


CPU


systemd_nscd.cpu



MEM


systemd_nscd.mem_usage

systemd_nscd.mem

systemd_nscd.writeback

systemd_nscd.pgfaults



DISK


systemd_nscd.io

systemd_nscd.serviced_ops



PIDS


systemd_nscd.pids_current


--------------------------------------------------------------------------------


SYSTEMD POSTGRESQL


CPU


systemd_postgresql.cpu



MEM


systemd_postgresql.mem_usage

systemd_postgresql.mem

systemd_postgresql.writeback

systemd_postgresql.pgfaults



DISK


systemd_postgresql.io

systemd_postgresql.serviced_ops



PIDS


systemd_postgresql.pids_current


--------------------------------------------------------------------------------


SYSTEMD RADICALE


CPU


systemd_radicale.cpu



MEM


systemd_radicale.mem_usage

systemd_radicale.mem

systemd_radicale.writeback

systemd_radicale.pgfaults



DISK


systemd_radicale.io

systemd_radicale.serviced_ops



PIDS


systemd_radicale.pids_current


--------------------------------------------------------------------------------


SYSTEMD SSHD


CPU


systemd_sshd.cpu



MEM


systemd_sshd.mem_usage

systemd_sshd.mem

systemd_sshd.writeback

systemd_sshd.pgfaults



DISK


systemd_sshd.io

systemd_sshd.serviced_ops



PIDS


systemd_sshd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-JOURNALD


CPU


systemd_systemd-journald.cpu



MEM


systemd_systemd-journald.mem_usage

systemd_systemd-journald.mem

systemd_systemd-journald.writeback

systemd_systemd-journald.pgfaults



DISK


systemd_systemd-journald.io

systemd_systemd-journald.serviced_ops



PIDS


systemd_systemd-journald.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-LOGIND


CPU


systemd_systemd-logind.cpu



MEM


systemd_systemd-logind.mem_usage

systemd_systemd-logind.mem

systemd_systemd-logind.writeback

systemd_systemd-logind.pgfaults



DISK


systemd_systemd-logind.io

systemd_systemd-logind.serviced_ops



PIDS


systemd_systemd-logind.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-OOMD


CPU


systemd_systemd-oomd.cpu



MEM


systemd_systemd-oomd.mem_usage

systemd_systemd-oomd.mem

systemd_systemd-oomd.writeback

systemd_systemd-oomd.pgfaults



DISK


systemd_systemd-oomd.io

systemd_systemd-oomd.serviced_ops



PIDS


systemd_systemd-oomd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-TIMESYNCD


CPU


systemd_systemd-timesyncd.cpu



MEM


systemd_systemd-timesyncd.mem_usage

systemd_systemd-timesyncd.mem

systemd_systemd-timesyncd.writeback

systemd_systemd-timesyncd.pgfaults



DISK


systemd_systemd-timesyncd.io

systemd_systemd-timesyncd.serviced_ops



PIDS


systemd_systemd-timesyncd.pids_current


--------------------------------------------------------------------------------


SYSTEMD SYSTEMD-UDEVD


CPU


systemd_systemd-udevd.cpu



MEM


systemd_systemd-udevd.mem_usage

systemd_systemd-udevd.mem

systemd_systemd-udevd.writeback

systemd_systemd-udevd.pgfaults



DISK


systemd_systemd-udevd.io

systemd_systemd-udevd.serviced_ops



PIDS


systemd_systemd-udevd.pids_current


--------------------------------------------------------------------------------


SYSTEMD VAULTWARDEN


CPU


systemd_vaultwarden.cpu



MEM


systemd_vaultwarden.mem_usage

systemd_vaultwarden.mem

systemd_vaultwarden.writeback

systemd_vaultwarden.pgfaults



DISK


systemd_vaultwarden.io

systemd_vaultwarden.serviced_ops



PIDS


systemd_vaultwarden.pids_current


--------------------------------------------------------------------------------


SYSTEMD WPA SUPPLICANT


CPU


systemd_wpa_supplicant.cpu



MEM


systemd_wpa_supplicant.mem_usage

systemd_wpa_supplicant.mem

systemd_wpa_supplicant.writeback

systemd_wpa_supplicant.pgfaults



DISK


systemd_wpa_supplicant.io

systemd_wpa_supplicant.serviced_ops



PIDS


systemd_wpa_supplicant.pids_current


--------------------------------------------------------------------------------


APP


CPU


app.bitcoin-node_cpu_utilization

app.build_cpu_utilization

app.debugfs_plugin_cpu_utilization

app.go_d_plugin_cpu_utilization

app.gui_cpu_utilization

app.httpd_cpu_utilization

app.i3_cpu_utilization

app.kernel_cpu_utilization

app.khugepaged_cpu_utilization

app.ksmd_cpu_utilization

app.logs_cpu_utilization

app.media_cpu_utilization

app.netdata_cpu_utilization

app.netmanager_cpu_utilization

app.network-viewer_plugin_cpu_utilization

app.nfacct_plugin_cpu_utilization

app.nosql_cpu_utilization

app.other_cpu_utilization

app.puma_cpu_utilization

app.python_d_plugin_cpu_utilization

app.sidekiq_cpu_utilization

app.ssh_cpu_utilization

app.system_cpu_utilization

app.systemd-journal_plugin_cpu_utilization

app.tc-qos-helper_cpu_utilization

app.time_cpu_utilization

app.unicorn_cpu_utilization

app.wifi_cpu_utilization

app.bitcoin-node_cpu_context_switches

app.build_cpu_context_switches

app.debugfs_plugin_cpu_context_switches

app.go_d_plugin_cpu_context_switches

app.gui_cpu_context_switches

app.httpd_cpu_context_switches

app.i3_cpu_context_switches

app.kernel_cpu_context_switches

app.khugepaged_cpu_context_switches

app.ksmd_cpu_context_switches

app.logs_cpu_context_switches

app.media_cpu_context_switches

app.netdata_cpu_context_switches

app.netmanager_cpu_context_switches

app.network-viewer_plugin_cpu_context_switches

app.nfacct_plugin_cpu_context_switches

app.nosql_cpu_context_switches

app.other_cpu_context_switches

app.puma_cpu_context_switches

app.python_d_plugin_cpu_context_switches

app.sidekiq_cpu_context_switches

app.ssh_cpu_context_switches

app.system_cpu_context_switches

app.systemd-journal_plugin_cpu_context_switches

app.tc-qos-helper_cpu_context_switches

app.time_cpu_context_switches

app.unicorn_cpu_context_switches

app.wifi_cpu_context_switches



MEM


app.bitcoin-node_mem_private_usage

app.build_mem_private_usage

app.debugfs_plugin_mem_private_usage

app.go_d_plugin_mem_private_usage

app.gui_mem_private_usage

app.httpd_mem_private_usage

app.i3_mem_private_usage

app.kernel_mem_private_usage

app.khugepaged_mem_private_usage

app.ksmd_mem_private_usage

app.logs_mem_private_usage

app.media_mem_private_usage

app.netdata_mem_private_usage

app.netmanager_mem_private_usage

app.network-viewer_plugin_mem_private_usage

app.nfacct_plugin_mem_private_usage

app.nosql_mem_private_usage

app.other_mem_private_usage

app.puma_mem_private_usage

app.python_d_plugin_mem_private_usage

app.sidekiq_mem_private_usage

app.ssh_mem_private_usage

app.system_mem_private_usage

app.systemd-journal_plugin_mem_private_usage

app.tc-qos-helper_mem_private_usage

app.time_mem_private_usage

app.unicorn_mem_private_usage

app.wifi_mem_private_usage

app.bitcoin-node_mem_usage

app.build_mem_usage

app.debugfs_plugin_mem_usage

app.go_d_plugin_mem_usage

app.gui_mem_usage

app.httpd_mem_usage

app.i3_mem_usage

app.kernel_mem_usage

app.khugepaged_mem_usage

app.ksmd_mem_usage

app.logs_mem_usage

app.media_mem_usage

app.netdata_mem_usage

app.netmanager_mem_usage

app.network-viewer_plugin_mem_usage

app.nfacct_plugin_mem_usage

app.nosql_mem_usage

app.other_mem_usage

app.puma_mem_usage

app.python_d_plugin_mem_usage

app.sidekiq_mem_usage

app.ssh_mem_usage

app.system_mem_usage

app.systemd-journal_plugin_mem_usage

app.tc-qos-helper_mem_usage

app.time_mem_usage

app.unicorn_mem_usage

app.wifi_mem_usage

app.bitcoin-node_mem_page_faults

app.build_mem_page_faults

app.debugfs_plugin_mem_page_faults

app.go_d_plugin_mem_page_faults

app.gui_mem_page_faults

app.httpd_mem_page_faults

app.i3_mem_page_faults

app.kernel_mem_page_faults

app.khugepaged_mem_page_faults

app.ksmd_mem_page_faults

app.logs_mem_page_faults

app.media_mem_page_faults

app.netdata_mem_page_faults

app.netmanager_mem_page_faults

app.network-viewer_plugin_mem_page_faults

app.nfacct_plugin_mem_page_faults

app.nosql_mem_page_faults

app.other_mem_page_faults

app.puma_mem_page_faults

app.python_d_plugin_mem_page_faults

app.sidekiq_mem_page_faults

app.ssh_mem_page_faults

app.system_mem_page_faults

app.systemd-journal_plugin_mem_page_faults

app.tc-qos-helper_mem_page_faults

app.time_mem_page_faults

app.unicorn_mem_page_faults

app.wifi_mem_page_faults

app.bitcoin-node_swap_usage

app.bitcoin-node_vmem_usage

app.build_swap_usage

app.build_vmem_usage

app.debugfs_plugin_swap_usage

app.debugfs_plugin_vmem_usage

app.go_d_plugin_swap_usage

app.go_d_plugin_vmem_usage

app.gui_swap_usage

app.gui_vmem_usage

app.httpd_swap_usage

app.httpd_vmem_usage

app.i3_swap_usage

app.i3_vmem_usage

app.kernel_swap_usage

app.kernel_vmem_usage

app.khugepaged_swap_usage

app.khugepaged_vmem_usage

app.ksmd_swap_usage

app.ksmd_vmem_usage

app.logs_swap_usage

app.logs_vmem_usage

app.media_swap_usage

app.media_vmem_usage

app.netdata_swap_usage

app.netdata_vmem_usage

app.netmanager_swap_usage

app.netmanager_vmem_usage

app.network-viewer_plugin_swap_usage

app.network-viewer_plugin_vmem_usage

app.nfacct_plugin_swap_usage

app.nfacct_plugin_vmem_usage

app.nosql_swap_usage

app.nosql_vmem_usage

app.other_swap_usage

app.other_vmem_usage

app.puma_swap_usage

app.puma_vmem_usage

app.python_d_plugin_swap_usage

app.python_d_plugin_vmem_usage

app.sidekiq_swap_usage

app.sidekiq_vmem_usage

app.ssh_swap_usage

app.ssh_vmem_usage

app.system_swap_usage

app.system_vmem_usage

app.systemd-journal_plugin_swap_usage

app.systemd-journal_plugin_vmem_usage

app.tc-qos-helper_swap_usage

app.tc-qos-helper_vmem_usage

app.time_swap_usage

app.time_vmem_usage

app.unicorn_swap_usage

app.unicorn_vmem_usage

app.wifi_swap_usage

app.wifi_vmem_usage



DISK


app.bitcoin-node_disk_physical_io

app.build_disk_physical_io

app.debugfs_plugin_disk_physical_io

app.go_d_plugin_disk_physical_io

app.gui_disk_physical_io

app.httpd_disk_physical_io

app.i3_disk_physical_io

app.kernel_disk_physical_io

app.khugepaged_disk_physical_io

app.ksmd_disk_physical_io

app.logs_disk_physical_io

app.media_disk_physical_io

app.netdata_disk_physical_io

app.netmanager_disk_physical_io

app.network-viewer_plugin_disk_physical_io

app.nfacct_plugin_disk_physical_io

app.nosql_disk_physical_io

app.other_disk_physical_io

app.puma_disk_physical_io

app.python_d_plugin_disk_physical_io

app.sidekiq_disk_physical_io

app.ssh_disk_physical_io

app.system_disk_physical_io

app.systemd-journal_plugin_disk_physical_io

app.tc-qos-helper_disk_physical_io

app.time_disk_physical_io

app.unicorn_disk_physical_io

app.wifi_disk_physical_io

app.bitcoin-node_disk_logical_io

app.build_disk_logical_io

app.debugfs_plugin_disk_logical_io

app.go_d_plugin_disk_logical_io

app.gui_disk_logical_io

app.httpd_disk_logical_io

app.i3_disk_logical_io

app.kernel_disk_logical_io

app.khugepaged_disk_logical_io

app.ksmd_disk_logical_io

app.logs_disk_logical_io

app.media_disk_logical_io

app.netdata_disk_logical_io

app.netmanager_disk_logical_io

app.network-viewer_plugin_disk_logical_io

app.nfacct_plugin_disk_logical_io

app.nosql_disk_logical_io

app.other_disk_logical_io

app.puma_disk_logical_io

app.python_d_plugin_disk_logical_io

app.sidekiq_disk_logical_io

app.ssh_disk_logical_io

app.system_disk_logical_io

app.systemd-journal_plugin_disk_logical_io

app.tc-qos-helper_disk_logical_io

app.time_disk_logical_io

app.unicorn_disk_logical_io

app.wifi_disk_logical_io



PROCESSES


app.bitcoin-node_processes

app.build_processes

app.debugfs_plugin_processes

app.go_d_plugin_processes

app.gui_processes

app.httpd_processes

app.i3_processes

app.kernel_processes

app.khugepaged_processes

app.ksmd_processes

app.logs_processes

app.media_processes

app.netdata_processes

app.netmanager_processes

app.network-viewer_plugin_processes

app.nfacct_plugin_processes

app.nosql_processes

app.other_processes

app.puma_processes

app.python_d_plugin_processes

app.sidekiq_processes

app.ssh_processes

app.system_processes

app.systemd-journal_plugin_processes

app.tc-qos-helper_processes

app.time_processes

app.unicorn_processes

app.wifi_processes

app.bitcoin-node_threads

app.build_threads

app.debugfs_plugin_threads

app.go_d_plugin_threads

app.gui_threads

app.httpd_threads

app.i3_threads

app.kernel_threads

app.khugepaged_threads

app.ksmd_threads

app.logs_threads

app.media_threads

app.netdata_threads

app.netmanager_threads

app.network-viewer_plugin_threads

app.nfacct_plugin_threads

app.nosql_threads

app.other_threads

app.puma_threads

app.python_d_plugin_threads

app.sidekiq_threads

app.ssh_threads

app.system_threads

app.systemd-journal_plugin_threads

app.tc-qos-helper_threads

app.time_threads

app.unicorn_threads

app.wifi_threads



FDS


app.bitcoin-node_fds_open_limit

app.build_fds_open_limit

app.debugfs_plugin_fds_open_limit

app.go_d_plugin_fds_open_limit

app.gui_fds_open_limit

app.httpd_fds_open_limit

app.i3_fds_open_limit

app.kernel_fds_open_limit

app.khugepaged_fds_open_limit

app.ksmd_fds_open_limit

app.logs_fds_open_limit

app.media_fds_open_limit

app.netdata_fds_open_limit

app.netmanager_fds_open_limit

app.network-viewer_plugin_fds_open_limit

app.nfacct_plugin_fds_open_limit

app.nosql_fds_open_limit

app.other_fds_open_limit

app.puma_fds_open_limit

app.python_d_plugin_fds_open_limit

app.sidekiq_fds_open_limit

app.ssh_fds_open_limit

app.system_fds_open_limit

app.systemd-journal_plugin_fds_open_limit

app.tc-qos-helper_fds_open_limit

app.time_fds_open_limit

app.unicorn_fds_open_limit

app.wifi_fds_open_limit

app.bitcoin-node_fds_open

app.build_fds_open

app.debugfs_plugin_fds_open

app.go_d_plugin_fds_open

app.gui_fds_open

app.httpd_fds_open

app.i3_fds_open

app.kernel_fds_open

app.khugepaged_fds_open

app.ksmd_fds_open

app.logs_fds_open

app.media_fds_open

app.netdata_fds_open

app.netmanager_fds_open

app.network-viewer_plugin_fds_open

app.nfacct_plugin_fds_open

app.nosql_fds_open

app.other_fds_open

app.puma_fds_open

app.python_d_plugin_fds_open

app.sidekiq_fds_open

app.ssh_fds_open

app.system_fds_open

app.systemd-journal_plugin_fds_open

app.tc-qos-helper_fds_open

app.time_fds_open

app.unicorn_fds_open

app.wifi_fds_open



UPTIME


app.bitcoin-node_uptime

app.build_uptime

app.debugfs_plugin_uptime

app.go_d_plugin_uptime

app.gui_uptime

app.httpd_uptime

app.i3_uptime

app.kernel_uptime

app.khugepaged_uptime

app.ksmd_uptime

app.logs_uptime

app.media_uptime

app.netdata_uptime

app.netmanager_uptime

app.network-viewer_plugin_uptime

app.nfacct_plugin_uptime

app.nosql_uptime

app.other_uptime

app.puma_uptime

app.python_d_plugin_uptime

app.sidekiq_uptime

app.ssh_uptime

app.system_uptime

app.systemd-journal_plugin_uptime

app.tc-qos-helper_uptime

app.time_uptime

app.unicorn_uptime

app.wifi_uptime


--------------------------------------------------------------------------------


USER


CPU


user.63892_cpu_utilization

user.caddy_cpu_utilization

user.gitlab_cpu_utilization

user.jellyfin_cpu_utilization

user.maria_cpu_utilization

user.messagebus_cpu_utilization

user.netdata_cpu_utilization

user.nextcloud_cpu_utilization

user.nginx_cpu_utilization

user.nixbld1_cpu_utilization

user.nixbld10_cpu_utilization

user.nixbld11_cpu_utilization

user.nixbld12_cpu_utilization

user.nscd_cpu_utilization

user.postgres_cpu_utilization

user.radicale_cpu_utilization

user.root_cpu_utilization

user.sshd_cpu_utilization

user.systemd-oom_cpu_utilization

user.systemd-timesync_cpu_utilization

user.vaultwarden_cpu_utilization

user.63892_cpu_context_switches

user.caddy_cpu_context_switches

user.gitlab_cpu_context_switches

user.jellyfin_cpu_context_switches

user.maria_cpu_context_switches

user.messagebus_cpu_context_switches

user.netdata_cpu_context_switches

user.nextcloud_cpu_context_switches

user.nginx_cpu_context_switches

user.nixbld1_cpu_context_switches

user.nixbld10_cpu_context_switches

user.nixbld11_cpu_context_switches

user.nixbld12_cpu_context_switches

user.nscd_cpu_context_switches

user.postgres_cpu_context_switches

user.radicale_cpu_context_switches

user.root_cpu_context_switches

user.sshd_cpu_context_switches

user.systemd-oom_cpu_context_switches

user.systemd-timesync_cpu_context_switches

user.vaultwarden_cpu_context_switches



MEM


user.63892_mem_private_usage

user.caddy_mem_private_usage

user.gitlab_mem_private_usage

user.jellyfin_mem_private_usage

user.maria_mem_private_usage

user.messagebus_mem_private_usage

user.netdata_mem_private_usage

user.nextcloud_mem_private_usage

user.nginx_mem_private_usage

user.nixbld1_mem_private_usage

user.nixbld10_mem_private_usage

user.nixbld11_mem_private_usage

user.nixbld12_mem_private_usage

user.nscd_mem_private_usage

user.postgres_mem_private_usage

user.radicale_mem_private_usage

user.root_mem_private_usage

user.sshd_mem_private_usage

user.systemd-oom_mem_private_usage

user.systemd-timesync_mem_private_usage

user.vaultwarden_mem_private_usage

user.63892_mem_usage

user.caddy_mem_usage

user.gitlab_mem_usage

user.jellyfin_mem_usage

user.maria_mem_usage

user.messagebus_mem_usage

user.netdata_mem_usage

user.nextcloud_mem_usage

user.nginx_mem_usage

user.nixbld1_mem_usage

user.nixbld10_mem_usage

user.nixbld11_mem_usage

user.nixbld12_mem_usage

user.nscd_mem_usage

user.postgres_mem_usage

user.radicale_mem_usage

user.root_mem_usage

user.sshd_mem_usage

user.systemd-oom_mem_usage

user.systemd-timesync_mem_usage

user.vaultwarden_mem_usage

user.63892_mem_page_faults

user.caddy_mem_page_faults

user.gitlab_mem_page_faults

user.jellyfin_mem_page_faults

user.maria_mem_page_faults

user.messagebus_mem_page_faults

user.netdata_mem_page_faults

user.nextcloud_mem_page_faults

user.nginx_mem_page_faults

user.nixbld1_mem_page_faults

user.nixbld10_mem_page_faults

user.nixbld11_mem_page_faults

user.nixbld12_mem_page_faults

user.nscd_mem_page_faults

user.postgres_mem_page_faults

user.radicale_mem_page_faults

user.root_mem_page_faults

user.sshd_mem_page_faults

user.systemd-oom_mem_page_faults

user.systemd-timesync_mem_page_faults

user.vaultwarden_mem_page_faults

user.63892_swap_usage

user.63892_vmem_usage

user.caddy_swap_usage

user.caddy_vmem_usage

user.gitlab_swap_usage

user.gitlab_vmem_usage

user.jellyfin_swap_usage

user.jellyfin_vmem_usage

user.maria_swap_usage

user.maria_vmem_usage

user.messagebus_swap_usage

user.messagebus_vmem_usage

user.netdata_swap_usage

user.netdata_vmem_usage

user.nextcloud_swap_usage

user.nextcloud_vmem_usage

user.nginx_swap_usage

user.nginx_vmem_usage

user.nixbld1_swap_usage

user.nixbld1_vmem_usage

user.nixbld10_swap_usage

user.nixbld10_vmem_usage

user.nixbld11_swap_usage

user.nixbld11_vmem_usage

user.nixbld12_swap_usage

user.nixbld12_vmem_usage

user.nscd_swap_usage

user.nscd_vmem_usage

user.postgres_swap_usage

user.postgres_vmem_usage

user.radicale_swap_usage

user.radicale_vmem_usage

user.root_swap_usage

user.root_vmem_usage

user.sshd_swap_usage

user.sshd_vmem_usage

user.systemd-oom_swap_usage

user.systemd-oom_vmem_usage

user.systemd-timesync_swap_usage

user.systemd-timesync_vmem_usage

user.vaultwarden_swap_usage

user.vaultwarden_vmem_usage



DISK


user.63892_disk_physical_io

user.caddy_disk_physical_io

user.gitlab_disk_physical_io

user.jellyfin_disk_physical_io

user.maria_disk_physical_io

user.messagebus_disk_physical_io

user.netdata_disk_physical_io

user.nextcloud_disk_physical_io

user.nginx_disk_physical_io

user.nixbld1_disk_physical_io

user.nixbld10_disk_physical_io

user.nixbld11_disk_physical_io

user.nixbld12_disk_physical_io

user.nscd_disk_physical_io

user.postgres_disk_physical_io

user.radicale_disk_physical_io

user.root_disk_physical_io

user.sshd_disk_physical_io

user.systemd-oom_disk_physical_io

user.systemd-timesync_disk_physical_io

user.vaultwarden_disk_physical_io

user.63892_disk_logical_io

user.caddy_disk_logical_io

user.gitlab_disk_logical_io

user.jellyfin_disk_logical_io

user.maria_disk_logical_io

user.messagebus_disk_logical_io

user.netdata_disk_logical_io

user.nextcloud_disk_logical_io

user.nginx_disk_logical_io

user.nixbld1_disk_logical_io

user.nixbld10_disk_logical_io

user.nixbld11_disk_logical_io

user.nixbld12_disk_logical_io

user.nscd_disk_logical_io

user.postgres_disk_logical_io

user.radicale_disk_logical_io

user.root_disk_logical_io

user.sshd_disk_logical_io

user.systemd-oom_disk_logical_io

user.systemd-timesync_disk_logical_io

user.vaultwarden_disk_logical_io



PROCESSES


user.63892_processes

user.caddy_processes

user.gitlab_processes

user.jellyfin_processes

user.maria_processes

user.messagebus_processes

user.netdata_processes

user.nextcloud_processes

user.nginx_processes

user.nixbld1_processes

user.nixbld10_processes

user.nixbld11_processes

user.nixbld12_processes

user.nscd_processes

user.postgres_processes

user.radicale_processes

user.root_processes

user.sshd_processes

user.systemd-oom_processes

user.systemd-timesync_processes

user.vaultwarden_processes

user.63892_threads

user.caddy_threads

user.gitlab_threads

user.jellyfin_threads

user.maria_threads

user.messagebus_threads

user.netdata_threads

user.nextcloud_threads

user.nginx_threads

user.nixbld1_threads

user.nixbld10_threads

user.nixbld11_threads

user.nixbld12_threads

user.nscd_threads

user.postgres_threads

user.radicale_threads

user.root_threads

user.sshd_threads

user.systemd-oom_threads

user.systemd-timesync_threads

user.vaultwarden_threads



FDS


user.63892_fds_open_limit

user.caddy_fds_open_limit

user.gitlab_fds_open_limit

user.jellyfin_fds_open_limit

user.maria_fds_open_limit

user.messagebus_fds_open_limit

user.netdata_fds_open_limit

user.nextcloud_fds_open_limit

user.nginx_fds_open_limit

user.nixbld1_fds_open_limit

user.nixbld10_fds_open_limit

user.nixbld11_fds_open_limit

user.nixbld12_fds_open_limit

user.nscd_fds_open_limit

user.postgres_fds_open_limit

user.radicale_fds_open_limit

user.root_fds_open_limit

user.sshd_fds_open_limit

user.systemd-oom_fds_open_limit

user.systemd-timesync_fds_open_limit

user.vaultwarden_fds_open_limit

user.63892_fds_open

user.caddy_fds_open

user.gitlab_fds_open

user.jellyfin_fds_open

user.maria_fds_open

user.messagebus_fds_open

user.netdata_fds_open

user.nextcloud_fds_open

user.nginx_fds_open

user.nixbld1_fds_open

user.nixbld10_fds_open

user.nixbld11_fds_open

user.nixbld12_fds_open

user.nscd_fds_open

user.postgres_fds_open

user.radicale_fds_open

user.root_fds_open

user.sshd_fds_open

user.systemd-oom_fds_open

user.systemd-timesync_fds_open

user.vaultwarden_fds_open



UPTIME


user.63892_uptime

user.caddy_uptime

user.gitlab_uptime

user.jellyfin_uptime

user.maria_uptime

user.messagebus_uptime

user.netdata_uptime

user.nextcloud_uptime

user.nginx_uptime

user.nixbld1_uptime

user.nixbld10_uptime

user.nixbld11_uptime

user.nixbld12_uptime

user.nscd_uptime

user.postgres_uptime

user.radicale_uptime

user.root_uptime

user.sshd_uptime

user.systemd-oom_uptime

user.systemd-timesync_uptime

user.vaultwarden_uptime


--------------------------------------------------------------------------------


USERGROUP


CPU


usergroup.63892_cpu_utilization

usergroup.caddy_cpu_utilization

usergroup.gitlab_cpu_utilization

usergroup.jellyfin_cpu_utilization

usergroup.messagebus_cpu_utilization

usergroup.netdata_cpu_utilization

usergroup.nextcloud_cpu_utilization

usergroup.nginx_cpu_utilization

usergroup.nixbld_cpu_utilization

usergroup.nscd_cpu_utilization

usergroup.postgres_cpu_utilization

usergroup.radicale_cpu_utilization

usergroup.root_cpu_utilization

usergroup.sshd_cpu_utilization

usergroup.systemd-oom_cpu_utilization

usergroup.systemd-timesync_cpu_utilization

usergroup.users_cpu_utilization

usergroup.vaultwarden_cpu_utilization

usergroup.63892_cpu_context_switches

usergroup.caddy_cpu_context_switches

usergroup.gitlab_cpu_context_switches

usergroup.jellyfin_cpu_context_switches

usergroup.messagebus_cpu_context_switches

usergroup.netdata_cpu_context_switches

usergroup.nextcloud_cpu_context_switches

usergroup.nginx_cpu_context_switches

usergroup.nixbld_cpu_context_switches

usergroup.nscd_cpu_context_switches

usergroup.postgres_cpu_context_switches

usergroup.radicale_cpu_context_switches

usergroup.root_cpu_context_switches

usergroup.sshd_cpu_context_switches

usergroup.systemd-oom_cpu_context_switches

usergroup.systemd-timesync_cpu_context_switches

usergroup.users_cpu_context_switches

usergroup.vaultwarden_cpu_context_switches



MEM


usergroup.63892_mem_private_usage

usergroup.caddy_mem_private_usage

usergroup.gitlab_mem_private_usage

usergroup.jellyfin_mem_private_usage

usergroup.messagebus_mem_private_usage

usergroup.netdata_mem_private_usage

usergroup.nextcloud_mem_private_usage

usergroup.nginx_mem_private_usage

usergroup.nixbld_mem_private_usage

usergroup.nscd_mem_private_usage

usergroup.postgres_mem_private_usage

usergroup.radicale_mem_private_usage

usergroup.root_mem_private_usage

usergroup.sshd_mem_private_usage

usergroup.systemd-oom_mem_private_usage

usergroup.systemd-timesync_mem_private_usage

usergroup.users_mem_private_usage

usergroup.vaultwarden_mem_private_usage

usergroup.63892_mem_usage

usergroup.caddy_mem_usage

usergroup.gitlab_mem_usage

usergroup.jellyfin_mem_usage

usergroup.messagebus_mem_usage

usergroup.netdata_mem_usage

usergroup.nextcloud_mem_usage

usergroup.nginx_mem_usage

usergroup.nixbld_mem_usage

usergroup.nscd_mem_usage

usergroup.postgres_mem_usage

usergroup.radicale_mem_usage

usergroup.root_mem_usage

usergroup.sshd_mem_usage

usergroup.systemd-oom_mem_usage

usergroup.systemd-timesync_mem_usage

usergroup.users_mem_usage

usergroup.vaultwarden_mem_usage

usergroup.63892_mem_page_faults

usergroup.caddy_mem_page_faults

usergroup.gitlab_mem_page_faults

usergroup.jellyfin_mem_page_faults

usergroup.messagebus_mem_page_faults

usergroup.netdata_mem_page_faults

usergroup.nextcloud_mem_page_faults

usergroup.nginx_mem_page_faults

usergroup.nixbld_mem_page_faults

usergroup.nscd_mem_page_faults

usergroup.postgres_mem_page_faults

usergroup.radicale_mem_page_faults

usergroup.root_mem_page_faults

usergroup.sshd_mem_page_faults

usergroup.systemd-oom_mem_page_faults

usergroup.systemd-timesync_mem_page_faults

usergroup.users_mem_page_faults

usergroup.vaultwarden_mem_page_faults

usergroup.63892_swap_usage

usergroup.63892_vmem_usage

usergroup.caddy_swap_usage

usergroup.caddy_vmem_usage

usergroup.gitlab_swap_usage

usergroup.gitlab_vmem_usage

usergroup.jellyfin_swap_usage

usergroup.jellyfin_vmem_usage

usergroup.messagebus_swap_usage

usergroup.messagebus_vmem_usage

usergroup.netdata_swap_usage

usergroup.netdata_vmem_usage

usergroup.nextcloud_swap_usage

usergroup.nextcloud_vmem_usage

usergroup.nginx_swap_usage

usergroup.nginx_vmem_usage

usergroup.nixbld_swap_usage

usergroup.nixbld_vmem_usage

usergroup.nscd_swap_usage

usergroup.nscd_vmem_usage

usergroup.postgres_swap_usage

usergroup.postgres_vmem_usage

usergroup.radicale_swap_usage

usergroup.radicale_vmem_usage

usergroup.root_swap_usage

usergroup.root_vmem_usage

usergroup.sshd_swap_usage

usergroup.sshd_vmem_usage

usergroup.systemd-oom_swap_usage

usergroup.systemd-oom_vmem_usage

usergroup.systemd-timesync_swap_usage

usergroup.systemd-timesync_vmem_usage

usergroup.users_swap_usage

usergroup.users_vmem_usage

usergroup.vaultwarden_swap_usage

usergroup.vaultwarden_vmem_usage



DISK


usergroup.63892_disk_physical_io

usergroup.caddy_disk_physical_io

usergroup.gitlab_disk_physical_io

usergroup.jellyfin_disk_physical_io

usergroup.messagebus_disk_physical_io

usergroup.netdata_disk_physical_io

usergroup.nextcloud_disk_physical_io

usergroup.nginx_disk_physical_io

usergroup.nixbld_disk_physical_io

usergroup.nscd_disk_physical_io

usergroup.postgres_disk_physical_io

usergroup.radicale_disk_physical_io

usergroup.root_disk_physical_io

usergroup.sshd_disk_physical_io

usergroup.systemd-oom_disk_physical_io

usergroup.systemd-timesync_disk_physical_io

usergroup.users_disk_physical_io

usergroup.vaultwarden_disk_physical_io

usergroup.63892_disk_logical_io

usergroup.caddy_disk_logical_io

usergroup.gitlab_disk_logical_io

usergroup.jellyfin_disk_logical_io

usergroup.messagebus_disk_logical_io

usergroup.netdata_disk_logical_io

usergroup.nextcloud_disk_logical_io

usergroup.nginx_disk_logical_io

usergroup.nixbld_disk_logical_io

usergroup.nscd_disk_logical_io

usergroup.postgres_disk_logical_io

usergroup.radicale_disk_logical_io

usergroup.root_disk_logical_io

usergroup.sshd_disk_logical_io

usergroup.systemd-oom_disk_logical_io

usergroup.systemd-timesync_disk_logical_io

usergroup.users_disk_logical_io

usergroup.vaultwarden_disk_logical_io



PROCESSES


usergroup.63892_processes

usergroup.caddy_processes

usergroup.gitlab_processes

usergroup.jellyfin_processes

usergroup.messagebus_processes

usergroup.netdata_processes

usergroup.nextcloud_processes

usergroup.nginx_processes

usergroup.nixbld_processes

usergroup.nscd_processes

usergroup.postgres_processes

usergroup.radicale_processes

usergroup.root_processes

usergroup.sshd_processes

usergroup.systemd-oom_processes

usergroup.systemd-timesync_processes

usergroup.users_processes

usergroup.vaultwarden_processes

usergroup.63892_threads

usergroup.caddy_threads

usergroup.gitlab_threads

usergroup.jellyfin_threads

usergroup.messagebus_threads

usergroup.netdata_threads

usergroup.nextcloud_threads

usergroup.nginx_threads

usergroup.nixbld_threads

usergroup.nscd_threads

usergroup.postgres_threads

usergroup.radicale_threads

usergroup.root_threads

usergroup.sshd_threads

usergroup.systemd-oom_threads

usergroup.systemd-timesync_threads

usergroup.users_threads

usergroup.vaultwarden_threads



FDS


usergroup.63892_fds_open_limit

usergroup.caddy_fds_open_limit

usergroup.gitlab_fds_open_limit

usergroup.jellyfin_fds_open_limit

usergroup.messagebus_fds_open_limit

usergroup.netdata_fds_open_limit

usergroup.nextcloud_fds_open_limit

usergroup.nginx_fds_open_limit

usergroup.nixbld_fds_open_limit

usergroup.nscd_fds_open_limit

usergroup.postgres_fds_open_limit

usergroup.radicale_fds_open_limit

usergroup.root_fds_open_limit

usergroup.sshd_fds_open_limit

usergroup.systemd-oom_fds_open_limit

usergroup.systemd-timesync_fds_open_limit

usergroup.users_fds_open_limit

usergroup.vaultwarden_fds_open_limit

usergroup.63892_fds_open

usergroup.caddy_fds_open

usergroup.gitlab_fds_open

usergroup.jellyfin_fds_open

usergroup.messagebus_fds_open

usergroup.netdata_fds_open

usergroup.nextcloud_fds_open

usergroup.nginx_fds_open

usergroup.nixbld_fds_open

usergroup.nscd_fds_open

usergroup.postgres_fds_open

usergroup.radicale_fds_open

usergroup.root_fds_open

usergroup.sshd_fds_open

usergroup.systemd-oom_fds_open

usergroup.systemd-timesync_fds_open

usergroup.users_fds_open

usergroup.vaultwarden_fds_open



UPTIME


usergroup.63892_uptime

usergroup.caddy_uptime

usergroup.gitlab_uptime

usergroup.jellyfin_uptime

usergroup.messagebus_uptime

usergroup.netdata_uptime

usergroup.nextcloud_uptime

usergroup.nginx_uptime

usergroup.nixbld_uptime

usergroup.nscd_uptime

usergroup.postgres_uptime

usergroup.radicale_uptime

usergroup.root_uptime

usergroup.sshd_uptime

usergroup.systemd-oom_uptime

usergroup.systemd-timesync_uptime

usergroup.users_uptime

usergroup.vaultwarden_uptime


--------------------------------------------------------------------------------


ANOMALY DETECTION

Charts relating to anomaly detection, increased anomalous dimensions or a higher
than usual anomaly_rate could be signs of some abnormal behaviour. Read our
anomaly detection guide for more details.
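The anomaly rate charts below can also be read programmatically. A minimal, hedged sketch using Netdata's standard `/api/v1/data` endpoint (`format=json` returns a `labels` list and `data` rows); the host and the five-minute window are illustrative assumptions, and the chart id is the one shown on this page:

```python
import json
from urllib.request import urlopen

def mean_anomaly_rate(payload: dict) -> float:
    """Average every non-time column across all rows of a format=json
    Netdata payload shaped like {"labels": [...], "data": [[t, v, ...], ...]}."""
    values = [v for row in payload["data"] for v in row[1:] if v is not None]
    return sum(values) / len(values) if values else 0.0

def fetch_anomaly_rate(host: str = "localhost:19999") -> float:
    # Window of the last 5 minutes; adjust chart id / host to your node.
    url = (f"http://{host}/api/v1/data"
           "?chart=anomaly_detection.anomaly_rate&after=-300&format=json")
    with urlopen(url) as resp:  # needs a reachable Netdata agent
        return mean_anomaly_rate(json.load(resp))
```

A sustained non-zero mean here is the "higher than usual anomaly_rate" signal the guide refers to.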



DIMENSIONS


Total count of dimensions considered anomalous or normal.
anomaly_detection.dimensions_on_5f3cdcb6-4784-11ef-91f1-1c1bb5160fc1



ANOMALY RATE


Percentage of anomalous dimensions.
anomaly_detection.anomaly_rate_on_5f3cdcb6-4784-11ef-91f1-1c1bb5160fc1

anomaly_detection.type_anomaly_rate_on_5f3cdcb6-4784-11ef-91f1-1c1bb5160fc1



ANOMALY DETECTION


Flags (0 or 1) to show when an anomaly event has been triggered by the detector.
anomaly_detection.anomaly_detection_on_5f3cdcb6-4784-11ef-91f1-1c1bb5160fc1

anomaly_detection.ml_running_on_5f3cdcb6-4784-11ef-91f1-1c1bb5160fc1


--------------------------------------------------------------------------------


LOGIND

Keeps track of user logins and sessions by querying the systemd-logind API.



SESSIONS


Local and remote sessions.
logind.sessions


Sessions of each session type.

Graphical - sessions are running under one of X11, Mir, or Wayland. Console -
sessions are usually regular text mode local logins, but depending on how the
system is configured may have an associated GUI. Other - sessions are those that
do not fall into the above categories (such as sessions for cron jobs or systemd
timer units).

logind.sessions_type


Sessions in each session state.

Online - logged in and running in the background. Closing - nominally logged
out, but some processes belonging to it are still around. Active - logged in and
running in the foreground.

logind.sessions_state



USERS



Users in each user state.

Offline - users are not logged in. Closing - users are in the process of logging
out without lingering. Online - users are logged in, but have no active
sessions. Lingering - users are not logged in, but have one or more services
still running. Active - users are logged in, and have at least one active
session.

logind.users_state
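The five user states above are mutually exclusive per user, so the chart is essentially a per-state head count. A hedged illustration of that bookkeeping (not the collector's actual code):

```python
from collections import Counter

# The five logind user states described above.
USER_STATES = ("offline", "closing", "online", "lingering", "active")

def users_state_chart(states):
    """Count users per logind state, zero-filling absent states,
    mirroring the dimensions plotted by logind.users_state."""
    counts = Counter(states)
    return {s: counts.get(s, 0) for s in USER_STATES}
```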


--------------------------------------------------------------------------------


PROMETHEUS CADDY LOCAL


CADDY ADMIN


prometheus_caddy_local.caddy_admin_http_requests_total-code=200-handler=load-method=POST-path=/load

prometheus_caddy_local.caddy_admin_http_requests_total-code=200-handler=metrics-method=GET-path=/metrics



CADDY REVERSE


prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:5232

prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:8000

prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:8080

prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:8222

prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:9000

prometheus_caddy_local.caddy_reverse_proxy_upstreams_healthy-upstream=:19999



PROMHTTP METRIC


prometheus_caddy_local.promhttp_metric_handler_requests_in_flight

prometheus_caddy_local.promhttp_metric_handler_requests_total-code=200

prometheus_caddy_local.promhttp_metric_handler_requests_total-code=500

prometheus_caddy_local.promhttp_metric_handler_requests_total-code=503



GO


prometheus_caddy_local.go_gc_duration_seconds

prometheus_caddy_local.go_gc_duration_seconds_count

prometheus_caddy_local.go_gc_duration_seconds_sum

prometheus_caddy_local.go_goroutines

prometheus_caddy_local.go_memstats_alloc_bytes

prometheus_caddy_local.go_memstats_alloc_bytes_total

prometheus_caddy_local.go_memstats_buck_hash_sys_bytes

prometheus_caddy_local.go_memstats_frees_total

prometheus_caddy_local.go_memstats_gc_sys_bytes

prometheus_caddy_local.go_memstats_heap_alloc_bytes

prometheus_caddy_local.go_memstats_heap_idle_bytes

prometheus_caddy_local.go_memstats_heap_inuse_bytes

prometheus_caddy_local.go_memstats_heap_objects

prometheus_caddy_local.go_memstats_heap_released_bytes

prometheus_caddy_local.go_memstats_heap_sys_bytes

prometheus_caddy_local.go_memstats_last_gc_time_seconds

prometheus_caddy_local.go_memstats_lookups_total

prometheus_caddy_local.go_memstats_mallocs_total

prometheus_caddy_local.go_memstats_mcache_inuse_bytes

prometheus_caddy_local.go_memstats_mcache_sys_bytes

prometheus_caddy_local.go_memstats_mspan_inuse_bytes

prometheus_caddy_local.go_memstats_mspan_sys_bytes

prometheus_caddy_local.go_memstats_next_gc_bytes

prometheus_caddy_local.go_memstats_other_sys_bytes

prometheus_caddy_local.go_memstats_stack_inuse_bytes

prometheus_caddy_local.go_memstats_stack_sys_bytes

prometheus_caddy_local.go_memstats_sys_bytes

prometheus_caddy_local.go_threads



PROCESS


prometheus_caddy_local.process_cpu_seconds_total

prometheus_caddy_local.process_max_fds

prometheus_caddy_local.process_open_fds

prometheus_caddy_local.process_resident_memory_bytes

prometheus_caddy_local.process_start_time_seconds

prometheus_caddy_local.process_virtual_memory_bytes

prometheus_caddy_local.process_virtual_memory_max_bytes


--------------------------------------------------------------------------------


SYSTEMD UNITS SERVICE-UNITS

systemd provides a dependency system between various entities called "units" of
11 different types. Units encapsulate various objects that are relevant for
system boot-up and maintenance. Units may be active (meaning started, bound,
plugged in, depending on the unit type), or inactive (meaning stopped, unbound,
unplugged), as well as in the process of being activated or deactivated, i.e.
between the two states (these states are called activating, deactivating). A
special failed state is available as well, which is very similar to inactive and
is entered when the service failed in some way (process returned error code on
exit, or crashed, an operation timed out, or after too many restarts). For
details, see systemd(1).
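Since a unit is in exactly one of the five states at a time, each `*_service_state` chart below effectively plots a one-hot encoding of the state. A hedged sketch of that mapping (an illustration of the state model, not the collector's implementation):

```python
# The five systemd unit states described in the paragraph above.
UNIT_STATES = ("active", "inactive", "activating", "deactivating", "failed")

def state_flags(state: str) -> dict:
    """One flag per unit state: 1 for the unit's current state, 0 otherwise."""
    if state not in UNIT_STATES:
        raise ValueError(f"unknown unit state: {state}")
    return {s: int(s == state) for s in UNIT_STATES}
```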



SERVICE UNITS


systemdunits_service-units.unit_audit_service_state

systemdunits_service-units.unit_caddy_service_state

systemdunits_service-units.unit_dbus_service_state

systemdunits_service-units.unit_emergency_service_state

systemdunits_service-units.unit_firewall_service_state

systemdunits_service-units.unit_generate-shutdown-ramfs_service_state

systemdunits_service-units.unit_getty@tty1_service_state

systemdunits_service-units.unit_home-manager-maria_service_state

systemdunits_service-units.unit_kmod-static-nodes_service_state

systemdunits_service-units.unit_logrotate-checkconf_service_state

systemdunits_service-units.unit_logrotate_service_state

systemdunits_service-units.unit_mealie_service_state

systemdunits_service-units.unit_modprobe@configfs_service_state

systemdunits_service-units.unit_modprobe@drm_service_state

systemdunits_service-units.unit_modprobe@efi_pstore_service_state

systemdunits_service-units.unit_modprobe@fuse_service_state

systemdunits_service-units.unit_mount-pstore_service_state

systemdunits_service-units.unit_netdata_service_state

systemdunits_service-units.unit_network-local-commands_service_state

systemdunits_service-units.unit_network-setup_service_state

systemdunits_service-units.unit_NetworkManager-dispatcher_service_state

systemdunits_service-units.unit_NetworkManager-wait-online_service_state

systemdunits_service-units.unit_NetworkManager_service_state

systemdunits_service-units.unit_nix-daemon_service_state

systemdunits_service-units.unit_nscd_service_state

systemdunits_service-units.unit_postgresql_service_state

systemdunits_service-units.unit_prepare-kexec_service_state

systemdunits_service-units.unit_radicale_service_state

systemdunits_service-units.unit_reload-systemd-vconsole-setup_service_state

systemdunits_service-units.unit_rescue_service_state

systemdunits_service-units.unit_resolvconf_service_state

systemdunits_service-units.unit_save-hwclock_service_state

systemdunits_service-units.unit_sshd_service_state

systemdunits_service-units.unit_suid-sgid-wrappers_service_state

systemdunits_service-units.unit_systemd-ask-password-console_service_state

systemdunits_service-units.unit_systemd-ask-password-wall_service_state

systemdunits_service-units.unit_systemd-boot-random-seed_service_state

systemdunits_service-units.unit_systemd-fsck-root_service_state

systemdunits_service-units.unit_systemd-fsck@dev-disk-by-uuid-426A-A909_service_state

systemdunits_service-units.unit_systemd-halt_service_state

systemdunits_service-units.unit_systemd-hibernate-clear_service_state

systemdunits_service-units.unit_systemd-hostnamed_service_state

systemdunits_service-units.unit_systemd-journal-catalog-update_service_state

systemdunits_service-units.unit_systemd-journal-flush_service_state

systemdunits_service-units.unit_systemd-journald_service_state

systemdunits_service-units.unit_systemd-kexec_service_state

systemdunits_service-units.unit_systemd-logind_service_state

systemdunits_service-units.unit_systemd-modules-load_service_state

systemdunits_service-units.unit_systemd-oomd_service_state

systemdunits_service-units.unit_systemd-poweroff_service_state

systemdunits_service-units.unit_systemd-pstore_service_state

systemdunits_service-units.unit_systemd-random-seed_service_state

systemdunits_service-units.unit_systemd-reboot_service_state

systemdunits_service-units.unit_systemd-remount-fs_service_state

systemdunits_service-units.unit_systemd-rfkill_service_state

systemdunits_service-units.unit_systemd-sysctl_service_state

systemdunits_service-units.unit_systemd-timesyncd_service_state

systemdunits_service-units.unit_systemd-tmpfiles-clean_service_state

systemdunits_service-units.unit_systemd-tmpfiles-resetup_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup-dev-early_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup-dev_service_state

systemdunits_service-units.unit_systemd-tmpfiles-setup_service_state

systemdunits_service-units.unit_systemd-udev-trigger_service_state

systemdunits_service-units.unit_systemd-udevd_service_state

systemdunits_service-units.unit_systemd-update-done_service_state

systemdunits_service-units.unit_systemd-update-utmp_service_state

systemdunits_service-units.unit_systemd-user-sessions_service_state

systemdunits_service-units.unit_systemd-vconsole-setup_service_state

systemdunits_service-units.unit_vaultwarden_service_state

systemdunits_service-units.unit_wpa_supplicant_service_state


--------------------------------------------------------------------------------


NETDATA MONITORING

Performance metrics for the operation of netdata itself and its plugins.



DBENGINE RETENTION


netdata.dbengine_retention_tier0

netdata.dbengine_retention_tier1

netdata.dbengine_retention_tier2


--------------------------------------------------------------------------------

 * System Overview
   * cpu
   * load
   * disk
   * ram
   * network
   * processes
   * idlejitter
   * interrupts
   * softirqs
   * softnet
   * entropy
   * files
   * uptime
   * clock synchronization
   * ipc semaphores
   * ipc shared memory
 * CPUs
   * cpufreq
   * throttling
   * powercap
 * Memory
   * overview
   * OOM kills
   * zswap
   * swap
   * page faults
   * writeback
   * kernel
   * slab
   * reclaiming
   * cma
   * hugepages
   * deduper (ksm)
   * balloon
   * ecc
   * fragmentation
 * Disks
   * io
   * sda
   * /
   * /boot
   * /dev
   * /dev/hugepages
   * /dev/shm
   * /run
   * /run/keys
   * /run/wrappers
 * Networking Stack
   * tcp
   * sockets
 * IPv4 Networking
   * packets
   * errors
   * broadcast
   * multicast
   * tcp
   * icmp
   * udp
   * udplite
   * ecn
   * fragments
   * raw
 * IPv6 Networking
   * packets
   * errors
   * broadcast6
   * multicast6
   * tcp6
   * icmp6
   * udp6
   * udplite6
   * fragments6
   * raw6
 * Network Interfaces
   * enp2s0
   * wlp1s0
 * Wireless Interfaces
   * wlp1s0
 * Firewall (netfilter)
   * connection tracker
   * netlink
 * systemd caddy
   * cpu
   * mem
   * disk
   * pids
 * systemd dbus
   * cpu
   * mem
   * disk
   * pids
 * systemd mealie
   * cpu
   * mem
   * disk
   * pids
 * systemd netdata
   * cpu
   * mem
   * disk
   * pids
 * systemd networkmanager
   * cpu
   * mem
   * disk
   * pids
 * systemd nix-daemon
   * cpu
   * mem
   * disk
   * pids
 * systemd nscd
   * cpu
   * mem
   * disk
   * pids
 * systemd postgresql
   * cpu
   * mem
   * disk
   * pids
 * systemd radicale
   * cpu
   * mem
   * disk
   * pids
 * systemd sshd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-journald
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-logind
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-oomd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-timesyncd
   * cpu
   * mem
   * disk
   * pids
 * systemd systemd-udevd
   * cpu
   * mem
   * disk
   * pids
 * systemd vaultwarden
   * cpu
   * mem
   * disk
   * pids
 * systemd wpa supplicant
   * cpu
   * mem
   * disk
   * pids
 * app
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * user
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * usergroup
   * cpu
   * mem
   * disk
   * processes
   * fds
   * uptime
 * Anomaly Detection
   * dimensions
   * anomaly rate
   * anomaly detection
 * Logind
   * sessions
   * users
 * prometheus caddy local
   * caddy admin
   * caddy reverse
   * promhttp metric
   * go
   * process
 * systemd units service-units
   * service units
 * Netdata Monitoring
   * dbengine retention
 * Add more charts
 * Add more alarms
 * Every second, Netdata collects 3,240 metrics on maria, presents them in 1,409
   charts and monitors them with 118 alarms.
    
   netdata
   v1.46.1
 * Do you like Netdata?
   Give us a star!
   
   And share the word!



Netdata

Copyright 2020, Netdata, Inc.


Terms and conditions Privacy Policy
Released under GPL v3 or later. Netdata uses third party tools.



XSS PROTECTION

This dashboard is about to render data from server:



To protect your privacy, the dashboard will check all transferred data for
cross-site scripting (XSS).
This is CPU intensive, so your browser might be a bit slower.

If you trust the remote server, you can disable XSS protection.
In this case, any remote dashboard decoration code (javascript) will also run.

If you don't trust the remote server, you should keep the protection on.
The dashboard will run slower and remote dashboard decoration code will not run,
but better be safe than sorry...

Keep protecting me / I don't need this, the server is mine
×

PRINT THIS NETDATA DASHBOARD

netdata dashboards cannot be captured as-is, since we lazy-load and hide all
but the visible charts.
To capture the whole page with all the charts rendered, a new browser window
will pop up and render all the charts at once. The new browser window will
maintain the current pan and zoom settings of the charts, so align the charts
before proceeding.

This process will put some CPU and memory pressure on your browser.
For the netdata server, we will sequentially download all the charts, to avoid
congesting network and server resources.
Please, do not print netdata dashboards on paper!

Print Close
×

IMPORT A NETDATA SNAPSHOT

netdata can export and import dashboard snapshots. Any netdata can import the
snapshot of any other netdata. The snapshots are not uploaded to a server. They
are handled entirely by your web browser, on your computer.

Click here to select the netdata snapshot file to import

Browse for a snapshot file (or drag and drop it here), then click Import to
render it.



Filename | Hostname | Origin URL | Charts Info | Snapshot Info | Time Range | Comments



Snapshot files contain both data and javascript code. Make sure you trust the
files you import! Import Close
×

EXPORT A SNAPSHOT

Please wait while we collect all the dashboard data...

Select the desired resolution of the snapshot. This is the seconds of data per
point.
Filename
Compression
 * Select Compression
 * uncompressed
 * pako.deflate (gzip, binary)
 * pako.deflate.base64 (gzip, ascii)
 * lzstring.uri (LZ, ascii)
 * lzstring.utf16 (LZ, utf16)
 * lzstring.base64 (LZ, ascii)

Comments
 
Select the snapshot resolution. This controls the size of the snapshot file.
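The resolution / size trade-off is simple division: the snapshot stores roughly one point per dimension for every "resolution" seconds of the visible timeframe. A hedged sketch of that arithmetic (illustrative, not the dashboard's actual sizing code):

```python
def snapshot_points(duration_seconds: int, resolution_seconds: int) -> int:
    """Approximate points per dimension in a snapshot: the visible
    timeframe divided by the chosen seconds-per-point resolution."""
    if resolution_seconds <= 0:
        raise ValueError("resolution must be positive")
    return max(1, duration_seconds // resolution_seconds)
```

So a one-hour view at 1 second per point stores sixty times more data than the same view at 60 seconds per point.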

The generated snapshot will include all charts of this dashboard, for the
visible timeframe, so align, pan and zoom the charts as needed. The scroll
position of the dashboard will also be saved. The snapshot will be downloaded
to your computer as a file that can be imported back into any netdata
dashboard (no need to import it back on this server).

Snapshot files include all the information of the dashboard, including the URL
of the origin server, its netdata unique ID, etc. So, if you share the snapshot
file with third parties, they will be able to access the origin server, if this
server is exposed on the internet.
Snapshots are handled entirely by the web browser. The netdata servers are not
aware of them.

Export Cancel
×

NETDATA ALARMS

 * Active
 * All
 * Log

loading...
loading...
loading...
Close
×

NETDATA DASHBOARD OPTIONS

These are browser settings. Each viewer has their own; they do not affect the
operation of your netdata server.
Settings take effect immediately and are saved permanently in the browser's
local storage (except the refresh on focus / always option).
To reset all options (including chart sizes) to their defaults, click here.

 * Performance
 * Synchronization
 * Visual
 * Locale

On FocusAlways
When to refresh the charts?
When set to On Focus, the charts will stop being updated if the page / tab does
not have the focus of the user. When set to Always, the charts will always be
refreshed. Set it to On Focus to lower the CPU requirements of the browser
(and extend the battery of laptops and tablets) when this page does not have
your focus. Set it to Always to work on another window (e.g. to change some
settings) and have the charts auto-refresh in this window.
Non ZeroAll
Which dimensions to show?
When set to Non Zero, dimensions whose values are all zero within the current
view will not be transferred from the netdata server (unless all dimensions of
the chart are zero, in which case this setting does nothing and all dimensions
are transferred and shown). When set to All, all dimensions will always be
shown. Set it to Non Zero to lower the data transferred between netdata and
your browser, lower the CPU requirements of your browser (fewer lines to draw),
and keep the legends focused (fewer entries in them).
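The Non Zero filtering rule described above can be sketched as follows (names and data shape are hypothetical, not netdata's code):

```javascript
// A dimension is hidden when every value in the current view is zero,
// unless *all* dimensions are zero, in which case everything is shown.
function visibleDimensions(dimensions) {
  const nonZero = dimensions.filter(dim => dim.values.some(v => v !== 0));
  // Fall back to showing all dimensions when every one of them is all-zero.
  return nonZero.length > 0 ? nonZero : dimensions;
}
```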
DestroyHide
How to handle hidden charts?
When set to Destroy, charts that are not in the current viewport of the browser
(above or below the visible area of the page) will be destroyed and re-created
if and when they become visible again. When set to Hide, the non-visible charts
will simply be hidden. Set it to Destroy to lower the memory requirements of
your browser. Set it to Hide for faster restoration of charts when scrolling.
AsyncSync
Page scroll handling?
When set to Sync, charts will be examined for their visibility immediately after
scrolling. On slow computers this may impact the smoothness of page scrolling.
To update the page when scrolling ends, set it to Async. Set it to Sync for
immediate chart updates when scrolling. Set it to Async for smoother page
scrolling on slower computers.
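A minimal sketch of the two scroll-handling modes, assuming Sync runs the visibility check on every scroll event while Async debounces it until scrolling pauses (names and delay are illustrative):

```javascript
// Returns a scroll handler. In 'sync' mode the visibility check runs
// immediately on every call; otherwise it is debounced so it only runs
// after `delayMs` of scroll inactivity.
function makeScrollHandler(checkVisibility, mode, delayMs = 100) {
  if (mode === 'sync') return checkVisibility;
  let timer = null;
  return function () {
    clearTimeout(timer);                          // cancel the pending check
    timer = setTimeout(checkVisibility, delayMs); // re-arm it
  };
}
```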

ParallelSequential
Which chart refresh policy to use?
When set to Parallel, visible charts are refreshed in parallel (all queries are
sent to the netdata server at once) and rendered asynchronously. When set to
Sequential, charts are refreshed one after another. Set it to Parallel if your
browser can cope with it (most modern browsers can); set it to Sequential if
you work on an older or slower computer.
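The two refresh policies can be sketched as follows (fetchChart is a hypothetical stand-in for the per-chart query, not a real netdata API):

```javascript
// Parallel policy: all chart queries are in flight at the same time.
async function refreshParallel(charts, fetchChart) {
  return Promise.all(charts.map(chart => fetchChart(chart)));
}

// Sequential policy: each query starts only after the previous completes.
async function refreshSequential(charts, fetchChart) {
  const results = [];
  for (const chart of charts) results.push(await fetchChart(chart));
  return results;
}
```

Parallel minimizes total latency at the cost of concurrent load on the browser and server; sequential spreads the work out.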
ResyncBest Effort
Shall we re-sync chart refreshes?
When set to Resync, the dashboard will attempt to re-synchronize all the charts
so that they are refreshed concurrently. When set to Best Effort, each chart may
be refreshed with a little time difference to the others. Normally, the
dashboard starts refreshing them in parallel, but depending on the speed of your
computer and the network latencies, charts start having a slight time
difference. Setting this to Resync will attempt to re-synchronize the charts on
every update. Setting it to Best Effort may lower the pressure on your browser
and the network.
SyncDon't Sync
Sync hover selection on all charts?
When enabled, a selection on one chart will automatically select the same time
on all other visible charts, and the legends of all visible charts will be
updated to show the selected values. When disabled, only the chart under the
user's pointer will be selected. Enable it to get better insights into the
data. Disable it on a very slow computer that cannot keep up.

RightBelow
Where do you want to see the legend?
Netdata can place the legend in two positions: Below charts (the default) or to
the Right of charts.
Switching this will reload the dashboard.
DarkWhite
Which theme to use?
Netdata comes with two themes: Dark (the default) and White.
Switching this will reload the dashboard.
Help MeNo Help
Do you need help?
Netdata can show help balloons in some areas of the dashboard. If these
balloons bother you, disable them using this switch.
Switching this will reload the dashboard.
PadDon't Pad
Enable data padding when panning and zooming?
When set to Pad the charts will be padded with more data, both before and after
the visible area, thus giving the impression the whole database is loaded. This
padding will happen only after the first pan or zoom operation on the chart
(initially all charts have only the visible data). When set to Don't Pad only
the visible data will be transferred from the netdata server, even after the
first pan and zoom operation.
SmoothRough
Enable Bézier lines on charts?
When set to Smooth, charting libraries that support it will plot smooth curves
instead of simple straight lines to connect the points.
Keep in mind that dygraphs, the main charting library in netdata dashboards,
can only smooth line charts; it cannot smooth area or stacked charts. Setting
this to Rough can lower the CPU resources consumed by your browser.

These settings are applied gradually, as charts are updated. To force them,
refresh the dashboard now.
Scale UnitsFixed Units
Enable auto-scaling of certain units?
When set to Scale Units, displayed values will be scaled dynamically (e.g. 1000
kilobits will be shown as 1 megabit). Netdata can auto-scale these original
units: kilobits/s, kilobytes/s, KB/s, KB, MB, and GB. When set to Fixed Units,
all values will be rendered using the original units maintained by the netdata
server.
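A minimal sketch of such auto-scaling, assuming a factor of 1000 for bit units and 1024 for byte units (the unit tables and names are illustrative, not netdata's actual ones):

```javascript
// Promote a value to the next unit whenever it reaches the ladder's base,
// e.g. 1000 kilobits/s -> 1 megabits/s, 2048 KB -> 2 MB.
function scaleUnits(value, unit) {
  const ladders = {
    'kilobits/s': { names: ['kilobits/s', 'megabits/s', 'gigabits/s'], base: 1000 },
    'KB':         { names: ['KB', 'MB', 'GB', 'TB'],                   base: 1024 },
  };
  const ladder = ladders[unit];
  if (!ladder) return { value, unit }; // unknown unit: leave untouched
  let i = 0;
  while (value >= ladder.base && i < ladder.names.length - 1) {
    value /= ladder.base;
    i++;
  }
  return { value, unit: ladder.names[i] };
}
```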
CelsiusFahrenheit
Which units to use for temperatures?
Set the temperature units of the dashboard.
TimeSeconds
Convert seconds to time?
When set to Time, charts that present seconds will show DDd:HH:MM:SS. When set
to Seconds, the raw number of seconds will be presented.
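The conversion can be sketched as follows (the exact formatting netdata uses may differ; this illustrates the arithmetic):

```javascript
// Convert a raw number of seconds into the DDd:HH:MM:SS form described
// above, e.g. 90061 seconds -> "01d:01:01:01".
function secondsToTime(totalSeconds) {
  const pad = n => String(n).padStart(2, '0');
  const days = Math.floor(totalSeconds / 86400);
  const hours = Math.floor((totalSeconds % 86400) / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = Math.floor(totalSeconds % 60);
  return `${pad(days)}d:${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}
```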

Close
×

UPDATE CHECK

Your netdata version: v1.46.1




New version of netdata available!

Latest version: v1.47.1

Click here for the change log and
click here for directions on updating your netdata installation.

We suggest reviewing the change log for new features you may be interested in,
or important bug fixes you may need.
Keeping your netdata updated is generally a good idea.

--------------------------------------------------------------------------------

For progress reports and key netdata updates: Join the Netdata Community
You can also follow netdata on Twitter, follow netdata on Facebook, or watch
netdata on GitHub.
Check Now Close
×

SIGN IN

Signing in to netdata.cloud will synchronize the list of your netdata monitored
nodes known at registry . This may include server hostnames, URLs and
identification GUIDs.

After you upgrade all your netdata servers, your private registry will not be
needed any more.

Are you sure you want to proceed?

Cancel Sign In
×

DELETE ?

You are about to delete, from your personal list of netdata servers, the
following server:




Are you sure you want to do this?


Keep in mind, this server will be added back if and when you visit it again.


keep it delete it
×

SWITCH NETDATA REGISTRY IDENTITY

You can copy and paste the following ID to all your browsers (e.g. work and
home).
All the browsers with the same ID will identify you, so please don't share this
with others.

Either copy this ID and paste it to another browser, or paste here the ID you
have taken from another browser.
Keep in mind that:
 * when you switch IDs, your previous ID will be lost forever - this is
   irreversible.
 * both IDs (your old one and your new one) must include this netdata in their
   personal lists.
 * both IDs have to be known by the registry: .
 * to get a new ID, just clear your browser cookies.


cancel impersonate
×



Checking known URLs for this server...



Checks may fail if you are viewing an HTTPS page and the server to be checked is
HTTP only.


Close