66.70.205.227
Public Scan
URL:
http://66.70.205.227:19999/
Submission: On July 11 via api from RU — Scanned from CA
Form analysis
5 forms found in the DOM

<form id="optionsForm1" class="form-horizontal">
<div class="form-group">
<table>
<tbody>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="stop_updates_when_focus_is_lost" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
data-on="On Focus" data-off="Always" data-width="110px">
<div class="toggle-group"><label class="btn btn-success toggle-on">On Focus</label><label class="btn btn-danger active toggle-off">Always</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>When to refresh the charts?</strong><br><small>When set to <b>On Focus</b>, the charts will stop being updated if the page / tab does not have the focus of the user. When set to <b>Always</b>, the charts will
always be refreshed. Set it to <b>On Focus</b> it to lower the CPU requirements of the browser (and extend the battery of laptops and tablets) when this page does not have your focus. Set to <b>Always</b> to work on another window (i.e.
change the settings of something) and have the charts auto-refresh in this window.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="eliminate_zero_dimensions" type="checkbox" checked="checked" data-toggle="toggle" data-on="Non Zero" data-off="All" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Non Zero</label><label class="btn btn-default active toggle-off">All</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Which dimensions to show?</strong><br><small>When set to <b>Non Zero</b>, dimensions that have all their values (within the current view) set to zero will not be transferred from the netdata server (except if
all dimensions of the chart are zero, in which case this setting does nothing - all dimensions are transferred and shown). When set to <b>All</b>, all dimensions will always be shown. Set it to <b>Non Zero</b> to lower the data
transferred between netdata and your browser, lower the CPU requirements of your browser (fewer lines to draw) and increase the focus on the legends (fewer entries at the legends).</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="destroy_on_hide" type="checkbox" data-toggle="toggle" data-on="Destroy" data-off="Hide" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Destroy</label><label class="btn btn-default active toggle-off">Hide</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>How to handle hidden charts?</strong><br><small>When set to <b>Destroy</b>, charts that are not in the current viewport of the browser (are above, or below the visible area of the page), will be destroyed and
re-created if and when they become visible again. When set to <b>Hide</b>, the not-visible charts will be just hidden, to simplify the DOM and speed up your browser. Set it to <b>Destroy</b>, to lower the memory requirements of your
browser. Set it to <b>Hide</b> for faster restoration of charts on page scrolling.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="async_on_scroll" type="checkbox" data-toggle="toggle" data-on="Async" data-off="Sync" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Async</label><label class="btn btn-default active toggle-off">Sync</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Page scroll handling?</strong><br><small>When set to <b>Sync</b>, charts will be examined for their visibility immediately after scrolling. On slow computers this may impact the smoothness of page scrolling.
To update the page when scrolling ends, set it to <b>Async</b>. Set it to <b>Sync</b> for immediate chart updates when scrolling. Set it to <b>Async</b> for smoother page scrolling on slower computers.</small></td>
</tr>
</tbody>
</table>
</div>
</form>
<form id="optionsForm2" class="form-horizontal">
<div class="form-group">
<table>
<tbody>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="parallel_refresher" type="checkbox" checked="checked" data-toggle="toggle" data-on="Parallel" data-off="Sequential" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Parallel</label><label class="btn btn-default active toggle-off">Sequential</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Which chart refresh policy to use?</strong><br><small>When set to <b>parallel</b>, visible charts are refreshed in parallel (all queries are sent to netdata server in parallel) and are rendered
asynchronously. When set to <b>sequential</b> charts are refreshed one after another. Set it to parallel if your browser can cope with it (most modern browsers do), set it to sequential if you work on an older/slower computer.</small>
</td>
</tr>
<tr class="option-row" id="concurrent_refreshes_row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="concurrent_refreshes" type="checkbox" checked="checked" data-toggle="toggle" data-on="Resync" data-off="Best Effort"
data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Resync</label><label class="btn btn-default active toggle-off">Best Effort</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Shall we re-sync chart refreshes?</strong><br><small>When set to <b>Resync</b>, the dashboard will attempt to re-synchronize all the charts so that they are refreshed concurrently. When set to
<b>Best Effort</b>, each chart may be refreshed with a little time difference to the others. Normally, the dashboard starts refreshing them in parallel, but depending on the speed of your computer and the network latencies, charts start
having a slight time difference. Setting this to <b>Resync</b> will attempt to re-synchronize the charts on every update. Setting it to <b>Best Effort</b> may lower the pressure on your browser and the network.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="sync_selection" type="checkbox" checked="checked" data-toggle="toggle" data-on="Sync" data-off="Don't Sync" data-onstyle="success"
data-offstyle="danger" data-width="110px">
<div class="toggle-group"><label class="btn btn-success toggle-on">Sync</label><label class="btn btn-danger active toggle-off">Don't Sync</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Sync hover selection on all charts?</strong><br><small>When enabled, a selection on one chart will automatically select the same time on all other visible charts and the legends of all visible charts will be
updated to show the selected values. When disabled, only the chart getting the user's attention will be selected. Enable it to get better insights of the data. Disable it if you are on a very slow computer that cannot actually do
it.</small></td>
</tr>
</tbody>
</table>
</div>
</form>
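The Parallel/Sequential policy described in the form above is an ordinary fan-out decision. A minimal sketch of both policies in Python against a Netdata agent's data endpoint (host and chart names taken from this scan; any reachable agent would do):

    import concurrent.futures
    import urllib.request

    BASE = "http://66.70.205.227:19999/api/v1/data?after=-60&format=json&chart="
    CHARTS = ["system.cpu", "system.load", "system.io", "system.net"]

    def fetch(chart: str) -> bytes:
        """Fetch the last 60 seconds of one chart's data."""
        with urllib.request.urlopen(BASE + chart, timeout=10) as resp:
            return resp.read()

    # Sequential: one request at a time, like the "Sequential" toggle.
    sequential = [fetch(c) for c in CHARTS]

    # Parallel: all requests in flight at once, like the "Parallel" toggle.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        parallel = list(pool.map(fetch, CHARTS))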
<form id="optionsForm3" class="form-horizontal">
<div class="form-group">
<table>
<tbody>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-default off" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="legend_right" type="checkbox" checked="checked" data-toggle="toggle" data-on="Right" data-off="Below" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Right</label><label class="btn btn-default active toggle-off">Below</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Where do you want to see the legend?</strong><br><small>Netdata can place the legend in two positions: <b>Below</b> charts (the default) or to the <b>Right</b> of
charts.<br><b>Switching this will reload the dashboard</b>.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="netdata_theme_control" type="checkbox" checked="checked" data-toggle="toggle" data-offstyle="danger" data-onstyle="success"
data-on="Dark" data-off="White" data-width="110px">
<div class="toggle-group"><label class="btn btn-success toggle-on">Dark</label><label class="btn btn-danger active toggle-off">White</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Which theme to use?</strong><br><small>Netdata comes with two themes: <b>Dark</b> (the default) and <b>White</b>.<br><b>Switching this will reload the dashboard</b>.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="show_help" type="checkbox" checked="checked" data-toggle="toggle" data-on="Help Me" data-off="No Help" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Help Me</label><label class="btn btn-default active toggle-off">No Help</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Do you need help?</strong><br><small>Netdata can show some help in some areas to help you use the dashboard. If all these balloons bother you, disable them using this
switch.<br><b>Switching this will reload the dashboard</b>.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="pan_and_zoom_data_padding" type="checkbox" checked="checked" data-toggle="toggle" data-on="Pad" data-off="Don't Pad"
data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Pad</label><label class="btn btn-default active toggle-off">Don't Pad</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Enable data padding when panning and zooming?</strong><br><small>When set to <b>Pad</b> the charts will be padded with more data, both before and after the visible area, thus giving the impression the whole
database is loaded. This padding will happen only after the first pan or zoom operation on the chart (initially all charts have only the visible data). When set to <b>Don't Pad</b> only the visible data will be transfered from the
netdata server, even after the first pan and zoom operation.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="smooth_plot" type="checkbox" checked="checked" data-toggle="toggle" data-on="Smooth" data-off="Rough" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Smooth</label><label class="btn btn-default active toggle-off">Rough</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Enable Bézier lines on charts?</strong><br><small>When set to <b>Smooth</b> the charts libraries that support it, will plot smooth curves instead of simple straight lines to connect the points.<br>Keep in
mind <a href="http://dygraphs.com" target="_blank">dygraphs</a>, the main charting library in netdata dashboards, can only smooth line charts. It cannot smooth area or stacked charts. When set to <b>Rough</b>, this setting can lower the
CPU resources consumed by your browser.</small></td>
</tr>
</tbody>
</table>
</div>
</form>
<form id="optionsForm4" class="form-horizontal">
<div class="form-group">
<table>
<tbody>
<tr class="option-row">
<td colspan="2" align="center"><small><b>These settings are applied gradually, as charts are updated. To force them, refresh the dashboard now</b>.</small></td>
</tr>
<tr class="option-row">
<td class="option-control">
<div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="units_conversion" type="checkbox" checked="checked" data-toggle="toggle" data-on="Scale Units" data-off="Fixed Units"
data-onstyle="success" data-width="110px">
<div class="toggle-group"><label class="btn btn-success toggle-on">Scale Units</label><label class="btn btn-default active toggle-off">Fixed Units</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Enable auto-scaling of select units?</strong><br><small>When set to <b>Scale Units</b> the values shown will dynamically be scaled (e.g. 1000 kilobits will be shown as 1 megabit). Netdata can auto-scale these
original units: <code>kilobits/s</code>, <code>kilobytes/s</code>, <code>KB/s</code>, <code>KB</code>, <code>MB</code>, and <code>GB</code>. When set to <b>Fixed Units</b> all the values will be rendered using the original units
maintained by the netdata server.</small></td>
</tr>
<tr id="settingsLocaleTempRow" class="option-row">
<td class="option-control">
<div class="toggle btn btn-primary" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="units_temp" type="checkbox" checked="checked" data-toggle="toggle" data-on="Celsius" data-off="Fahrenheit" data-width="110px">
<div class="toggle-group"><label class="btn btn-primary toggle-on">Celsius</label><label class="btn btn-default active toggle-off">Fahrenheit</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Which units to use for temperatures?</strong><br><small>Set the temperature units of the dashboard.</small></td>
</tr>
<tr id="settingsLocaleTimeRow" class="option-row">
<td class="option-control">
<div class="toggle btn btn-success" data-toggle="toggle" style="width: 110px; height: 0px;"><input id="seconds_as_time" type="checkbox" checked="checked" data-toggle="toggle" data-on="Time" data-off="Seconds" data-onstyle="success"
data-width="110px">
<div class="toggle-group"><label class="btn btn-success toggle-on">Time</label><label class="btn btn-default active toggle-off">Seconds</label><span class="toggle-handle btn btn-default"></span></div>
</div>
</td>
<td class="option-info"><strong>Convert seconds to time?</strong><br><small>When set to <b>Time</b>, charts that present <code>seconds</code> will show <code>DDd:HH:MM:SS</code>. When set to <b>Seconds</b>, the raw number of seconds will be
presented.</small></td>
</tr>
</tbody>
</table>
</div>
</form>
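A minimal sketch of the auto-scaling behaviour the Scale Units toggle describes, using the 1000-based step from the kilobits example above (an illustration, not the dashboard's actual code):

    def scale_units(value: float, unit: str = "kilobits/s") -> str:
        """Scale a value through unit prefixes, e.g. 1000 kilobits/s -> 1 megabits/s."""
        prefixes = ["kilo", "mega", "giga", "tera"]
        base = unit.replace("kilo", "")          # "bits/s"
        i = 0
        while value >= 1000 and i < len(prefixes) - 1:
            value /= 1000
            i += 1
        return f"{value:g} {prefixes[i]}{base}"

    print(scale_units(1000))       # 1 megabits/s
    print(scale_units(2_500_000))  # 2.5 gigabits/s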
<form action="#"><input class="form-control" id="switchRegistryPersonGUID" placeholder="your personal ID" maxlength="36" autocomplete="off" style="text-align:center;font-size:1.4em"></form>
Text Content
netdata Real-time performance monitoring, done right!
6bcef270ce26 UTC -7 2024-07-11 00:27 to 00:34 (last 7min)

NETDATA REAL-TIME PERFORMANCE MONITORING, IN THE GREATEST POSSIBLE DETAIL
Drag charts to pan. Shift + wheel on them, to zoom in and out. Double-click on them, to reset. Hover on them too!

SYSTEM OVERVIEW
Overview of the key system metrics. [Gauges: Disk Read 20.0 MiB/s, Disk Write 0.00 MiB/s, CPU 3.3%, Net Inbound 1.09 megabits/s, Net Outbound 113.2 megabits/s, Used RAM 44.2%]

CPU
Total CPU utilization (all cores). 100% here means there is no CPU idle time at all. You can get per-core usage at the CPUs section and per-application usage at the Applications Monitoring section. Keep an eye on iowait (1.40%): if it is constantly high, your disks are a bottleneck and they slow your system down. An important metric worth monitoring is softirq (0.09%); a constantly high percentage of softirq may indicate network driver issues. The individual metrics can be found in the kernel documentation.
[Chart: Total CPU utilization (system.cpu), dimensions softirq, user, system, nice, iowait (percentage). At Thu, Jul 11, 2024 00:34:07: softirq 0.3, user 0.4, system 1.0, nice 0.1, iowait 1.6]

LOAD
Current system load, i.e. the number of processes using CPU or waiting for system resources (usually CPU and disk). The 3 metrics refer to 1, 5 and 15 minute averages. The system calculates this once every 5 seconds. For more information check this wikipedia article.
[Chart: System Load Average (system.load), dimensions load1, load5, load15. At Thu, Jul 11, 2024 00:34:00: load1 2.67, load5 2.25, load15 2.10]

DISK
Total Disk I/O, for all physical disks. You can get detailed information about each disk at the Disks section and per-application disk usage at the Applications Monitoring section. Physical are all the disks that are listed in /sys/block, but do not exist in /sys/devices/virtual/block.
[Chart: Disk I/O (system.io), dimensions in, out (MiB/s). At Thu, Jul 11, 2024 00:34:08: in 20.0, out 0.0]
Memory paged from/to disk. This is usually the total disk I/O of the system. system.pgpgio

RAM
System Random Access Memory (i.e. physical memory) usage. system.ram

NETWORK
Total bandwidth of all physical network interfaces. This does not include lo, VPNs, network bridges, IFB devices, bond interfaces, etc. Only the bandwidth of physical network interfaces is aggregated. Physical are all the network interfaces that are listed in /proc/net/dev, but do not exist in /sys/devices/virtual/net. system.net
Total IP traffic in the system. system.ip

PROCESSES
System processes. Running - running or ready to run (runnable). Blocked - currently blocked, waiting for I/O to complete. system.processes
The number of new processes created. system.forks
The total number of processes in the system. system.active_processes
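Every chart ID listed above (system.cpu, system.load, system.io, system.net, and so on) can also be queried programmatically: a Netdata agent serves a REST API on the same port as the dashboard. A minimal Python sketch, assuming the scanned host is still reachable and exposes the standard /api/v1/data endpoint:

    import json
    import urllib.request

    BASE = "http://66.70.205.227:19999"  # the scanned Netdata agent

    def fetch_chart(chart: str, seconds: int = 60) -> dict:
        """Fetch the last `seconds` of data for a chart via Netdata's v1 API."""
        url = f"{BASE}/api/v1/data?chart={chart}&after=-{seconds}&format=json"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    data = fetch_chart("system.cpu")
    print(data["labels"])   # e.g. ['time', 'softirq', 'user', 'system', ...]
    print(data["data"][0])  # most recent row of values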
Context switching is the switching of the CPU from one process, task or thread to another. If there are many processes or threads willing to execute and very few CPU cores available to handle them, the system does more context switching to balance the CPU resources among them. The whole process is computationally intensive: the more context switches, the slower the system gets. system.ctxt

IDLEJITTER
Idle jitter is calculated by netdata. A thread is spawned that requests to sleep for a few microseconds. When the system wakes it up, it measures how many microseconds have passed. The difference between the requested and the actual duration of the sleep is the idle jitter. This number is useful in real-time environments, where CPU jitter can affect the quality of the service (like VoIP media gateways). system.idlejitter
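The idle jitter measurement described above is straightforward to reproduce: ask for a short sleep and compare the requested with the actual duration. A minimal sketch of the same idea (the 20 ms interval is an arbitrary choice for illustration, not necessarily Netdata's internal value):

    import time

    def idle_jitter(requested_us: int = 20_000, samples: int = 50) -> float:
        """Return the average difference (microseconds) between requested and actual sleep."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            time.sleep(requested_us / 1_000_000)
            elapsed_us = (time.perf_counter() - start) * 1_000_000
            total += elapsed_us - requested_us
        return total / samples

    print(f"average idle jitter: {idle_jitter():.1f} us")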
INTERRUPTS
Interrupts are signals sent to the CPU by external devices (normally I/O devices) or programs (running processes). They tell the CPU to stop its current activities and execute the appropriate part of the operating system. Interrupt types are hardware (generated by hardware devices to signal that they need some attention from the OS), software (generated by programs when they want to request a system call to be performed by the operating system), and traps (generated by the CPU itself to indicate that some error or condition occurred for which assistance from the operating system is needed).
Total number of CPU interrupts. Check system.interrupts, which gives more detail about each interrupt, and also the CPUs section, where interrupts are analyzed per CPU core. system.intr
CPU interrupts in detail. At the CPUs section, interrupts are analyzed per CPU core. The last column in /proc/interrupts provides an interrupt description or the device name that registered the handler for that interrupt. system.interrupts

SOFTIRQS
Software interrupts (or "softirqs") are one of the oldest deferred-execution mechanisms in the kernel. Several tasks among those executed by the kernel are not critical: they can be deferred for a long period of time, if necessary. The deferrable tasks can execute with all interrupts enabled (softirqs are patterned after hardware interrupts). Taking them out of the interrupt handler helps keep kernel response time small.
Total number of software interrupts in the system. At the CPUs section, softirqs are analyzed per CPU core. HI - high priority tasklets. TIMER - tasklets related to timer interrupts. NET_TX, NET_RX - used for network transmit and receive processing. BLOCK - handles block I/O completion events. IRQ_POLL - used by the IO subsystem to increase performance (a NAPI-like approach for block devices). TASKLET - handles regular tasklets. SCHED - used by the scheduler to perform load-balancing and other scheduling tasks. HRTIMER - used for high-resolution timers. RCU - performs read-copy-update (RCU) processing. system.softirqs

SOFTNET
Statistics for per-CPU SoftIRQs related to network receive work. A breakdown per CPU core can be found at CPU / softnet statistics. More information about identifying and troubleshooting network driver related issues can be found at the Red Hat Enterprise Linux Network Performance Tuning Guide. Processed - packets processed. Dropped - packets dropped because the network device backlog was full. Squeezed - number of times the network device budget was consumed or the time limit was reached, but more work was available. ReceivedRPS - number of times this CPU has been woken up to process packets via an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit has been reached (flow limiting is an optional Receive Packet Steering feature). system.softnet_stat

ENTROPY
Entropy is a pool of random numbers (/dev/random) that is mainly used in cryptography. If the pool of entropy gets empty, processes requiring random numbers may run a lot slower (it depends on the interface each program uses), waiting for the pool to be replenished. Ideally a system with high entropy demands should have a hardware device for that purpose (a TPM is one such device). There are also several software-only options you may install, like haveged, although these are generally useful only in servers. system.entropy

UPTIME
The amount of time the system has been running, including time spent in suspend. system.uptime

CLOCK SYNCHRONIZATION
NTP lets you automatically sync your system time with a remote server. This keeps your machine's time accurate by syncing with servers that are known to have accurate times.
The system clock synchronization state. It is strongly recommended to keep the clock in sync with reliable NTP servers; otherwise, it leads to unpredictable problems. It can take several minutes (usually up to 17) before the NTP daemon selects a server to synchronize with. State map: 0 - not synchronized, 1 - synchronized. system.clock_sync_state
A typical NTP client regularly polls one or more NTP servers. The client must compute its time offset and round-trip delay. Time offset is the difference in absolute time between the two clocks. system.clock_sync_offset

IPC SEMAPHORES
System V semaphores are an inter-process communication (IPC) mechanism. They allow processes or threads within a process to synchronize their actions. They are often used to monitor and control the availability of system resources such as shared memory segments. For details, see svipc(7). To see the host IPC semaphore information, run ipcs -us. For limits, run ipcs -ls.
Number of allocated System V IPC semaphores. The system-wide limit on the number of semaphores in all semaphore sets is specified in the /proc/sys/kernel/sem file (2nd field). system.ipc_semaphores
Number of used System V IPC semaphore arrays (sets). Semaphores support semaphore sets where each one is a counting semaphore, so when an application requests semaphores, the kernel releases them in sets. The system-wide limit on the maximum number of semaphore sets is specified in the /proc/sys/kernel/sem file (4th field). system.ipc_semaphore_arrays

IPC SHARED MEMORY
System V shared memory is an inter-process communication (IPC) mechanism. It allows processes to communicate information by sharing a region of memory. It is the fastest form of inter-process communication available since no kernel involvement occurs when data is passed between the processes (no copying). Typically, processes must synchronize their access to a shared memory object, using, for example, POSIX semaphores. For details, see svipc(7). To see the host IPC shared memory information, run ipcs -um. For limits, run ipcs -lm.
Number of allocated System V IPC memory segments. The system-wide maximum number of shared memory segments that can be created is specified in the /proc/sys/kernel/shmmni file. system.shared_memory_segments
Amount of memory currently used by System V IPC memory segments. The run-time limit on the maximum shared memory segment size that can be created is specified in the /proc/sys/kernel/shmmax file. system.shared_memory_bytes
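The /proc files referenced above can be read directly. A minimal sketch that prints the entropy pool size and the System V semaphore limits, using the field positions given in the text (2nd and 4th fields of /proc/sys/kernel/sem):

    from pathlib import Path

    entropy = int(Path("/proc/sys/kernel/random/entropy_avail").read_text())
    print(f"available entropy: {entropy} bits")

    # /proc/sys/kernel/sem holds: SEMMSL SEMMNS SEMOPM SEMMNI
    fields = Path("/proc/sys/kernel/sem").read_text().split()
    print(f"system-wide semaphore limit (SEMMNS): {fields[1]}")
    print(f"maximum number of semaphore sets (SEMMNI): {fields[3]}")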
--------------------------------------------------------------------------------

CPUS
Detailed information for each CPU of the system. A summary of the system for all CPUs can be found at the System Overview section.

UTILIZATION
[Charts: cpu.cpu0 through cpu.cpu47]

INTERRUPTS
Total number of interrupts per CPU. To see the total number for the system, check the interrupts section. The last column in /proc/interrupts provides an interrupt description or the device name that registered the handler for that interrupt.
[Charts: cpu.cpu0_interrupts through cpu.cpu47_interrupts]

SOFTIRQS
Total number of software interrupts per CPU. To see the total number for the system, check the softirqs section.
[Charts: cpu.cpu0_softirqs through cpu.cpu47_softirqs]
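As the INTERRUPTS subsection notes, /proc/interrupts carries one counter column per CPU core plus a trailing description. A minimal sketch that totals the counters per IRQ line:

    def interrupt_totals(path: str = "/proc/interrupts") -> dict:
        """Sum per-CPU interrupt counters for each IRQ line."""
        totals = {}
        with open(path) as f:
            cpus = len(f.readline().split())  # header row: CPU0 CPU1 ...
            for line in f:
                parts = line.split()
                irq = parts[0].rstrip(":")
                counts = [int(p) for p in parts[1:1 + cpus] if p.isdigit()]
                totals[irq] = sum(counts)
        return totals

    # Print the five busiest IRQ lines.
    for irq, total in sorted(interrupt_totals().items(), key=lambda kv: -kv[1])[:5]:
        print(irq, total)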
SOFTNET
Statistics for per-CPU SoftIRQs related to network receive work. The total for all CPU cores can be found at System / softnet statistics. More information about identifying and troubleshooting network driver related issues can be found at the Red Hat Enterprise Linux Network Performance Tuning Guide. Processed - packets processed. Dropped - packets dropped because the network device backlog was full. Squeezed - number of times the network device budget was consumed or the time limit was reached, but more work was available. ReceivedRPS - number of times this CPU has been woken up to process packets via an Inter-processor Interrupt. FlowLimitCount - number of times the flow limit has been reached (flow limiting is an optional Receive Packet Steering feature).
[Charts: cpu.cpu0_softnet_stat through cpu.cpu47_softnet_stat]

THROTTLING
CPU throttling is commonly used to automatically slow down the computer when possible to use less energy and conserve battery.
The number of adjustments made to the clock speed of the CPU based on its core temperature. cpu.core_throttling

CPUFREQ
The frequency measures the number of cycles your CPU executes per second. cpu.cpufreq

CPUIDLE
Idle States (C-states) are used to save power when the processor is idle.
The percentage of time spent in C-states. [Charts: cpu.cpu0_cpuidle through cpu.cpu47_cpuidle]

--------------------------------------------------------------------------------

MEMORY
Detailed information about the memory management of the system.

SYSTEM
Available Memory is estimated by the kernel as the amount of RAM that can be used by userspace processes without causing swapping. mem.available
The number of processes killed by the Out of Memory Killer. The kernel's OOM killer is summoned when the system runs short of free memory and is unable to proceed without killing one or more processes. It tries to pick the process whose demise will free the most memory while causing the least misery for users of the system. This counter also includes processes within containers that have exceeded the memory limit. mem.oom_kill
Committed Memory is the sum of all memory which has been allocated by processes. mem.committed
A page fault is a type of interrupt, called a trap, raised by computer hardware when a running program accesses a memory page that is mapped into the virtual address space, but not actually loaded into main memory. Minor - the page is loaded in memory at the time the fault is generated, but is not marked in the memory management unit as being loaded in memory. Major - generated when the system needs to load the memory page from disk or swap memory. mem.pgfaults

KERNEL
Dirty is the amount of memory waiting to be written to disk. Writeback is how much memory is actively being written to disk. mem.writeback
The total amount of memory being used by the kernel. Slab - used by the kernel to cache data structures for its own use. KernelStack - allocated for each task done by the kernel. PageTables - dedicated to the lowest level of page tables (a page table is used to turn a virtual address into a physical memory address). VmallocUsed - being used as virtual address space. Percpu - allocated to the per-CPU allocator used to back per-CPU allocations (excludes the cost of metadata). When you create a per-CPU variable, each processor on the system gets its own copy of that variable. mem.kernel

SLAB
Slab memory statistics. Reclaimable - amount of memory which the kernel can reuse. Unreclaimable - cannot be reused even when the kernel is lacking memory. mem.slab
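The kernel exposes the values behind mem.available, mem.writeback and mem.slab in /proc/meminfo. A minimal parsing sketch:

    def meminfo() -> dict:
        """Parse /proc/meminfo into a {field: kibibytes} mapping."""
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                values[key] = int(rest.split()[0])  # values are reported in kB
        return values

    m = meminfo()
    for key in ("MemAvailable", "Dirty", "Writeback", "Slab", "SReclaimable"):
        print(f"{key}: {m[key] / 1024:.1f} MiB")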
HUGEPAGES
Hugepages is a feature that allows the kernel to utilize the multiple page size capabilities of modern hardware architectures. The kernel creates multiple pages of virtual memory, mapped from both physical RAM and swap. There is a mechanism in the CPU architecture called "Translation Lookaside Buffers" (TLB) to manage the mapping of virtual memory pages to actual physical memory addresses. The TLB is a limited hardware resource, so utilizing a large amount of physical memory with the default page size consumes the TLB and adds processing overhead. By utilizing Huge Pages, the kernel is able to create pages of much larger sizes, each page consuming a single resource in the TLB. Huge Pages are pinned to physical RAM and cannot be swapped/paged out.
Transparent HugePages (THP) is backing virtual memory with huge pages, supporting automatic promotion and demotion of page sizes. It works for all applications for anonymous memory mappings and tmpfs/shmem. mem.transparent_hugepages

NUMA
Non-Uniform Memory Access (NUMA) is a hierarchical memory design in which memory access time depends on locality. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The individual metrics are described in the Linux kernel documentation.
NUMA balancing statistics. Local - pages successfully allocated on this node, by a process on this node. Foreign - pages initially intended for this node that were allocated to another node instead. Interleave - interleave policy pages successfully allocated to this node. Other - pages allocated on this node, by a process on another node. PteUpdates - base pages that were marked for NUMA hinting faults. HugePteUpdates - transparent huge pages that were marked for NUMA hinting faults. In combination with PteUpdates the total address space that was marked can be calculated. HintFaults - NUMA hinting faults that were trapped. HintFaultsLocal - hinting faults that were to local nodes. In combination with HintFaults, the percentage of local versus remote faults can be calculated. A high percentage of local hinting faults indicates that the workload is closer to being converged. PagesMigrated - pages were migrated because they were misplaced. As migration is a copying operation, it contributes the largest part of the overhead created by NUMA balancing. mem.numa mem.node0 mem.node1

--------------------------------------------------------------------------------

DISKS
Charts with performance information for all the system disks. Special care has been given to present disk performance metrics in a way compatible with iostat -x. netdata by default prevents rendering performance charts for individual partitions and unmounted virtual disks. Disabled charts can still be enabled by configuring the relevant settings in the netdata configuration file.
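On Linux, the counters behind these disk charts come from /proc/diskstats: after the three device-identifier columns, fields 4 and 8 are completed reads and writes, and fields 6 and 10 are sectors read and written (512-byte units). A minimal sketch:

    def disk_counters(path: str = "/proc/diskstats") -> dict:
        """Read cumulative read/write counters per block device."""
        stats = {}
        with open(path) as f:
            for line in f:
                p = line.split()
                stats[p[2]] = {
                    "reads": int(p[3]),                    # completed reads
                    "read_mib": int(p[5]) * 512 / 2**20,   # sectors are 512 bytes
                    "writes": int(p[7]),                   # completed writes
                    "write_mib": int(p[9]) * 512 / 2**20,
                }
        return stats

    for name, s in disk_counters().items():
        if name.startswith("sd") and not name[-1].isdigit():  # whole disks only
            print(name, s)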
disk.md2 The amount of discarded data that are no longer in use by a mounted file system. disk_ext.md2 Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.md2 The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.md2 The average I/O operation size. disk_avgsz.md2 The average discard operation size. disk_ext_avgsz.md2 SDA disk.sda disk.sda disk_util.sda The amount of data transferred to and from disk. disk.sda The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sda Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sda The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. 
Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sda Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sda Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sda Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sda The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sda The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sda The average I/O operation size. disk_avgsz.sda The average discard operation size. disk_ext_avgsz.sda The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sda The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sda The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sda The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sda The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sda SDAA disk.sdaa disk.sdaa disk_util.sdaa The amount of data transferred to and from disk. disk.sdaa The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdaa Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdaa The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. 
Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdaa Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdaa Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdaa Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdaa The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdaa The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdaa The average I/O operation size. disk_avgsz.sdaa The average discard operation size. disk_ext_avgsz.sdaa The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdaa The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdaa The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdaa The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdaa The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdaa SDAB disk.sdab disk.sdab disk_util.sdab The amount of data transferred to and from disk. disk.sdab The amount of discarded data that are no longer in use by a mounted file system. 
disk_ext.sdab Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdab The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdab Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdab Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdab Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdab The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdab The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdab The average I/O operation size. disk_avgsz.sdab The average discard operation size. disk_ext_avgsz.sdab The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdab The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdab The number of merged discard disk operations. 
Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdab The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdab The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdab SDAC disk.sdac disk.sdac disk_util.sdac The amount of data transferred to and from disk. disk.sdac The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdac Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdac The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdac Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdac Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdac Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdac The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdac The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. 
SDAD

The amount of data transferred to and from disk. disk.sdad
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdad
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdad
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdad
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdad
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdad
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdad
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdad
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdad
The average I/O operation size. disk_avgsz.sdad
The average discard operation size. disk_ext_avgsz.sdad
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdad
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdad
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdad
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdad
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdad
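The average-wait (await) charts above divide the time requests spent on the device by the number of requests completed over the interval, which is why both queueing and servicing are included. A sketch of that arithmetic, under the same /proc/diskstats assumptions as before:

# await: mean milliseconds per completed request over one interval,
# including queue time. The *_d arguments are deltas between two
# samples of read_ms/write_ms and reads_completed/writes_completed.
def await_ms(read_ms_d, write_ms_d, reads_d, writes_d):
    ops = reads_d + writes_d
    return (read_ms_d + write_ms_d) / ops if ops else 0.0

# 180 ms spent on reads + 420 ms on writes over 50 completed ops -> 12 ms.
print(await_ms(180, 420, 20, 30))  # 12.0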
SDAE

The amount of data transferred to and from disk. disk.sdae
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdae
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdae
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdae
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdae
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdae
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdae
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdae
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdae
The average I/O operation size. disk_avgsz.sdae
The average discard operation size. disk_ext_avgsz.sdae
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdae
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdae
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdae
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdae
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdae
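The average-size charts above are a simple ratio of sectors transferred to operations completed. A minimal sketch, assuming /proc/diskstats sector counts, which are in fixed 512-byte units regardless of the device's physical sector size:

# Average I/O size over one interval: sectors moved per completed op.
def avg_io_size_kib(sectors_delta, ops_delta):
    return (sectors_delta * 512) / ops_delta / 1024 if ops_delta else 0.0

# 2048 sectors (1 MiB) over 64 operations -> 16 KiB per operation.
print(avg_io_size_kib(2048, 64))  # 16.0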
SDAF

The amount of data transferred to and from disk. disk.sdaf
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdaf
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdaf
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdaf
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdaf
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdaf
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdaf
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdaf
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdaf
The average I/O operation size. disk_avgsz.sdaf
The average discard operation size. disk_ext_avgsz.sdaf
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdaf
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdaf
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdaf
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdaf
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdaf
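The service-time (svctm) chart above is busy time divided by completed operations, and the parallelism caveat in its description can be made concrete with a small worked example:

# svctm: busy time per completed operation. With one request in flight
# at a time this is the true mean service time; a disk serving
# requests in parallel accumulates less busy time than the sum of
# request durations, so the figure becomes misleadingly low.
def svctm_ms(busy_ms_delta, ops_delta):
    return busy_ms_delta / ops_delta if ops_delta else 0.0

# 10 requests of 8 ms each, executed strictly one after another:
print(svctm_ms(80, 10))   # 8.0 ms, accurate
# The same 10 requests executed two at a time (disk busy only 40 ms):
print(svctm_ms(40, 10))   # 4.0 ms, understates the real service time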
SDAG

The amount of data transferred to and from disk. disk.sdag
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdag
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdag
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdag
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdag
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdag
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdag
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdag
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdag
The average I/O operation size. disk_avgsz.sdag
The average discard operation size. disk_ext_avgsz.sdag
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdag
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdag
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdag
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdag
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdag
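The backlog chart above tracks the rate of change of the kernel's weighted I/O time. A sketch of the idea, assuming the weighted_io_ms counter (field 14 of /proc/diskstats) from the first sketch:

# Backlog: delta of the weighted I/O time counter. The kernel adds
# (ms since last update) * (requests in flight) on every I/O event,
# so the counter grows faster the deeper the queue is.
def backlog_ms(weighted_prev, weighted_now):
    return weighted_now - weighted_prev

# 3 requests pending for a full 1 s interval contribute ~3000 ms:
print(backlog_ms(50_000, 53_000))  # 3000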
SDAH

The amount of data transferred to and from disk. disk.sdah
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdah
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdah
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdah
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdah
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdah
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdah
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdah
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdah
The average I/O operation size. disk_avgsz.sdah
The average discard operation size. disk_ext_avgsz.sdah
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdah
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdah
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdah
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdah
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdah
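The merged-operations charts above reflect the block layer coalescing adjacent requests before they reach the device. A toy illustration of the merging rule (not the kernel's actual elevator code):

# Toy request merging: adjacent requests are coalesced before reaching
# the disk, which is why "operations requested" can exceed
# "operations completed".
def merge_adjacent(requests):
    """requests: list of (offset, size) in bytes, sorted by offset."""
    merged = []
    for off, size in requests:
        if merged and merged[-1][0] + merged[-1][1] == off:
            merged[-1] = (merged[-1][0], merged[-1][1] + size)  # extend
        else:
            merged.append((off, size))
    return merged

# Two adjacent 4 KiB reads become a single 8 KiB read:
print(merge_adjacent([(0, 4096), (4096, 4096)]))  # [(0, 8192)]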
SDAI

The amount of data transferred to and from disk. disk.sdai
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdai
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdai
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdai
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdai
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdai
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdai
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdai
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdai
The average I/O operation size. disk_avgsz.sdai
The average discard operation size. disk_ext_avgsz.sdai
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdai
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdai
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdai
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdai
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdai
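The disk_ext_* charts above depend on newer kernel counters: discard fields were appended to /proc/diskstats around kernel 4.18 and flush fields around 5.5 (versions stated as an assumption here; older kernels simply emit fewer columns). A sketch of reading them:

# Extended counters behind the disk_ext_* charts. Column presence
# depends on kernel version, so check the field count.
def read_ext_stats(line):
    values = [int(v) for v in line.split()[3:]]
    ext = {}
    if len(values) >= 15:          # discard counters present
        ext.update(discards_completed=values[11],
                   discards_merged=values[12],
                   sectors_discarded=values[13],
                   discard_ms=values[14])
    if len(values) >= 17:          # flush counters present
        ext.update(flushes_completed=values[15], flush_ms=values[16])
    return ext

with open("/proc/diskstats") as fh:
    for line in fh:
        if line.split()[2] == "sdai":
            print(read_ext_stats(line))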
SDAJ

The amount of data transferred to and from disk. disk.sdaj
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdaj
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdaj
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdaj
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdaj
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdaj
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdaj
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdaj
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdaj
The average I/O operation size. disk_avgsz.sdaj
The average discard operation size. disk_ext_avgsz.sdaj
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdaj
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdaj
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdaj
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdaj
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdaj
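The iotime charts above sum per-request durations, so overlapping requests each contribute their full duration; this is why the value can exceed the sampling interval. A small worked example:

# Why disk_iotime can exceed the sampling interval.
interval_ms = 1000
# Two requests that ran concurrently, each taking 800 ms:
request_durations = [800, 800]
iotime_ms = sum(request_durations)
print(iotime_ms, ">", interval_ms)   # 1600 > 1000
# The busy-time counter, by contrast, cannot exceed the interval,
# because overlapping time is only counted once.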
SDAK

The amount of data transferred to and from disk. disk.sdak
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdak
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdak
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdak
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdak
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdak
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdak
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdak
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdak
The average I/O operation size. disk_avgsz.sdak
The average discard operation size. disk_ext_avgsz.sdak
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdak
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdak
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdak
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdak
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdak
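All of these charts can also be read programmatically through the Netdata agent's REST API rather than scraped from the dashboard. A sketch using the v1 data endpoint; the host, port, and chart id are placeholders (19999 is Netdata's default port), and the exact response layout may differ between Netdata versions:

# Query the last 60 seconds of one disk chart from a Netdata agent.
import json
import urllib.request

url = ("http://localhost:19999/api/v1/data"
       "?chart=disk.sdak&after=-60&format=json")
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

result = data["result"]          # {"labels": [...], "data": [...]}
print(result["labels"])          # e.g. ['time', 'reads', 'writes']
for row in result["data"][:5]:
    print(row)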
SDAL

The amount of data transferred to and from disk. disk.sdal
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdal
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge requests that are adjacent to each other (see the merged operations chart). disk_ops.sdal
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow down future write operations to the involved blocks. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdal
Backlog is an indication of the duration of pending disk operations. On every I/O event, the system multiplies the time spent doing I/O since the last update of this field by the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdal
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdal
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that, depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdal
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_await.sdal
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in the queue and the time spent servicing them. disk_ext_await.sdal
The average I/O operation size. disk_avgsz.sdal
The average discard operation size. disk_ext_avgsz.sdal
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading. disk_svctm.sdal
The number of merged disk operations. The system is able to merge adjacent I/O operations; for example, two 4 KB reads can become one 8 KB read before being handed to the disk. disk_mops.sdal
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdal
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdal
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdal
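For quick scripts, the same underlying counters are also exposed by the psutil library, which parses /proc/diskstats under the hood on Linux (the busy_time field is Linux-only):

# Per-disk I/O counters via psutil instead of reading /proc directly.
import psutil

for name, io in psutil.disk_io_counters(perdisk=True).items():
    print(f"{name}: {io.read_count} reads, {io.write_count} writes, "
          f"{io.read_bytes} bytes read, {io.busy_time} ms busy")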
Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdak Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdak Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdak Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdak The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdak The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdak The average I/O operation size. disk_avgsz.sdak The average discard operation size. disk_ext_avgsz.sdak The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdak The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdak The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdak The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdak The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdak SDAL disk.sdal disk.sdal disk_util.sdal The amount of data transferred to and from disk. disk.sdal The amount of discarded data that are no longer in use by a mounted file system. 
disk_ext.sdal Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdal The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdal Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdal Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdal Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdal The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdal The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdal The average I/O operation size. disk_avgsz.sdal The average discard operation size. disk_ext_avgsz.sdal The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdal The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdal The number of merged discard disk operations. 
Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdal The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdal The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdal SDB disk.sdb disk.sdb disk_util.sdb The amount of data transferred to and from disk. disk.sdb The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdb Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdb The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdb Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdb Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdb Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdb The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdb The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. 
disk_ext_await.sdb The average I/O operation size. disk_avgsz.sdb The average discard operation size. disk_ext_avgsz.sdb The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdb The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdb The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdb The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdb The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdb SDC disk.sdc disk.sdc disk_util.sdc The amount of data transferred to and from disk. disk.sdc The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdc Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdc The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdc Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdc Disk Busy Time measures the amount of time the disk was busy with something. 
disk_busy.sdc Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdc The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdc The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdc The average I/O operation size. disk_avgsz.sdc The average discard operation size. disk_ext_avgsz.sdc The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdc The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdc The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdc The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdc The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdc SDD disk.sdd disk.sdd disk_util.sdd The amount of data transferred to and from disk. disk.sdd The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdd Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdd The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. 
Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdd Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdd Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdd Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdd The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdd The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdd The average I/O operation size. disk_avgsz.sdd The average discard operation size. disk_ext_avgsz.sdd The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdd The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdd The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdd The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdd The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdd SDE disk.sde disk.sde disk_util.sde The amount of data transferred to and from disk. disk.sde The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sde Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sde The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... 
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sde Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sde Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sde Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sde The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sde The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sde The average I/O operation size. disk_avgsz.sde The average discard operation size. disk_ext_avgsz.sde The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sde The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sde The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sde The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sde The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sde SDF disk.sdf disk.sdf disk_util.sdf The amount of data transferred to and from disk. disk.sdf The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdf Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdf The number (after merges) of completed discard/flush requests. 
SDF
disk.sdf: The amount of data transferred to and from the disk.
disk_ext.sdf: The amount of discarded data that is no longer in use by a mounted file system.
disk_ops.sdf: Completed disk I/O operations. Keep in mind that the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart).
disk_ext_ops.sdf, disk_backlog.sdf, disk_busy.sdf, disk_util.sdf, disk_await.sdf, disk_ext_await.sdf, disk_avgsz.sdf, disk_ext_avgsz.sdf, disk_svctm.sdf, disk_mops.sdf, disk_ext_mops.sdf, disk_iotime.sdf and disk_ext_iotime.sdf: the same metrics as described for sde above.
The remaining disks expose an identical chart set; only the device suffix in the chart IDs changes.
SDG
disk.sdg, disk_ext.sdg, disk_ops.sdg, disk_ext_ops.sdg, disk_backlog.sdg, disk_busy.sdg, disk_util.sdg, disk_await.sdg, disk_ext_await.sdg, disk_avgsz.sdg, disk_ext_avgsz.sdg, disk_svctm.sdg, disk_mops.sdg, disk_ext_mops.sdg, disk_iotime.sdg and disk_ext_iotime.sdg: the same per-disk metric set and descriptions as above.
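The busy-time and utilization charts listed above are derived from the kernel's cumulative per-device counters. As a hedged sketch, assuming the standard Linux /proc/diskstats layout (where the 13th whitespace-separated field is milliseconds spent doing I/O), utilization over an interval can be computed like this; the function names are illustrative only.

    import time

    def io_ticks_ms(device):
        """Cumulative milliseconds the device spent doing I/O,
        from /proc/diskstats (13th field on a standard layout)."""
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == device:
                    return int(parts[12])
        raise ValueError(f"device {device!r} not found in /proc/diskstats")

    def utilization_percent(device, interval_s=1.0):
        before = io_ticks_ms(device)
        time.sleep(interval_s)
        after = io_ticks_ms(device)
        # Busy time over the interval as a percentage; 100% means the
        # disk always had at least one outstanding operation.
        return 100.0 * (after - before) / (interval_s * 1000.0)

    # Example: print(utilization_percent("sdf"))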
SDH
disk.sdh, disk_ext.sdh, disk_ops.sdh, disk_ext_ops.sdh, disk_backlog.sdh, disk_busy.sdh, disk_util.sdh, disk_await.sdh, disk_ext_await.sdh, disk_avgsz.sdh, disk_ext_avgsz.sdh, disk_svctm.sdh, disk_mops.sdh, disk_ext_mops.sdh, disk_iotime.sdh and disk_ext_iotime.sdh: the same per-disk metric set and descriptions as above.
SDI
disk.sdi, disk_ext.sdi, disk_ops.sdi, disk_ext_ops.sdi, disk_backlog.sdi, disk_busy.sdi, disk_util.sdi, disk_await.sdi, disk_ext_await.sdi, disk_avgsz.sdi, disk_ext_avgsz.sdi, disk_svctm.sdi, disk_mops.sdi, disk_ext_mops.sdi, disk_iotime.sdi and disk_ext_iotime.sdi: the same per-disk metric set and descriptions as above.
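The await metrics described earlier are averages over a sampling interval. Below is a minimal sketch of the arithmetic, with invented parameter names, assuming deltas of two cumulative counters between samples: completed operations, and milliseconds the requests spent queued plus being serviced.

    def await_ms(ops_delta, ticks_ms_delta):
        """Average time to serve a request over the interval: total time
        spent in queue and in service, divided by completed requests."""
        return ticks_ms_delta / ops_delta if ops_delta else 0.0

    # Example: 120 requests that together spent 960 ms queued and being
    # serviced give an average await of 8 ms per request.
    assert await_ms(120, 960) == 8.0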
SDJ
disk.sdj, disk_ext.sdj, disk_ops.sdj, disk_ext_ops.sdj, disk_backlog.sdj, disk_busy.sdj, disk_util.sdj, disk_await.sdj, disk_ext_await.sdj, disk_avgsz.sdj, disk_ext_avgsz.sdj, disk_svctm.sdj, disk_mops.sdj, disk_ext_mops.sdj, disk_iotime.sdj and disk_ext_iotime.sdj: the same per-disk metric set and descriptions as above.
SDK
disk.sdk, disk_ext.sdk, disk_ops.sdk, disk_ext_ops.sdk, disk_backlog.sdk, disk_busy.sdk, disk_util.sdk, disk_await.sdk, disk_ext_await.sdk, disk_avgsz.sdk, disk_ext_avgsz.sdk, disk_svctm.sdk, disk_mops.sdk, disk_ext_mops.sdk, disk_iotime.sdk and disk_ext_iotime.sdk: the same per-disk metric set and descriptions as above.
SDL
disk.sdl, disk_ext.sdl, disk_ops.sdl, disk_ext_ops.sdl, disk_backlog.sdl, disk_busy.sdl, disk_util.sdl, disk_await.sdl, disk_ext_await.sdl, disk_avgsz.sdl, disk_ext_avgsz.sdl, disk_svctm.sdl, disk_mops.sdl, disk_ext_mops.sdl, disk_iotime.sdl and disk_ext_iotime.sdl: the same per-disk metric set and descriptions as above.
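The caveat in the disk_svctm description, that parallel execution makes the reported average service time misleading, can be made concrete with a small, purely illustrative calculation:

    def svctm_ms(busy_ms_delta, completed_ops_delta):
        """Busy time of the disk divided by completed operations."""
        return busy_ms_delta / completed_ops_delta if completed_ops_delta else 0.0

    # A disk busy for 800 ms that completed 100 operations reports 8 ms.
    assert svctm_ms(800, 100) == 8.0
    # With 4 operations in flight at a time, the busy clock still advances
    # only once, so the same 800 ms with 400 completions reports 2 ms,
    # well below the true per-operation service time - hence "misleading".
    assert svctm_ms(800, 400) == 2.0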
SDM
disk.sdm, disk_ext.sdm, disk_ops.sdm, disk_ext_ops.sdm, disk_backlog.sdm, disk_busy.sdm, disk_util.sdm, disk_await.sdm, disk_ext_await.sdm, disk_avgsz.sdm, disk_ext_avgsz.sdm, disk_svctm.sdm, disk_mops.sdm, disk_ext_mops.sdm, disk_iotime.sdm and disk_ext_iotime.sdm: the same per-disk metric set and descriptions as above.
SDN
disk.sdn, disk_ext.sdn, disk_ops.sdn, disk_ext_ops.sdn, disk_backlog.sdn, disk_busy.sdn, disk_util.sdn, disk_await.sdn, disk_ext_await.sdn, disk_avgsz.sdn, disk_ext_avgsz.sdn, disk_svctm.sdn, disk_mops.sdn, disk_ext_mops.sdn, disk_iotime.sdn and disk_ext_iotime.sdn: the same per-disk metric set and descriptions as above.
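The merged-operations charts count requests coalesced before reaching the disk. The toy function below is an illustration rather than the kernel's actual algorithm; it reproduces the example from the description, where two 4 KB reads at consecutive offsets collapse into one 8 KB read.

    def merge_adjacent(requests):
        """Coalesce requests whose byte ranges touch, the way the block
        layer merges adjacent I/O before handing it to the disk.
        requests: list of (offset, size) tuples in bytes."""
        merged = []
        for offset, size in sorted(requests):
            if merged and merged[-1][0] + merged[-1][1] == offset:
                last_offset, last_size = merged[-1]
                merged[-1] = (last_offset, last_size + size)
            else:
                merged.append((offset, size))
        return merged

    # Two adjacent 4 KB reads become a single 8 KB read:
    assert merge_adjacent([(0, 4096), (4096, 4096)]) == [(0, 8192)]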
SDO
disk.sdo, disk_ext.sdo, disk_ops.sdo, disk_ext_ops.sdo, disk_backlog.sdo, disk_busy.sdo, disk_util.sdo, disk_await.sdo, disk_ext_await.sdo, disk_avgsz.sdo, disk_ext_avgsz.sdo, disk_svctm.sdo, disk_mops.sdo, disk_ext_mops.sdo, disk_iotime.sdo and disk_ext_iotime.sdo: the same per-disk metric set and descriptions as above.
SDP
disk.sdp, disk_ext.sdp, disk_ops.sdp, disk_ext_ops.sdp, disk_backlog.sdp, disk_busy.sdp, disk_util.sdp, disk_await.sdp, disk_ext_await.sdp, disk_avgsz.sdp, disk_ext_avgsz.sdp, disk_svctm.sdp, disk_mops.sdp, disk_ext_mops.sdp, disk_iotime.sdp and disk_ext_iotime.sdp: the same per-disk metric set and descriptions as above.
SDQ
disk.sdq, disk_ext.sdq, disk_ops.sdq, disk_ext_ops.sdq, disk_backlog.sdq, disk_busy.sdq, disk_util.sdq, disk_await.sdq, disk_ext_await.sdq, disk_avgsz.sdq, disk_ext_avgsz.sdq, disk_svctm.sdq, disk_mops.sdq, disk_ext_mops.sdq, disk_iotime.sdq and disk_ext_iotime.sdq: the same per-disk metric set and descriptions as above.
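The average-size charts (disk_avgsz, disk_ext_avgsz) divide the data moved by the operations completed. A sketch, assuming the usual 512-byte sector unit of the kernel's disk counters:

    def avg_io_size_kib(sectors_delta, ops_delta):
        """Average I/O size over an interval, with counters in
        512-byte sectors."""
        return (sectors_delta * 512 / 1024) / ops_delta if ops_delta else 0.0

    # 1600 sectors moved by 100 operations -> 8 KiB average operation size.
    assert avg_io_size_kib(1600, 100) == 8.0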
disk_await.sdr The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdr The average I/O operation size. disk_avgsz.sdr The average discard operation size. disk_ext_avgsz.sdr The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdr The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdr The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdr The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdr The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdr SDS disk.sds disk.sds disk_util.sds The amount of data transferred to and from disk. disk.sds The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sds Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sds The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sds Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. 
While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sds Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sds Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sds The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sds The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sds The average I/O operation size. disk_avgsz.sds The average discard operation size. disk_ext_avgsz.sds The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sds The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sds The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sds The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sds The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sds SDT disk.sdt disk.sdt disk_util.sdt The amount of data transferred to and from disk. disk.sdt The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdt Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdt The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush... The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. 
Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. show more information disk_ext_ops.sdt Backlog is an indication of the duration of pending disk operations. On every I/O event the system is multiplying the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdt Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdt Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdt The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdt The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdt The average I/O operation size. disk_avgsz.sdt The average discard operation size. disk_ext_avgsz.sdt The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading. disk_svctm.sdt The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before given to disk. disk_mops.sdt The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdt The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdt The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdt SDU disk.sdu disk.sdu disk_util.sdu The amount of data transferred to and from disk. disk.sdu The amount of discarded data that are no longer in use by a mounted file system. disk_ext.sdu Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge adjacent to each other (see merged operations chart). disk_ops.sdu The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drivers (SSDs) and thinly-provisioned storage. 
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdu
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdu
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdu
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdu
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdu
The average I/O operation size. disk_avgsz.sdu
The average discard operation size. disk_ext_avgsz.sdu
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdu
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdu
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdu
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdu
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdu
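The per-disk values above (utilization, await, average size, service time, backlog) are all derived from the raw counters the kernel exposes in /proc/diskstats. As a rough illustration of how such values can be computed, here is a minimal Python sketch that samples /proc/diskstats twice and derives them for one device; the field layout follows the kernel's Documentation/admin-guide/iostats.rst, and the device name and sampling interval are arbitrary assumptions, not the collector's actual code.

    # Minimal sketch: derive iostat-style metrics for one disk from /proc/diskstats.
    # First 11 value fields per iostats.rst: reads, reads merged, sectors read,
    # ms reading, writes, writes merged, sectors written, ms writing,
    # I/Os in progress, ms doing I/O (io_ticks), weighted ms doing I/O (backlog).
    import time

    def snapshot(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == dev:
                    return [int(x) for x in parts[3:14]]
        raise ValueError(f"device {dev} not found")

    DEV, INTERVAL = "sds", 1.0   # assumptions; any device from the charts above
    a = snapshot(DEV)
    time.sleep(INTERVAL)
    b = snapshot(DEV)
    d = [y - x for x, y in zip(a, b)]

    ops = d[0] + d[4]                 # completed reads + writes in the interval
    io_ms = d[3] + d[7]               # time spent by requests (queue + service)
    print("util %  :", 100.0 * d[9] / (INTERVAL * 1000))  # busy time vs wall clock
    print("await ms:", io_ms / ops if ops else 0.0)       # avg time per completed op
    print("avg KB  :", (d[2] + d[6]) * 512 / 1024 / ops if ops else 0.0)  # 512-byte sectors
    print("svctm ms:", d[9] / ops if ops else 0.0)        # misleading if ops run in parallel

Note how svctm divides busy time by completed operations: exactly why the description warns that the result is misleading for devices executing operations in parallel.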
SDV
The amount of data transferred to and from disk. disk.sdv
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdv
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart). disk_ops.sdv
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdv
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdv
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdv
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdv
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdv
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdv
The average I/O operation size. disk_avgsz.sdv
The average discard operation size. disk_ext_avgsz.sdv
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdv
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdv
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdv
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdv
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdv
SDW
The amount of data transferred to and from disk. disk.sdw
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdw
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart). disk_ops.sdw
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdw
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdw
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdw
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdw
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdw
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdw
The average I/O operation size. disk_avgsz.sdw
The average discard operation size. disk_ext_avgsz.sdw
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdw
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdw
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdw
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdw
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdw
SDX
The amount of data transferred to and from disk. disk.sdx
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdx
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart). disk_ops.sdx
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdx
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdx
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdx
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdx
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdx
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdx
The average I/O operation size. disk_avgsz.sdx
The average discard operation size. disk_ext_avgsz.sdx
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdx
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdx
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdx
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdx
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdx
SDY
The amount of data transferred to and from disk. disk.sdy
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdy
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart). disk_ops.sdy
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdy
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdy
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdy
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdy
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdy
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdy
The average I/O operation size. disk_avgsz.sdy
The average discard operation size. disk_ext_avgsz.sdy
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdy
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdy
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdy
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdy
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdy
SDZ
The amount of data transferred to and from disk. disk.sdz
The amount of discarded data that is no longer in use by a mounted file system. disk_ext.sdz
Completed disk I/O operations. Keep in mind the number of operations requested might be higher, since the system is able to merge operations that are adjacent to each other (see the merged operations chart). disk_ops.sdz
The number (after merges) of completed discard/flush requests. Discard commands inform disks which blocks of data are no longer considered to be in use and therefore can be erased internally. They are useful for solid-state drives (SSDs) and thinly-provisioned storage. Discarding/trimming enables the SSD to handle garbage collection more efficiently, which would otherwise slow future write operations to the involved blocks down. Flush operations transfer all modified in-core data (i.e., modified buffer cache pages) to the disk device so that all changed information can be retrieved even if the system crashes or is rebooted. Flush requests are executed by disks. Flush requests are not tracked for partitions. Before being merged, flush operations are counted as writes. disk_ext_ops.sdz
Backlog is an indication of the duration of pending disk operations. On every I/O event the system multiplies the time spent doing I/O since the last update of this field with the number of pending operations. While not accurate, this metric can provide an indication of the expected completion time of the operations in progress. disk_backlog.sdz
Disk Busy Time measures the amount of time the disk was busy with something. disk_busy.sdz
Disk Utilization measures the amount of time the disk was busy with something. This is not related to its performance. 100% means that the system always had an outstanding operation on the disk. Keep in mind that depending on the underlying technology of the disk, 100% here may or may not be an indication of congestion. disk_util.sdz
The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_await.sdz
The average time for discard/flush requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. disk_ext_await.sdz
The average I/O operation size. disk_avgsz.sdz
The average discard operation size. disk_ext_avgsz.sdz
The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations, the reported average service time will be misleading. disk_svctm.sdz
The number of merged disk operations. The system is able to merge adjacent I/O operations, for example two 4KB reads can become one 8KB read before being given to the disk. disk_mops.sdz
The number of merged discard disk operations. Discard operations which are adjacent to each other may be merged for efficiency. disk_ext_mops.sdz
The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute I/O operations in parallel. disk_iotime.sdz
The sum of the duration of all completed discard/flush operations. This number can exceed the interval if the disk is able to execute discard/flush operations in parallel. disk_ext_iotime.sdz
/
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user from running out of space. disk_space._
Inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum number of files the filesystem can hold. It is possible for a device to run out of inodes. When this happens, new files cannot be created on the device, even though there may be free space available. disk_inodes._
/DEV
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user from running out of space. disk_space._dev
Inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum number of files the filesystem can hold. It is possible for a device to run out of inodes. When this happens, new files cannot be created on the device, even though there may be free space available. disk_inodes._dev
/DEV/SHM
Disk space utilization. reserved for root is automatically reserved by the system to prevent the root user from running out of space. disk_space._dev_shm
Inodes (or index nodes) are filesystem objects (e.g. files and directories). On many types of file system implementations, the maximum number of inodes is fixed at filesystem creation, limiting the maximum number of files the filesystem can hold. It is possible for a device to run out of inodes. When this happens, new files cannot be created on the device, even though there may be free space available. disk_inodes._dev_shm
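Both the space and the inode charts correspond to the statvfs information of each mount point. As a minimal sketch of the same idea, assuming Python's standard os.statvfs and an arbitrary list of mount points:

    # Sketch: report used/available space and inodes per mount point,
    # as the disk_space.* and disk_inodes.* charts do.
    import os

    for mnt in ("/", "/dev", "/dev/shm"):
        st = os.statvfs(mnt)
        total = st.f_blocks * st.f_frsize
        avail = st.f_bavail * st.f_frsize                    # available to unprivileged users
        reserved = (st.f_bfree - st.f_bavail) * st.f_frsize  # free blocks kept for root
        print(f"{mnt}: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB "
              f"({reserved / 2**30:.1f} GiB reserved for root), "
              f"inodes free: {st.f_favail} of {st.f_files}")

The difference between f_bfree and f_bavail is exactly the root-reserved space mentioned above: it is free, but not usable by ordinary users.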
--------------------------------------------------------------------------------
MD ARRAYS
RAID devices are virtual devices created from two or more real block devices. Linux Software RAID devices are implemented through the md (Multiple Devices) device driver. Netdata monitors the current status of MD arrays by reading the /proc/mdstat and /sys/block/%s/md/mismatch_cnt files.
HEALTH
Number of failed devices per MD array. Netdata retrieves this data from the [n/m] field of the md status line. It means that ideally the array would have n devices, however, currently, m devices are in use. failed disks is n-m. mdstat.mdstat_health
MD2 (RAID1)
Number of devices in use and in the down state. Netdata retrieves this data from the [n/m] field of the md status line. It means that ideally the array would have n devices, however, currently, m devices are in use. inuse is m, down is n-m. mdstat.md2_disks
When performing check and repair, and possibly when performing resync, md will count the number of errors that are found. A count of mismatches is recorded in the sysfs file md/mismatch_cnt. This value is the number of sectors that were re-written, or (for check) would have been re-written. It may be larger than the number of actual errors by a factor of the number of sectors in a page. Mismatches cannot be interpreted very reliably on RAID1 or RAID10, especially when the device is used for swap. On a truly clean RAID5 or RAID6 array, any mismatches should indicate a hardware problem at some level - software issues should never cause such a mismatch. For details, see md(4). mdstat.md2_mismatch
Completion progress of the ongoing operation. mdstat.md2_operation
Estimated time to complete the ongoing operation. The time is only an approximation since the operation speed will vary according to other I/O demands. mdstat.md2_finish
Speed of the ongoing operation. The system-wide rebuild speed limits are specified in the /proc/sys/dev/raid/{speed_limit_min,speed_limit_max} files. These options are good for tweaking the rebuild process and may increase overall system load, CPU and memory usage. mdstat.md2_speed
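To make the [n/m] convention concrete, here is a rough Python sketch of that parsing, assuming the usual /proc/mdstat layout; it is an illustration of the idea, not the collector's actual implementation, and the regular expression may not cover every exotic mdstat format.

    # Sketch: extract n (ideal devices) and m (in-use devices) from the [n/m]
    # field of /proc/mdstat, plus the mismatch count from sysfs.
    import re

    with open("/proc/mdstat") as f:
        text = f.read()

    # e.g. "md2 : active raid1 sda2[0] sdb2[1]\n  1953381376 blocks ... [2/2] [UU]"
    for name, nm in re.findall(r"^(md\d+) :.*?\[(\d+/\d+)\]", text, re.S | re.M):
        n, m = map(int, nm.split("/"))
        print(f"{name}: in use {m}, down/failed {n - m}")
        try:
            with open(f"/sys/block/{name}/md/mismatch_cnt") as f:
                print(f"{name}: mismatch_cnt {f.read().strip()}")
        except FileNotFoundError:
            pass  # not all array types expose a mismatch counter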
--------------------------------------------------------------------------------
NETWORKING STACK
Metrics for the networking stack of the system. These metrics are collected from /proc/net/netstat or by attaching kprobes to kernel functions, apply to both IPv4 and IPv6 traffic and are related to the operation of the kernel networking stack.
TCP
TCP connection aborts. BadData - happens while the connection is on FIN_WAIT1 and the kernel receives a packet with a sequence number beyond the last one for this connection - the kernel responds with RST (closes the connection). UserClosed - happens when the kernel receives data on an already closed connection and responds with RST. NoMemory - happens when there are too many orphaned sockets (not attached to an fd) and the kernel has to drop a connection - sometimes it will send an RST, sometimes it won't. Timeout - happens when a connection times out. Linger - happens when the kernel killed a socket that was already closed by the application and lingered around for long enough. Failed - happens when the kernel attempted to send an RST but failed because there was no memory available. ip.tcpconnaborts
The SYN queue of the kernel tracks TCP handshakes until connections get fully established. It overflows when too many incoming TCP connection requests hang in the half-open state and the server is not configured to fall back to SYN cookies. Overflows are usually caused by SYN flood DoS attacks. Drops - number of connections dropped because the SYN queue was full and SYN cookies were disabled. Cookies - number of SYN cookies sent because the SYN queue was full. ip.tcp_syn_queue
The accept queue of the kernel holds the fully established TCP connections, waiting to be handled by the listening application. Overflows - the number of established connections that could not be handled because the receive queue of the listening application was full. Drops - number of incoming connections that could not be handled, including SYN floods, overflows, out of memory, security issues, no route to destination, reception of related ICMP messages, socket is broadcast or multicast. ip.tcp_accept_queue
TCP prevents out-of-order packets by either sequencing them in the correct order or by requesting the retransmission of out-of-order packets. Timestamp - detected re-ordering using the timestamp option. SACK - detected re-ordering using the Selective Acknowledgment algorithm. FACK - detected re-ordering using the Forward Acknowledgment algorithm. Reno - detected re-ordering using the Fast Retransmit algorithm. ip.tcpreorders
TCP maintains an out-of-order queue to keep the out-of-order packets in the TCP communication. InQueue - the TCP layer receives an out-of-order packet and has enough memory to queue it. Dropped - the TCP layer receives an out-of-order packet but does not have enough memory, so it drops it. Merged - the received out-of-order packet has an overlap with the previous packet. The overlapping part will be dropped. All these packets will also be counted into InQueue. Pruned - packets dropped from the out-of-order queue because of socket buffer overrun. ip.tcpofo
SYN cookies are used to mitigate SYN flood attacks. Received - after sending a SYN cookie, it came back to us and passed the check. Sent - an application was not able to accept a connection fast enough, so the kernel could not store an entry in the queue for this connection. Instead of dropping it, it sent a SYN cookie to the client. Failed - the MSS decoded from the SYN cookie is invalid. When this counter is incremented, the received packet won't be treated as a SYN cookie. ip.tcpsyncookies
ECN
Explicit Congestion Notification (ECN) is an extension to IP and TCP that allows end-to-end notification of network congestion without dropping packets. ECN is an optional feature that may be used between two ECN-enabled endpoints when the underlying network infrastructure also supports it. Total number of received IP packets with ECN bits set in the system. CEP - congestion encountered. NoECTP - non ECN-capable transport. ECTP0 and ECTP1 - ECN-capable transport. ip.ecnpkts
--------------------------------------------------------------------------------
IPV4 NETWORKING
Metrics for the IPv4 stack of the system. Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet. IPv4 is a connectionless protocol for use on packet-switched networks. It operates on a best effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP).
SOCKETS
The total number of used sockets for all address families in this system. ipv4.sockstat_sockets
PACKETS
IPv4 packets statistics for this host. Received - packets received by the IP layer. This counter will be increased even if the packet is dropped later. Sent - packets sent via the IP layer, for both unicast and multicast packets. This counter does not include any packets counted in Forwarded. Forwarded - input packets for which this host was not their final IP destination, as a result of which an attempt was made to find a route to forward them to that final destination. In hosts which do not act as IP Gateways, this counter will include only those packets which were Source-Routed and the Source-Route option processing was successful. Delivered - packets delivered to the upper layer protocols, e.g. TCP, UDP, ICMP, and so on. ipv4.packets
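Most of the IPv4 counters in this section (packets, errors, ICMP, TCP, UDP) are exposed by the kernel in /proc/net/snmp, which stores each protocol as a header line followed by a matching value line. A minimal parsing sketch, assuming that two-line format:

    # Sketch: parse /proc/net/snmp into {protocol: {counter: value}}.
    # The file repeats each protocol as a header line and a value line, e.g.
    #   Ip: Forwarding DefaultTTL InReceives ...
    #   Ip: 1 64 1234567 ...
    def read_snmp(path="/proc/net/snmp"):
        stats = {}
        with open(path) as f:
            lines = f.read().splitlines()
        for head, vals in zip(lines[::2], lines[1::2]):
            proto, names = head.split(":", 1)
            _, numbers = vals.split(":", 1)
            stats[proto] = dict(zip(names.split(), map(int, numbers.split())))
        return stats

    snmp = read_snmp()
    ip = snmp["Ip"]
    print("received:", ip["InReceives"], "forwarded:", ip["ForwDatagrams"],
          "delivered:", ip["InDelivers"], "sent:", ip["OutRequests"])

The four fields printed here correspond to the Received, Forwarded, Delivered and Sent dimensions described just above.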
ERRORS
The number of discarded IPv4 packets. InDiscards, OutDiscards - inbound and outbound packets which were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol. InHdrErrors - input packets that have been discarded due to errors in their IP headers, including bad checksums, version number mismatch, other format errors, time-to-live exceeded, errors discovered in processing their IP options, etc. OutNoRoutes - packets that have been discarded because no route could be found to transmit them to their destination. This includes any packets which a host cannot route because all of its default gateways are down. InAddrErrors - input packets that have been discarded due to an invalid IP address, or because the destination IP address is not a local address and IP forwarding is not enabled. InUnknownProtos - input packets which were discarded because of an unknown or unsupported protocol. ipv4.errors
ICMP
The number of transferred IPv4 ICMP messages. Received, Sent - ICMP messages which the host received and attempted to send. Both these counters include errors. ipv4.icmp
The number of IPv4 ICMP errors. InErrors - received ICMP messages determined as having ICMP-specific errors, e.g. bad ICMP checksums, bad length, etc. OutErrors - ICMP messages which this host did not send due to problems discovered within ICMP such as a lack of buffers. This counter does not include errors discovered outside the ICMP layer such as the inability of IP to route the resultant datagram. InCsumErrors - received ICMP messages with a bad checksum. ipv4.icmp_errors
The number of transferred IPv4 ICMP control messages. ipv4.icmpmsg
TCP
The number of TCP connections for which the current state is either ESTABLISHED or CLOSE-WAIT. This is a snapshot of the established connections at the time of measurement (i.e. a connection established and a connection disconnected within the same iteration will not affect this metric). ipv4.tcpsock
The number of TCP sockets in the system in certain states. Alloc - in any TCP state. Orphan - no longer attached to a socket descriptor in any user process, but for which the kernel is still required to maintain state in order to complete the transport protocol. InUse - in any TCP state, excluding TIME-WAIT and CLOSED. TimeWait - in the TIME-WAIT state. ipv4.sockstat_tcp_sockets
The number of packets transferred by the TCP layer. Received - received packets, including those received in error, such as checksum errors, invalid TCP headers, and so on. Sent - sent packets, excluding the retransmitted packets, but including the SYN, ACK, and RST packets. ipv4.tcppackets
TCP connection statistics. Active - number of outgoing TCP connections attempted by this host. Passive - number of incoming TCP connections accepted by this host. ipv4.tcpopens
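The socket-state counters above (and the TCP memory chart below) map to /proc/net/sockstat. A small sketch of reading it, assuming the usual "TCP: inuse X orphan Y tw Z alloc A mem B" line format:

    # Sketch: read TCP socket states and memory from /proc/net/sockstat.
    # Typical line: "TCP: inuse 12 orphan 0 tw 3 alloc 15 mem 2"
    with open("/proc/net/sockstat") as f:
        for line in f:
            if line.startswith("TCP:"):
                fields = line.split()[1:]
                tcp = dict(zip(fields[::2], map(int, fields[1::2])))
                print("alloc:", tcp["alloc"], "inuse:", tcp["inuse"],
                      "orphan:", tcp["orphan"], "timewait:", tcp["tw"],
                      "mem:", tcp["mem"])  # note: mem is in pages, not bytes

Note that the kernel reports socket memory in pages; a dashboard has to multiply by the page size to show bytes.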
TCP errors. InErrs - TCP segments received in error (including header too small, checksum errors, sequence errors, bad packets - for both IPv4 and IPv6). InCsumErrors - TCP segments received with checksum errors (for both IPv4 and IPv6). RetransSegs - TCP segments retransmitted. ipv4.tcperrors
TCP handshake statistics. EstabResets - established connection resets (i.e. connections that made a direct transition from ESTABLISHED or CLOSE_WAIT to CLOSED). OutRsts - TCP segments sent with the RST flag set (for both IPv4 and IPv6). AttemptFails - the number of times TCP connections made a direct transition from either SYN_SENT or SYN_RECV to CLOSED, plus the number of times TCP connections made a direct transition from SYN_RECV to LISTEN. SynRetrans - retries for new outbound TCP connections, which can indicate general connectivity issues or backlog on the remote host. ipv4.tcphandshake
The amount of memory used by allocated TCP sockets. ipv4.sockstat_tcp_mem
UDP
The number of used UDP sockets. ipv4.sockstat_udp_sockets
The number of transferred UDP packets. ipv4.udppackets
The number of errors encountered during transferring UDP packets. RcvbufErrors - the receive buffer is full. SndbufErrors - the send buffer is full, no kernel memory is available, or the IP layer reported an error when trying to send the packet and no error queue has been set up. InErrors - an aggregated counter for all errors, excluding NoPorts. NoPorts - no application is listening at the destination port. InCsumErrors - a UDP checksum failure is detected. IgnoredMulti - ignored multicast packets. ipv4.udperrors
The amount of memory used by allocated UDP sockets. ipv4.sockstat_udp_mem
FRAGMENTS
IPv4 fragmentation statistics for this system. OK - packets that have been successfully fragmented. Failed - packets that have been discarded because they needed to be fragmented but could not be, e.g. because the Don't Fragment (DF) flag was set. Created - fragments that have been generated as a result of fragmentation. ipv4.fragsout
--------------------------------------------------------------------------------
IPV6 NETWORKING
Metrics for the IPv6 stack of the system. Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. IPv6 is intended to replace IPv4.
TCP6
The number of TCP sockets in any state, excluding TIME-WAIT and CLOSED. ipv6.sockstat6_tcp_sockets
--------------------------------------------------------------------------------
NETWORK INTERFACES
Performance metrics for network interfaces. Netdata retrieves this data by reading the /proc/net/dev file and the /sys/class/net/ directory.
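The per-interface state charts that follow (operstate, carrier, duplex, speed, MTU) map directly onto small files under /sys/class/net/<interface>/. As a minimal sketch, assuming an interface name taken from the sections below:

    # Sketch: read the per-interface sysfs files that back these charts.
    # "speed" and "duplex" may be absent or unreadable for virtual
    # interfaces (bridges, veth pairs), hence the fallback.
    from pathlib import Path

    def read_attr(iface, attr, default="unknown"):
        try:
            return (Path("/sys/class/net") / iface / attr).read_text().strip()
        except OSError:
            return default

    iface = "enp175s0f0"  # assumption; any interface listed below works
    for attr in ("operstate", "carrier", "mtu", "speed", "duplex"):
        print(f"{iface} {attr}: {read_attr(iface, attr)}")

The operstate file returns the textual states listed below (up, down, dormant, ...), which the dashboard then encodes with the numeric state map.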
BR-1CC92120B84D
The amount of traffic transferred by the network interface. net.br-1cc92120b84d
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.br-1cc92120b84d
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.br-1cc92120b84d
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.br-1cc92120b84d
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.br-1cc92120b84d
BR-66FED93A8F7A
The amount of traffic transferred by the network interface. net.br-66fed93a8f7a
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.br-66fed93a8f7a
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.br-66fed93a8f7a
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.br-66fed93a8f7a
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.br-66fed93a8f7a
BR-67F1A7830135
The amount of traffic transferred by the network interface. net.br-67f1a7830135
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.br-67f1a7830135
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.br-67f1a7830135
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.br-67f1a7830135
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.br-67f1a7830135
DOCKER0
The amount of traffic transferred by the network interface. net.docker0
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.docker0
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.docker0
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.docker0
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.docker0
ENP175S0F0
The amount of traffic transferred by the network interface. net.enp175s0f0
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.enp175s0f0
The interface's latest or current speed that the network adapter negotiated with the device it is connected to. This does not give the max supported speed of the NIC. net_speed.enp175s0f0
The interface's latest or current duplex that the network adapter negotiated with the device it is connected to. Unknown - the duplex mode cannot be determined. Half duplex - the communication is one direction at a time. Full duplex - the interface is able to send and receive data simultaneously. State map: 0 - unknown, 1 - half, 2 - full. net_duplex.enp175s0f0
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.enp175s0f0
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.enp175s0f0
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.enp175s0f0
ENS5F0
The amount of traffic transferred by the network interface. net.ens5f0
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.ens5f0
The number of packets that have been dropped at the network interface level. Inbound - packets received but not processed, e.g. due to softnet backlog overflow, bad/unintended VLAN tags, unknown or unregistered protocols, IPv6 frames when the server is not configured for IPv6. Outbound - packets dropped on their way to transmission, e.g. due to lack of resources. net_drops.ens5f0
The interface's latest or current speed that the network adapter negotiated with the device it is connected to. This does not give the max supported speed of the NIC. net_speed.ens5f0
The interface's latest or current duplex that the network adapter negotiated with the device it is connected to. Unknown - the duplex mode cannot be determined. Half duplex - the communication is one direction at a time. Full duplex - the interface is able to send and receive data simultaneously. State map: 0 - unknown, 1 - half, 2 - full. net_duplex.ens5f0
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.ens5f0
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.ens5f0
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.ens5f0
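The traffic, packet and drop dimensions come from the cumulative counters in /proc/net/dev, which a dashboard samples and differentiates to get rates. A rough sketch of that delta computation, assuming a 1-second interval and the standard column layout (16 counters: 8 receive, then 8 transmit):

    # Sketch: compute rx/tx rates and drops per interface by sampling
    # /proc/net/dev twice. rx columns: bytes packets errs drop ...;
    # tx columns start at index 8: bytes packets errs drop ...
    import time

    def read_dev():
        counters = {}
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:       # skip the two header lines
                name, data = line.split(":", 1)
                v = [int(x) for x in data.split()]
                counters[name.strip()] = {"rx_bytes": v[0], "rx_drop": v[3],
                                          "tx_bytes": v[8], "tx_drop": v[11]}
        return counters

    a = read_dev(); time.sleep(1.0); b = read_dev()
    for iface in b:
        prev = a.get(iface, b[iface])            # new interface -> zero delta
        rx = (b[iface]["rx_bytes"] - prev["rx_bytes"]) * 8 / 1000
        tx = (b[iface]["tx_bytes"] - prev["tx_bytes"]) * 8 / 1000
        print(f"{iface}: rx {rx:.1f} kbit/s, tx {tx:.1f} kbit/s, "
              f"drops {b[iface]['rx_drop'] + b[iface]['tx_drop']}")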
VETH8C3C463
The amount of traffic transferred by the network interface. net.veth8c3c463
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.veth8c3c463
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.veth8c3c463
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.veth8c3c463
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.veth8c3c463
VETH4221C2F
The amount of traffic transferred by the network interface. net.veth4221c2f
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.veth4221c2f
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.veth4221c2f
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.veth4221c2f
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.veth4221c2f
VETH29725B1
The amount of traffic transferred by the network interface. net.veth29725b1
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.veth29725b1
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.veth29725b1
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.veth29725b1
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.veth29725b1
VETHC36B073
The amount of traffic transferred by the network interface. net.vethc36b073
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.vethc36b073
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.vethc36b073
The current physical link state of the interface. State map: 0 - down, 1 - up. net_carrier.vethc36b073
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.vethc36b073
ENP134S0F1
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.enp134s0f1
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.enp134s0f1
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.enp134s0f1
ENP175S0F1
The number of packets transferred by the network interface. The received multicast counter is commonly calculated at the device level (unlike received) and therefore may include packets which did not reach the host. net_packets.enp175s0f1
The current operational state of the interface. Unknown - the state cannot be determined. NotPresent - the interface has missing (typically, hardware) components. Down - the interface is unable to transfer data on L1, e.g. ethernet is not plugged in or the interface is administratively down. LowerLayerDown - the interface is down due to the state of lower-layer interface(s). Testing - the interface is in testing mode, e.g. a cable test. It can’t be used for normal traffic until tests complete. Dormant - the interface is L1 up, but waiting for an external event, e.g. for a protocol to establish. Up - the interface is ready to pass packets and can be used. State map: 0 - unknown, 1 - notpresent, 2 - down, 3 - lowerlayerdown, 4 - testing, 5 - dormant, 6 - up. net_operstate.enp175s0f1
The interface's currently configured Maximum transmission unit (MTU) value. MTU is the size of the largest protocol data unit that can be communicated in a single network layer transaction. net_mtu.enp175s0f1
--------------------------------------------------------------------------------
INFINIBAND PORTS
Performance and exception statistics for InfiniBand ports. The individual port and hardware counter descriptions can be found in the Mellanox knowledge base.
MLX5 0-1, MLX5 2-1
Both ports expose the same set of charts:
The amount of traffic transferred by the port. Infiniband.ib_cntbytes_mlx5_0-1, Infiniband.ib_cntbytes_mlx5_2-1
The number of packets transferred by the port. Infiniband.ib_cntpackets_mlx5_0-1, Infiniband.ib_cntpackets_mlx5_2-1
The number of errors encountered by the port. Infiniband.ib_cnterrors_mlx5_0-1, Infiniband.ib_cnterrors_mlx5_2-1
The number of hardware errors encountered by the port. Infiniband.ib_hwcnterrors_mlx5_0-1, Infiniband.ib_hwcnterrors_mlx5_2-1
The number of hardware packets transferred by the port. Infiniband.ib_hwcntpackets_mlx5_0-1, Infiniband.ib_hwcntpackets_mlx5_2-1
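These port counters are exposed by the kernel under /sys/class/infiniband. A hedged sketch of reading them directly; the device and port names follow the mlx5_0 / port 1 naming seen above:

    from pathlib import Path

    def read_ib_counters(device="mlx5_0", port=1):
        base = Path(f"/sys/class/infiniband/{device}/ports/{port}/counters")
        out = {}
        for entry in base.iterdir():
            try:
                out[entry.name] = int(entry.read_text())
            except (OSError, ValueError):
                pass  # some counters may be unreadable depending on the HCA
        return out

    c = read_ib_counters()
    # port_rcv_data / port_xmit_data count 4-byte words, so multiply by 4
    # to get bytes.
    print("received bytes:", c.get("port_rcv_data", 0) * 4)
    print("sent bytes:", c.get("port_xmit_data", 0) * 4)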
--------------------------------------------------------------------------------
FIREWALL (NETFILTER)
Performance metrics of the netfilter components.
CONNECTION TRACKER
Netfilter connection tracker performance metrics. The connection tracker keeps track of all connections of the machine, inbound and outbound. It works by keeping a database of all open connections, tracking network address translation and connection expectations.
The number of entries in the conntrack table. netfilter.conntrack_sockets
Packet tracking statistics. New - conntrack entries added which were not expected before. Ignore - packets seen which are already connected to a conntrack entry. Invalid - packets seen which cannot be tracked. netfilter.conntrack_new
The number of changes in the conntrack tables. Inserted, Deleted - conntrack entries which were inserted or removed. Delete-list - conntrack entries which were put on the dying list. netfilter.conntrack_changes
The number of events in the "expect" table. Connection tracking expectations are the mechanism used to "expect" RELATED connections to existing ones. An expectation is a connection that is expected to happen within a period of time. Created, Deleted - expectation entries which were created or removed. New - conntrack entries added after an expectation for them was already present. netfilter.conntrack_expect
Conntrack errors. IcmpError - packets which could not be tracked due to an error condition. InsertFailed - entries for which list insertion was attempted but failed (this happens if the same entry is already present). Drop - packets dropped due to conntrack failure: either allocation of a new conntrack entry failed, or a protocol helper dropped the packet. EarlyDrop - conntrack entries dropped to make room for new ones when the maximum table size was reached. netfilter.conntrack_errors
Conntrack table lookup statistics. Searched - conntrack table lookups performed. Restarted - conntrack table lookups which had to be restarted due to hashtable resizes. Found - conntrack table lookups which were successful. netfilter.conntrack_search
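The counters behind these charts live in two places in the kernel: the current table size under /proc/sys/net/netfilter, and per-CPU statistics in /proc/net/stat/nf_conntrack. A minimal sketch of reading both, assuming the nf_conntrack module is loaded (the exact set of columns varies between kernel versions):

    def conntrack_table_usage():
        with open("/proc/sys/net/netfilter/nf_conntrack_count") as f:
            count = int(f.read())
        with open("/proc/sys/net/netfilter/nf_conntrack_max") as f:
            maximum = int(f.read())
        return count, maximum

    def conntrack_stats():
        with open("/proc/net/stat/nf_conntrack") as f:
            header = f.readline().split()
            totals = dict.fromkeys(header, 0)
            for row in f:  # one line of hexadecimal counters per CPU
                for name, value in zip(header, row.split()):
                    totals[name] += int(value, 16)
        # note: the 'entries' column repeats the global total on every row,
        # so only the per-CPU event counters are meaningful after summing
        return totals

    count, maximum = conntrack_table_usage()
    stats = conntrack_stats()
    print(f"conntrack entries: {count}/{maximum}")
    print("lookups searched/found:", stats.get("searched"), stats.get("found"))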
--------------------------------------------------------------------------------
SYSTEMD SERVICES
Resource utilization of systemd services. Netdata monitors all systemd services via cgroups (the resource accounting mechanism also used by containers).
CPU
Total CPU utilization within the system-wide CPU resources (all cores). The amount of time spent by the tasks of the cgroup in user and kernel modes. services.cpu
MEM
The amount of used RAM. services.mem_usage
DISK
The amount of data transferred from specific devices as seen by the CFQ scheduler. It is not updated when the CFQ scheduler is operating on a request queue. services.io_read
The amount of data transferred to specific devices as seen by the CFQ scheduler. It is not updated when the CFQ scheduler is operating on a request queue. services.io_write
The number of read operations performed on specific devices as seen by the CFQ scheduler. services.io_ops_read
The number of write operations performed on specific devices as seen by the CFQ scheduler. services.io_ops_write
--------------------------------------------------------------------------------
APPLICATIONS
Per-application statistics are collected using apps.plugin. This plugin walks through all processes and aggregates statistics for application groups. The plugin also counts the resources of exited children, so for processes like shell scripts the reported values include the resources used by the commands these scripts run within each timeframe.
CPU
Total CPU utilization (all cores). It includes user, system and guest time. apps.cpu
The amount of time the CPU was busy executing code in user mode (all cores). apps.cpu_user
The amount of time the CPU was busy executing code in kernel mode (all cores). apps.cpu_system
DISK
The amount of data that has been read from the storage layer. Actual physical disk I/O was required. apps.preads
The amount of data that has been written to the storage layer. Actual physical disk I/O was required. apps.pwrites
The amount of data that has been read from the storage layer. It includes things such as terminal I/O and is unaffected by whether or not actual physical disk I/O was required (the read might have been satisfied from the page cache). apps.lreads
The amount of data that has been written, or shall be written, to the storage layer. It includes things such as terminal I/O and is unaffected by whether or not actual physical disk I/O was required. apps.lwrites
The number of open files and directories. apps.files
MEM
Real memory (RAM) used by applications. This does not include shared memory. apps.mem
Virtual memory allocated by applications. apps.vmem
The number of minor faults, which did not require loading a memory page from disk. Minor page faults occur when a process needs a page that is already in memory but assigned to another process; the page is shared between the processes, so no additional data needs to be read from disk. apps.minor_faults
PROCESSES
The number of threads. apps.threads
The number of processes. apps.processes
The period of time within which at least one process in the group has been running. apps.uptime
The number of open pipes. A pipe is a unidirectional data channel that can be used for interprocess communication. apps.pipes
SWAP
The amount of swapped-out virtual memory by anonymous private pages. This does not include shared swap memory. apps.swap
The number of major faults, which required loading a memory page from disk. Major page faults occur because a required page is absent from RAM. They are expected when a process starts or needs to read additional data, and in these cases do not indicate a problem. However, a major page fault can also be the result of reading memory pages that have been written out to the swap file, which could indicate a memory shortage. apps.major_faults
NETWORK
Netdata also gives a summary of the eBPF charts in the Networking Stack submenu.
The number of open sockets. Sockets are a way to enable inter-process communication between programs running on a server, or between programs running on separate servers. This includes both network and UNIX sockets. apps.sockets
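The grouping that apps.plugin performs can be pictured as a walk over /proc. An illustrative sketch (not the plugin's actual code, which is written in C) that sums CPU time per process name; field positions follow proc(5) for /proc/<pid>/stat:

    import os
    from collections import Counter

    CLK_TCK = os.sysconf("SC_CLK_TCK")  # clock ticks per second

    def cpu_seconds_by_app():
        totals = Counter()
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/stat") as f:
                    stat = f.read()
            except OSError:
                continue  # the process exited while we were walking /proc
            # comm is parenthesized and may contain spaces, so split around it
            comm = stat[stat.index("(") + 1:stat.rindex(")")]
            fields = stat[stat.rindex(")") + 2:].split()
            utime, stime = int(fields[11]), int(fields[12])  # fields 14 and 15
            totals[comm] += (utime + stime) / CLK_TCK
        return totals

    for name, secs in cpu_seconds_by_app().most_common(5):
        print(f"{name}: {secs:.1f}s of CPU time")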
--------------------------------------------------------------------------------
USER GROUPS
Per-user-group statistics are collected using apps.plugin. This plugin walks through all processes and aggregates statistics per user group. The plugin also counts the resources of exited children, so for processes like shell scripts the reported values include the resources used by the commands these scripts run within each timeframe.
The charts mirror those of the APPLICATIONS section above, aggregated per user group instead of per application group (groups.vmem counts virtual memory allocated since the Netdata restart):
CPU: groups.cpu, groups.cpu_user, groups.cpu_system
DISK: groups.preads, groups.pwrites, groups.lreads, groups.lwrites, groups.files
MEM: groups.mem, groups.vmem, groups.minor_faults
PROCESSES: groups.threads, groups.processes, groups.uptime, groups.pipes
SWAP: groups.swap, groups.major_faults
NET: groups.sockets
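The only difference between the Applications, User Groups and Users sections is the grouping key. A hedged sketch of the per-user variant, counting processes per real UID from /proc/<pid>/status (the Gid line of the same file gives the per-group equivalent):

    import os
    import pwd
    from collections import Counter

    def processes_per_user():
        counts = Counter()
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/status") as f:
                    fields = dict(line.split(":", 1) for line in f if ":" in line)
                uid = int(fields["Uid"].split()[0])  # real UID
            except (OSError, KeyError):
                continue
            try:
                counts[pwd.getpwuid(uid).pw_name] += 1
            except KeyError:
                counts[str(uid)] += 1  # UID without a passwd entry
        return counts

    print(processes_per_user().most_common(5))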
--------------------------------------------------------------------------------
USERS
Per-user statistics are collected using apps.plugin. This plugin walks through all processes and aggregates statistics per user. The plugin also counts the resources of exited children, so for processes like shell scripts the reported values include the resources used by the commands these scripts run within each timeframe.
The charts again mirror those of the APPLICATIONS section, aggregated per user (users.vmem counts virtual memory allocated since the Netdata restart):
CPU: users.cpu, users.cpu_user, users.cpu_system
DISK: users.preads, users.pwrites, users.lreads, users.lwrites, users.files
MEM: users.mem, users.vmem, users.minor_faults
PROCESSES: users.threads, users.processes, users.uptime, users.pipes
SWAP: users.swap, users.major_faults
NET: users.sockets
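The minor_faults and major_faults charts in the three sections above correspond to per-process fault counters. A small sketch, under the same proc(5) field numbering as the earlier example, of reading them for one process:

    import os

    def page_faults(pid):
        with open(f"/proc/{pid}/stat") as f:
            stat = f.read()
        fields = stat[stat.rindex(")") + 2:].split()
        minflt = int(fields[7])  # field 10: minor faults, no disk I/O needed
        majflt = int(fields[9])  # field 12: major faults, page loaded from disk
        return minflt, majflt

    minor, major = page_faults(os.getpid())
    print(f"minor: {minor}, major: {major}")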
--------------------------------------------------------------------------------
FRONT NGINX
Container resource utilization metrics. Netdata reads this information from cgroups (abbreviated from control groups), a Linux kernel feature that limits and accounts for the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. cgroups, together with namespaces (which offer isolation between processes), provide what we usually call containers.
CPU
Total CPU utilization within the configured or system-wide (if not set) limits. When the CPU utilization of a cgroup exceeds the limit for the configured period, the tasks belonging to its hierarchy will be throttled and are not allowed to run again until the next period. cgroup_front_nginx.cpu_limit
Total CPU utilization within the system-wide CPU resources (all cores). The amount of time spent by the tasks of the cgroup in user and kernel modes. cgroup_front_nginx.cpu
MEM
RAM utilization within the configured or system-wide (if not set) limits. When the RAM utilization of a cgroup exceeds the limit, the OOM killer will start killing the tasks belonging to the cgroup. cgroup_front_nginx.mem_utilization
RAM usage within the configured or system-wide (if not set) limits. When the RAM usage of a cgroup exceeds the limit, the OOM killer will start killing the tasks belonging to the cgroup. cgroup_front_nginx.mem_usage_limit
The amount of used RAM and swap memory. cgroup_front_nginx.mem_usage
Memory usage statistics. The individual metrics are described in the memory.stat section for cgroup-v1 and cgroup-v2. cgroup_front_nginx.mem
Dirty is the amount of memory waiting to be written to disk. Writeback is how much memory is actively being written to disk. cgroup_front_nginx.writeback
Memory page fault statistics. Pgfault - all page faults. Swap - major page faults. cgroup_front_nginx.pgfaults
DISK
The amount of data transferred to and from specific devices as seen by the CFQ scheduler. It is not updated when the CFQ scheduler is operating on a request queue. cgroup_front_nginx.io
The number of I/O operations performed on specific devices as seen by the CFQ scheduler. cgroup_front_nginx.serviced_ops
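The cpu_limit chart compares usage against the cgroup's CPU quota. A sketch of how that limit can be derived, assuming cgroup v1 paths (cgroup v2 exposes the same pair of numbers in a single cpu.max file); the cgroup path below is a hypothetical example:

    from pathlib import Path

    def cpu_limit_cores(cgroup):
        base = Path("/sys/fs/cgroup/cpu") / cgroup
        quota = int((base / "cpu.cfs_quota_us").read_text())
        period = int((base / "cpu.cfs_period_us").read_text())
        if quota <= 0:
            return None  # -1 means no limit is configured
        return quota / period  # e.g. 200000 / 100000 -> 2.0 cores

    print(cpu_limit_cores("docker/abc123"))  # hypothetical container cgroup path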
--------------------------------------------------------------------------------
NETDATA, QBIT READ, QBIT WRITE, QBIT WRITE FAILED
These containers expose the same set of charts as FRONT NGINX above, with identical descriptions. The chart IDs follow the same pattern, with the container name in place of front_nginx:
CPU: cgroup_<name>.cpu_limit, cgroup_<name>.cpu
MEM: cgroup_<name>.mem_utilization, cgroup_<name>.mem_usage_limit, cgroup_<name>.mem_usage, cgroup_<name>.mem, cgroup_<name>.writeback, cgroup_<name>.pgfaults
DISK: cgroup_<name>.io, cgroup_<name>.serviced_ops
where <name> is one of netdata, qbit_read, qbit_write, qbit_write_failed.
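The mem_usage and writeback charts of these container sections map onto the cgroup memory controller files. A companion sketch, again assuming cgroup v1 (cgroup v2 uses memory.current and a differently keyed memory.stat):

    from pathlib import Path

    def mem_stats(cgroup):
        base = Path("/sys/fs/cgroup/memory") / cgroup
        usage = int((base / "memory.usage_in_bytes").read_text())
        stats = {}
        for line in (base / "memory.stat").read_text().splitlines():
            key, value = line.split()
            stats[key] = int(value)
        return {"usage_bytes": usage,
                "dirty": stats.get("dirty", 0),          # waiting to be written
                "writeback": stats.get("writeback", 0)}  # actively being written

    print(mem_stats("docker/abc123"))  # hypothetical container cgroup path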
--------------------------------------------------------------------------------
SENSORS
Readings of the configured system sensors.
TEMPERATURE
sensors.coretemp-isa-0000_temperature
sensors.coretemp-isa-0001_temperature
POWER
sensors.power_meter-acpi-0_power
--------------------------------------------------------------------------------
NETDATA MONITORING
Performance metrics for the operation of netdata itself and its plugins.
NETDATA
netdata.net
netdata.server_cpu
netdata.uptime
netdata.clients
netdata.requests
The netdata API response time measures the time netdata needed to serve requests. This time includes everything, from the reception of the first byte of a request to the dispatch of the last byte of its reply, therefore it includes all network latencies involved (i.e. a client over a slow network will influence these metrics). netdata.response_time
netdata.compression_ratio
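The response time this chart measures can be exercised from the client side with a plain API query. A hedged example against the standard /api/v1/data endpoint; the host and chart are examples, and with format=json the points are assumed to arrive under result.data:

    import json
    import time
    import urllib.request

    BASE = "http://localhost:19999"

    def query(chart, after=-60):
        url = f"{BASE}/api/v1/data?chart={chart}&after={after}&format=json"
        start = time.monotonic()
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        elapsed_ms = (time.monotonic() - start) * 1000
        rows = payload["result"]["data"]
        print(f"served {len(rows)} points in {elapsed_ms:.1f} ms")
        return payload

    query("netdata.response_time")  # the last 60 seconds of the chart itself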
QUERIES
netdata.queries
netdata.db_points
DBENGINE
netdata.dbengine_compression_ratio
netdata.page_cache_hit_ratio
netdata.page_cache_stats
netdata.dbengine_long_term_page_stats
netdata.dbengine_io_throughput
netdata.dbengine_io_operations
netdata.dbengine_global_errors
netdata.dbengine_global_file_descriptors
netdata.dbengine_ram
CGROUPS
netdata.plugin_cgroups_cpu
PROC
netdata.plugin_proc_cpu
netdata.plugin_proc_modules
WEB
netdata.web_thread1_cpu
netdata.web_thread2_cpu
netdata.web_thread3_cpu
netdata.web_thread4_cpu
netdata.web_thread5_cpu
netdata.web_thread6_cpu
STATSD
netdata.plugin_statsd_charting_cpu
netdata.plugin_statsd_collector1_cpu
netdata.statsd_metrics
netdata.statsd_useful_metrics
netdata.statsd_events
netdata.statsd_reads
netdata.statsd_bytes
netdata.statsd_packets
netdata.tcp_connects
netdata.tcp_connected
netdata.private_charts
DISKSPACE
netdata.plugin_diskspace
netdata.plugin_diskspace_dt
TIMEX
netdata.plugin_timex
netdata.plugin_timex_dt
TC.HELPER
netdata.plugin_tc_cpu
netdata.plugin_tc_time
APPS.PLUGIN
netdata.apps_cpu
netdata.apps_sizes
netdata.apps_fix
netdata.apps_children_fix
PYTHON.D
netdata.runtime_sensors
--------------------------------------------------------------------------------
Every second, Netdata collects 4,791 metrics on 6bcef270ce26, presents them in 1,172 charts and monitors them with 210 alarms. netdata v1.31.0-381-nightly