Bug with NagiosXI Performance Grapher
Posted: Wed May 01, 2024 3:18 pm
I am currently experiencing what I believe to be a bug in the NagiosXI performance grapher. I have a service running to capture what we have defined as 'non-standard' disk mounts ('standard' mounts = /, /apps, /boot, /home, /opt, /tmp, and /var) with the following plugin:

Code:
./check_snmp_storage_wizard.pl -H $HOSTADDRESS$ -G -m ^/tmp\|/boot\|/apps\|/dev/shm\|/home\|/run\|/var\|/opt\|/sys/fs/cgroup\|memory\|Memory\|Swap\|/mongoshare/.snapshot\|/emedia/.snapshot -e -2 -C <community_string> -w 96 -c 97 -o 20000 -f -S 0

This acts as a catch-all for db servers and unique application mounts.

It appears that performance graphs are not coming in correctly. On one host, all of the mount points that are discovered show in the graph as 0 capacity (see attached photo dbdev3). Here are the check results for dbdev3, which correctly display perfdata:

Code:
[root@nagiossrv1 libexec]# ./check_snmp_storage_wizard.pl -H dbdev3 -G -m ^/tmp\|/boot\|/apps\|/dev/shm\|/home\|/run\|/var\|/opt\|/sys/fs/cgroup\|memory\|Memory\|Swap\|/mongoshare/.snapshot\|/emedia/.snapshot -e -2 -C <community_string> -w 96 -c 97 -o 20000 -f -S 0
All selected storages (<96%) : OK | '/oradata/db07d'=17GB;31;31;0;32 '/oradata/db05d'=548GB;864;873;0;900 '/oradata/ppmdbt'=333GB;384;388;0;400 '/ck'=0GB;10;10;0;10 '/oradata/dmii'=15GB;24;24;0;25 '/oradata/mbtu'=156GB;259;262;0;270 '/oradata/db02xd'=187GB;287;290;0;299 '/oradata/db07dg'=98GB;192;194;0;200 '/oradata/tmii'=16GB;19;19;0;20 '/'=9GB;13;14;0;14 '/oradata/mbtdg'=201GB;355;359;0;370 '/oradata/ppmdbd'=337GB;383;387;0;399 '/oradata/db02dg'=635GB;671;678;0;699 '/oradata/db04d'=126GB;240;242;0;250 '/oradata/db04i'=97GB;192;194;0;200 '/orafra'=90GB;96;97;0;100 '/oradata/db07i'=13GB;31;31;0;32 '/orawork'=15GB;96;97;0;100 '/oradata/db05i'=240GB;360;364;0;375 '/orashare'=166804GB;1474560;1489920;0;1536000 '/ora'=113GB;259;262;0;270 '/oradata/db02d'=164GB;336;340;0;350 '/oradata/mbtd'=156GB;288;291;0;300 '/dev/vx'=0GB;0;0;0;0 '/oradata/ppmdba'=323GB;384;388;0;400 '/oradata/db04xd'=100GB;182;184;0;190 '/oradata/mbti'=156GB;240;242;0;250 '/oradata/ppmdbi'=318GB;384;388;0;400 '/oradata/db04dg'=229GB;288;291;0;300 '/oraarchive'=4GB;1056;1067;0;1100 '/oradata/db02i'=200GB;230;233;0;240

On another host (nagiossrv1), the performance data is not consistent, likely because of the order in which the mounts appear in the check results. Each time the check results come in, the mounts are in a different order, and it looks like the performance grapher prioritizes position over mount name.

Please let me know if this is intended behavior or if there is a fix planned.
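In case it helps anyone hitting the same thing: a possible stopgap, assuming the grapher really does key on perfdata position rather than label, would be to wrap the check in a small script that sorts the perfdata tokens by label so their order is stable between checks. A minimal sketch (the function name and the sample line are just illustrative, not part of the plugin):

```python
import re

def sort_perfdata(output: str) -> str:
    """Re-emit a Nagios plugin output line with its perfdata section
    sorted alphabetically by label, so tools that key on position see
    a stable ordering between checks."""
    if "|" not in output:
        return output
    text, perf = output.split("|", 1)
    # Nagios perfdata tokens look like 'label'=value;warn;crit;min;max
    tokens = re.findall(r"'[^']*'=\S+", perf)
    return text.rstrip() + " | " + " ".join(sorted(tokens))

# Example with two mounts from the check output above:
line = "All selected storages (<96%) : OK | '/orafra'=90GB;96;97;0;100 '/ck'=0GB;10;10;0;10"
print(sort_perfdata(line))
# '/ck' now precedes '/orafra' regardless of the order SNMP returned them
```

A wrapper like this would call the real plugin, pass its output through the sort, and exit with the plugin's original return code; it only changes presentation, not the check result.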