[INFO] ========== Starting Environment Checks ============
[INFO] My version is: verify_pnp_config-0.6.25-R.40
[INFO] Start Options: verify_pnp_config --mode bulk+npcd --config=/usr/local/nagios/etc/nagios.cfg --pnpcfg=/usr/local/pnp4nagios/etc/
[INFO] Reading /usr/local/nagios/etc/nagios.cfg
[OK ] Running product is 'nagios'
[OK ] object_cache_file is defined
[OK ] object_cache_file=/usr/local/nagios/var/objects.cache
[INFO] Reading /usr/local/nagios/var/objects.cache
[OK ] resource_file is defined
[OK ] resource_file=/usr/local/nagios/dell/resources/dell_resource.cfg
[INFO] Reading /usr/local/nagios/dell/resources/dell_resource.cfg
[INFO] Reading /usr/local/pnp4nagios/etc//process_perfdata.cfg
[INFO] Reading /usr/local/pnp4nagios/etc//pnp4nagios_release
[OK ] Found PNP4Nagios version "0.6.25"
[OK ] ./configure Options '--with-nagios-user=nagios' '--with-nagios-group=nagcmd'
[OK ] Effective User is 'nagios'
[OK ] User nagios exists with ID '1001'
[OK ] Effective group is 'nagios'
[OK ] Group nagios exists with ID '1001'
[INFO] ========== Checking Bulk Mode + NPCD Config ============
[OK ] process_performance_data is 1 compared with '/1/'
[OK ] service_perfdata_file is defined
[OK ] service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
[OK ] service_perfdata_file_template is defined
[OK ] service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
[OK ] PERFDATA template looks good
[OK ] service_perfdata_file_mode is defined
[OK ] service_perfdata_file_mode=a
[OK ] service_perfdata_file_processing_interval is defined
[OK ] service_perfdata_file_processing_interval=15
[OK ] service_perfdata_file_processing_command is defined
[OK ] service_perfdata_file_processing_command=process-service-perfdata-file
[OK ] host_perfdata_file is defined
[OK ] host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
service_perfdata_file_processing_command at verify_pnp_config line 462.
host_perfdata_file_processing_command at verify_pnp_config line 462.
[OK ] host_perfdata_file_template is defined
[OK ] host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
[OK ] PERFDATA template looks good
[OK ] host_perfdata_file_mode is defined
[OK ] host_perfdata_file_mode=a
[OK ] host_perfdata_file_processing_interval is defined
[OK ] host_perfdata_file_processing_interval=15
[OK ] host_perfdata_file_processing_command is defined
[OK ] host_perfdata_file_processing_command=process-host-perfdata-file
[INFO] Nagios config looks good so far
[INFO] ========== Checking config values ============
[OK ] npcd daemon is running
[OK ] /usr/local/pnp4nagios/etc/npcd.cfg is used by npcd and readable
[INFO] Reading /usr/local/pnp4nagios/etc/npcd.cfg
[OK ] perfdata_spool_dir is defined
[OK ] perfdata_spool_dir=/usr/local/pnp4nagios/var/spool
[OK ] -1 files found in /usr/local/pnp4nagios/var/spool
[OK ] Command process-service-perfdata-file is defined
[OK ] '/bin/mv /usr/local/pnp4nagios/var/service-perfdata /usr/local/pnp4nagios/var/spool/service-perfdata.$TIMET$'
[OK ] Command looks good
[OK ] Command process-host-perfdata-file is defined
[OK ] '/bin/mv /usr/local/pnp4nagios/var/host-perfdata /usr/local/pnp4nagios/var/spool/host-perfdata.$TIMET$'
[OK ] Command looks good
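For readers following along, the two mv commands above are the whole bulk+npcd handoff: Nagios appends perfdata lines to a flat file, and every 15 seconds (the processing interval) the processing command moves that file into the spool directory, where npcd picks it up. A minimal sketch of that handoff, demonstrated with temp paths rather than the real /usr/local/pnp4nagios/var locations:

```shell
# Sketch of the bulk+npcd handoff using temp paths
# (real paths: /usr/local/pnp4nagios/var/service-perfdata and .../var/spool):
VAR=$(mktemp -d)
mkdir "$VAR/spool"
echo "sample perfdata line" > "$VAR/service-perfdata"   # Nagios appends lines here
TIMET=$(date +%s)
# Every 15 s the processing command atomically moves the file into the spool dir:
/bin/mv "$VAR/service-perfdata" "$VAR/spool/service-perfdata.$TIMET"
ls "$VAR/spool"   # npcd watches this dir and feeds each file to process_perfdata.pl --bulk
rm -rf "$VAR"
```

Because mv within one filesystem is atomic, npcd never sees a half-written file.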
[OK ] Script /usr/local/pnp4nagios/libexec/process_perfdata.pl is executable
[INFO] ========== Starting global checks ============
[OK ] status_file is defined
[OK ] status_file=/usr/local/nagios/var/status.dat
[INFO] host_query =
[INFO] service_query =
[INFO] Reading /usr/local/nagios/var/status.dat
[INFO] ==== Starting rrdtool checks ====
[OK ] RRDTOOL is defined
[OK ] RRDTOOL=/usr/bin/rrdtool
[OK ] /usr/bin/rrdtool is executable
[OK ] RRDtool 1.4.8 Copyright 1997-2013 by Tobias Oetiker <[email protected]>
[OK ] USE_RRDs is defined
[OK ] USE_RRDs=1
[OK ] Perl RRDs modules are loadable
[INFO] ==== Starting directory checks ====
[OK ] RRDPATH is defined
[OK ] RRDPATH=/usr/local/pnp4nagios/var/perfdata
[OK ] Perfdata directory '/usr/local/pnp4nagios/var/perfdata' exists
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/_HOST_.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/_HOST_.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/RAM_Utilisation.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/RAM_Utilisation.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Event_Log_Application_Warnings.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Event_Log_Application_Warnings.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Event_Log_System_Warnings.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Event_Log_System_Warnings.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Page_File.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Page_File.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Network.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Network.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Disk_Space_-_All.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/Disk_Space_-_All.xml: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/CPU.rrd: group is nagcmd
[CRIT] /usr/local/pnp4nagios/var/perfdata/NDPRT/CPU.xml: group is nagcmd
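The [CRIT] lines above all say the same thing: the RRD/XML files carry group nagcmd (from the --with-nagios-group=nagcmd compile option shown earlier), while verify_pnp_config runs with effective group nagios and expects that group. One possible fix is to re-group the perfdata tree; this is a sketch demonstrated on a temp file, not a definitive remedy (on the real host the command would be `chgrp -R nagios /usr/local/pnp4nagios/var/perfdata`):

```shell
# Sketch: change a file's group to the effective group, as chgrp -R would do
# across /usr/local/pnp4nagios/var/perfdata (demonstrated on a temp file).
f=$(mktemp)
chgrp "$(id -gn)" "$f"    # on the real host: the 'nagios' group
stat -c '%G' "$f"         # prints the file's group, now the effective group
rm -f "$f"
```

The alternative is to leave the files as-is and accept the CRITs, since npcd itself writes with the group it was configured for.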
....
....
[WARN] 361 hosts/services are not providing performance data
[WARN] 'process_perf_data 1' is set for 362 hosts/services which are not providing performance data!
[OK ] 'process_perf_data 1' is set for 1072 of your hosts/services
[INFO] ==== System sizing ====
[OK ] 1071 hosts/service objects defined
[INFO] ==== Check statistics ====
[CRIT] Warning: 2, Critical: 1044
[CRIT] Checks finished...
I compiled pnp4nagios with the --with-nagios-group=nagcmd option.
systemctl status npcd.service
npcd.service - LSB: pnp4nagios NPCD Daemon Version 0.6.25
Loaded: loaded (/etc/rc.d/init.d/npcd)
Active: active (running) since Thu 2015-11-12 14:08:43 AEDT; 2s ago
Process: 9070 ExecStart=/etc/rc.d/init.d/npcd start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/npcd.service
├─9077 /usr/local/pnp4nagios/bin/npcd -d -f /usr/local/pnp4nagios/etc/npcd.cfg
└─9147 /usr/bin/perl /usr/local/pnp4nagios/libexec/process_perfdata.pl -n --bulk /usr/local/pnp4nagios/var/spool/host-perfdata.1447295002
When I browse to server-ip/pnp4nagios I am actually getting graphs generated. However, under services I'm only seeing Host Perfdata and PING, so not all of my services show up. Windows hosts, where I'm using wmic checks, all show up:
Host Perfdata CPU DFS Replication Disk Space - All Event Log Application War... Event Log System Warnings Network Page File RAM Utilisation
For example, I have a Cisco switch (NDC-SW-Admin) with 54 checks, but in pnp4nagios I'm only seeing the PING service and Host Perfdata, so 2 out of 54. How does pnp4nagios know which services to graph and which not to?
EDIT: Looks like the others can't really be graphed as they are text info only.
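For reference, PNP can only graph a service when its plugin output includes performance data after a pipe character, in the label=value[UOM];warn;crit;min;max format; text-only output has nothing to store in an RRD. A sketch of the two cases (sample values are made up):

```shell
# Graphable: plugin output carries perfdata after the pipe character,
# in label=value[UOM];warn;crit;min;max format:
echo "OK - load average: 0.10 | load1=0.10;5;10;0"
# Not graphable: text-only status output, nothing after a pipe:
echo "OK - port 1 is up (text info only)"
```

This is why PING graphs (check_ping emits rta and pl perfdata) while many switch checks do not.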
Pikmin wrote:I have already modified /usr/local/pnp4nagios/etc/npcd.cfg
I hope that's the right file for me as the one you specified doesn't exist on my system
Yeah, that's OK, different systems have different directories.
Pikmin wrote:EDIT: Looks like the other's can't really be graphed as they are text info only
Bang that's exactly it.
I did a talk on this at the Nagios World Conference, you can watch it here:
While it's Nagios XI based, the basics about performance data and plugins are the same, and the directories might be a bit different in your installation.
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
I do have one question. I saw the YouTube links late last night and didn't have time to watch them all; I saw about 25 minutes of the first one.
I'm trying to make sense of why MRTG and pnp4nagios are not really showing the same data. From the video, if I remember correctly, pnp4nagios averages data, but these still don't look right to me.
I am looking at the "in" data source. In MRTG we can see it's more or less a steady 12 MB/s from 7-11 this morning, but that same data in pnp4nagios (Datasource: in) doesn't show much at all; from 7-11 it's almost zero.
I also noticed in the previously attached screenshot that the pnp4nagios side shows the warning and critical thresholds, which are in bytes but don't seem to be translated correctly: Warning shows 11250000.000000 while, for example, KB/s Max shows 84.5176.
Looks like way too many zeroes in the Warning/Critical part?
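One possible reconciliation of those numbers, assuming the threshold is stored in bytes/s while the graph legend prints KB/s (an assumption; the plugin's documentation would confirm the units):

```shell
# Assumption: warning threshold 11250000.000000 is in bytes/s, while the
# legend's Max is in KB/s, so the two figures differ by a factor of 1024:
awk 'BEGIN { printf "%.1f KB/s\n", 11250000 / 1024 }'   # ~10986.3 KB/s
echo "$((11250000 * 8 / 1000000)) Mbit/s"               # 90 Mbit/s
```

90 Mbit/s would be a plausible 90% warning threshold on a 100 Mbit link, which is why the raw number looks like "too many zeroes" next to an 84.5 KB/s maximum.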
Don't be sorry, Nagios is great because there are so many ways of using it!
So basically what is happening here is that MRTG has the data closest to the device.
When it checks, it gets an octets value, calculates the difference from last time, and stores this in /var/www/mrtg/192.168.199.1_5.log.
Then your Nagios service queries this file, calculates the difference from last time, and the plugin returns this as performance data.
PNP then gets the performance data, and when it inserts it into its RRD file it calculates the difference from last time, so the data is averaged some more.
This might not be 100% correct, but it's close. The point I'm making is that your PNP graphs are the furthest from the source, so the repeated averaging is reducing the results.
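The delta step each stage performs can be sketched as follows (the sample counter values are made up): two SNMP ifInOctets readings taken five minutes apart yield only an average rate over that window, so each re-averaging stage further flattens short bursts.

```shell
# Sketch of one counter-to-rate delta: two ifInOctets samples 300 s apart
# (made-up values) give the average bytes/s over that interval; any burst
# shorter than the interval is smeared out, and repeating this at each
# stage (MRTG log -> plugin -> RRD) flattens peaks further.
awk 'BEGIN {
    t1 = 0;   c1 = 0          # first sample: time (s), counter (octets)
    t2 = 300; c2 = 3600000    # second sample, five minutes later
    printf "%.0f bytes/s\n", (c2 - c1) / (t2 - t1)   # 12000 bytes/s
}'
```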
But also, what is the command definition for check_local_mrtgtraf? And what is the specific plugin being used; where did you download it from? I'm guessing that the last argument, "10", is an option that has something to do with the interface speed, perhaps?