Nagios Load very high

This support forum board is for support questions relating to Nagios XI, our flagship commercial network monitoring solution.
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Nagios Load very high

Post by lgaddam »

Team,

We faced an issue on our Nagios primary system with a very high load.
We're not sure what caused it. We took a screenshot of the top command output at the time; it is included at the bottom.

Both our primary and secondary servers are physical.
We are currently monitoring 3663 hosts and 14678 services.

Attached file contains:
load average screenshot, system profile, "ps -ef|grep Nagios" output, mysqld.log

Kindly let us know what might have caused the issue, and whether everything is fine in our setup.

-----------
top - 15:00:38 up 111 days, 21:22, 1 user, load average: 225.08, 433.34, 328.94
Tasks: 732 total, 24 running, 673 sleeping, 0 stopped, 35 zombie
Cpu(s): 49.6%us, 29.8%sy, 0.0%ni, 20.5%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 65953056k total, 55149932k used, 10803124k free, 1861404k buffers
Swap: 33554424k total, 65852k used, 33488572k free, 45598156k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5701 mysql 23 0 570m 83m 4668 S 119.9 0.1 16971:43 mysqld
14428 nagios 25 0 10180 1348 672 R 99.9 0.0 0:42.71 nagios
14431 nagios 25 0 10184 1348 672 R 99.9 0.0 2:10.12 nagios
14403 nagios 25 0 10192 1336 696 R 99.2 0.0 2:11.50 nagios
14452 nagios 25 0 10180 1308 672 R 98.2 0.0 2:02.73 nagios
14449 nagios 25 0 10168 1352 696 R 97.2 0.0 1:37.45 nagios
26744 apache 20 0 779m 438m 4024 S 97.2 0.7 0:53.15 httpd
14413 nagios 25 0 10224 1332 672 R 96.9 0.0 2:05.33 nagios
14419 nagios 25 0 10212 1348 672 R 96.3 0.0 0:28.79 nagios
---------------

top - 15:12:30 up 111 days, 21:34, 3 users, load average: 106.61, 101.41, 181.74
Tasks: 1466 total, 477 running, 845 sleeping, 0 stopped, 144 zombie
Cpu(s): 62.5%us, 9.1%sy, 0.0%ni, 28.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 65953056k total, 59548628k used, 6404428k free, 1872404k buffers
Swap: 33554424k total, 65848k used, 33488576k free, 45817000k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
5701 mysql 15 0 570m 83m 4668 S 191.8 0.1 16990:15 mysqld
16315 apache 15 0 911m 568m 5036 S 63.8 0.9 7:09.76 httpd
13610 apache 16 0 475m 135m 4504 S 60.9 0.2 0:51.48 httpd
7952 apache 15 0 445m 105m 4996 S 59.9 0.2 6:53.76 httpd
27294 apache 17 0 944m 598m 5144 S 58.9 0.9 10:39.81 httpd
1125 apache 16 0 398m 57m 4856 S 52.0 0.1 2:16.10 httpd
23301 apache 16 0 393m 52m 3788 S 51.4 0.1 0:30.66 httpd
24886 apache 15 0 439m 99m 4916 S 49.4 0.2 4:45.83 httpd
24547 apache 15 0 911m 568m 4952 S 46.5 0.9 5:01.02 httpd
1126 apache 16 0 447m 107m 4824 S 42.2 0.2 2:07.66 httpd
3472 apache 16 0 452m 113m 4972 R 40.6 0.2 4:45.21 httpd
22660 apache 16 0 450m 111m 4668 S 40.3 0.2 4:24.32 httpd
22705 apache 15 0 986m 645m 4892 S 38.0 1.0 5:05.08 httpd
30377 apache 15 0 463m 123m 4824 S 36.3 0.2 2:25.59 httpd
-------------------
jdunitz
Posts: 235
Joined: Wed Feb 05, 2020 2:50 pm

Re: Nagios Load very high

Post by jdunitz »

Looking through your profile info that you attached, it appears that you've got some database corruption that couldn't repair itself.
Do a "grep -B1 -A4 repair mysqllog.txt", and you'll see what I'm talking about.

This is certainly something that people have run into before, and we even have a document that explains how to fix it:
https://assets.nagios.com/downloads/nag ... tabase.pdf

Have a look at that document, and do what it says, and things should start working better.

Additionally, you can look at the sizes of the tables, which can be helpful to see:

echo "SELECT table_name AS 'Table', round(((data_length + index_length) / 1024 / 1024), 2) 'Size in MB' FROM information_schema.TABLES WHERE table_schema IN ('nagios', 'nagiosql');" | mysql -uroot -pnagiosxi --table

Feel free to post the results of that query here if you'd like us to comment on it.

Hope that helps!

--Jeffrey
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.

Be sure to check out our Knowledgebase for helpful articles and solutions!
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

Please find outputs below.
Do a "grep -B1 -A4 repair mysqllog.txt", and you'll see what I'm talking about.
I ran it like this: grep -B1 -A4 repair mysqld.log >> /tmp/dbrepair.txt
And the output is in the attachment.

Additionally, you can look at the sizes of the tables, which can be helpful to see:
echo "SELECT table_name AS 'Table', round(((data_length + index_length) / 1024 / 1024), 2) 'Size in MB' FROM information_schema.TABLES WHERE table_schema IN ('nagios', 'nagiosql');" | mysql -uroot -pnagiosxi --table
[root@nagiosp01 tmp]# echo "SELECT table_name AS 'Table', round(((data_length + index_length) / 1024 / 1024), 2) 'Size in MB' FROM information_schema.TABLES WHERE table_schema IN ('nagios', 'nagiosql');" | mysql -uroot -pnagiosxi --table
+--------------------------------------------+------------+
| Table | Size in MB |
+--------------------------------------------+------------+
| nagios_acknowledgements | 0.03 |
| nagios_commands | 0.02 |
| nagios_commenthistory | 10.62 |
| nagios_comments | 0.00 |
| nagios_configfiles | 0.00 |
| nagios_configfilevariables | 0.01 |
| nagios_conninfo | 1.00 |
| nagios_contact_addresses | 0.00 |
| nagios_contact_notificationcommands | 0.05 |
| nagios_contactgroup_members | 0.01 |
| nagios_contactgroups | 0.00 |
| nagios_contactnotificationmethods | 77.39 |
| nagios_contactnotifications | 81.33 |
| nagios_contacts | 0.01 |
| nagios_contactstatus | 0.01 |
| nagios_customvariables | 1.38 |
| nagios_customvariablestatus | 1.28 |
| nagios_dbversion | 0.00 |
| nagios_downtimehistory | 0.36 |
| nagios_eventhandlers | 0.03 |
| nagios_externalcommands | 0.00 |
| nagios_flappinghistory | 37.46 |
| nagios_host_contactgroups | 0.16 |
| nagios_host_contacts | 0.18 |
| nagios_host_parenthosts | 0.01 |
| nagios_hostchecks | 0.00 |
| nagios_hostdependencies | 0.00 |
| nagios_hostescalation_contactgroups | 0.00 |
| nagios_hostescalation_contacts | 0.00 |
| nagios_hostescalations | 0.00 |
| nagios_hostgroup_members | 0.32 |
| nagios_hostgroups | 0.02 |
| nagios_hosts | 0.94 |
| nagios_hoststatus | 1.81 |
| nagios_instances | 0.00 |
| nagios_logentries | 1125.86 |
| nagios_notifications | 443.79 |
| nagios_objects | 3.46 |
| nagios_processevents | 0.22 |
| nagios_programstatus | 0.00 |
| nagios_runtimevariables | 0.00 |
| nagios_scheduleddowntime | 0.00 |
| nagios_service_contactgroups | 0.42 |
| nagios_service_contacts | 1.05 |
| nagios_service_parentservices | 0.00 |
| nagios_servicechecks | 0.00 |
| nagios_servicedependencies | 0.00 |
| nagios_serviceescalation_contactgroups | 0.00 |
| nagios_serviceescalation_contacts | 0.00 |
| nagios_serviceescalations | 0.00 |
| nagios_servicegroup_members | 0.03 |
| nagios_servicegroups | 0.00 |
| nagios_services | 2.95 |
| nagios_servicestatus | 7.69 |
| nagios_statehistory | 2078.93 |
| nagios_systemcommands | 0.03 |
| nagios_timedeventqueue | 0.00 |
| nagios_timedevents | 0.00 |
| nagios_timeperiod_timeranges | 0.04 |
| nagios_timeperiods | 0.01 |
| tbl_command | 0.02 |
| tbl_contact | 0.02 |
| tbl_contactgroup | 0.01 |
| tbl_contacttemplate | 0.01 |
| tbl_domain | 0.01 |
| tbl_host | 0.72 |
| tbl_hostdependency | 0.00 |
| tbl_hostescalation | 0.00 |
| tbl_hostextinfo | 0.00 |
| tbl_hostgroup | 0.03 |
| tbl_hosttemplate | 0.01 |
| tbl_info | 0.13 |
| tbl_lnkContactToCommandHost | 0.00 |
| tbl_lnkContactToCommandService | 0.00 |
| tbl_lnkContactToContactgroup | 0.00 |
| tbl_lnkContactToContacttemplate | 0.01 |
| tbl_lnkContactToVariabledefinition | 0.00 |
| tbl_lnkContactgroupToContact | 0.00 |
| tbl_lnkContactgroupToContactgroup | 0.00 |
| tbl_lnkContacttemplateToCommandHost | 0.00 |
| tbl_lnkContacttemplateToCommandService | 0.00 |
| tbl_lnkContacttemplateToContactgroup | 0.00 |
| tbl_lnkContacttemplateToContacttemplate | 0.00 |
| tbl_lnkContacttemplateToVariabledefinition | 0.00 |
| tbl_lnkHostToContact | 0.10 |
| tbl_lnkHostToContactgroup | 0.10 |
| tbl_lnkHostToHost | 0.01 |
| tbl_lnkHostToHostgroup | 0.03 |
| tbl_lnkHostToHosttemplate | 0.11 |
| tbl_lnkHostToVariabledefinition | 0.08 |
| tbl_lnkHostdependencyToHost_DH | 0.00 |
| tbl_lnkHostdependencyToHost_H | 0.00 |
| tbl_lnkHostdependencyToHostgroup_DH | 0.00 |
| tbl_lnkHostdependencyToHostgroup_H | 0.00 |
| tbl_lnkHostescalationToContact | 0.00 |
| tbl_lnkHostescalationToContactgroup | 0.00 |
| tbl_lnkHostescalationToHost | 0.00 |
| tbl_lnkHostescalationToHostgroup | 0.00 |
| tbl_lnkHostgroupToHost | 0.11 |
| tbl_lnkHostgroupToHostgroup | 0.01 |
| tbl_lnkHosttemplateToContact | 0.00 |
| tbl_lnkHosttemplateToContactgroup | 0.00 |
| tbl_lnkHosttemplateToHost | 0.00 |
| tbl_lnkHosttemplateToHostgroup | 0.00 |
| tbl_lnkHosttemplateToHosttemplate | 0.00 |
| tbl_lnkHosttemplateToVariabledefinition | 0.00 |
| tbl_lnkServiceToContact | 0.50 |
| tbl_lnkServiceToContactgroup | 0.25 |
| tbl_lnkServiceToHost | 0.35 |
| tbl_lnkServiceToHostgroup | 0.00 |
| tbl_lnkServiceToServicegroup | 0.00 |
| tbl_lnkServiceToServicetemplate | 0.36 |
| tbl_lnkServiceToVariabledefinition | 0.29 |
| tbl_lnkServicedependencyToHost_DH | 0.00 |
| tbl_lnkServicedependencyToHost_H | 0.00 |
| tbl_lnkServicedependencyToHostgroup_DH | 0.00 |
| tbl_lnkServicedependencyToHostgroup_H | 0.00 |
| tbl_lnkServicedependencyToService_DS | 0.00 |
| tbl_lnkServicedependencyToService_S | 0.00 |
| tbl_lnkServiceescalationToContact | 0.00 |
| tbl_lnkServiceescalationToContactgroup | 0.00 |
| tbl_lnkServiceescalationToHost | 0.00 |
| tbl_lnkServiceescalationToHostgroup | 0.00 |
| tbl_lnkServiceescalationToService | 0.00 |
| tbl_lnkServicegroupToService | 0.03 |
| tbl_lnkServicegroupToServicegroup | 0.00 |
| tbl_lnkServicetemplateToContact | 0.00 |
| tbl_lnkServicetemplateToContactgroup | 0.00 |
| tbl_lnkServicetemplateToHost | 0.00 |
| tbl_lnkServicetemplateToHostgroup | 0.00 |
| tbl_lnkServicetemplateToServicegroup | 0.00 |
| tbl_lnkServicetemplateToServicetemplate | 0.00 |
| tbl_lnkServicetemplateToVariabledefinition | 0.00 |
| tbl_lnkTimeperiodToTimeperiod | 0.00 |
| tbl_logbook | 0.00 |
| tbl_mainmenu | 0.00 |
| tbl_service | 2.24 |
| tbl_servicedependency | 0.00 |
| tbl_serviceescalation | 0.00 |
| tbl_serviceextinfo | 0.00 |
| tbl_servicegroup | 0.01 |
| tbl_servicetemplate | 0.01 |
| tbl_session | 0.00 |
| tbl_session_locks | 0.00 |
| tbl_settings | 0.00 |
| tbl_submenu | 0.00 |
| tbl_timedefinition | 0.04 |
| tbl_timeperiod | 0.02 |
| tbl_user | 0.01 |
| tbl_variabledefinition | 1.05 |
+--------------------------------------------+------------+

My actions when the load-average issue occurred on the Nagios system:
1. Checked the top command; the load average was visibly high.
2. Suddenly, a lot of node-down alerts were generated.
3. Immediately restarted the Nagios services; that did not bring the load down.
4. The mysqld, httpd, and nagios processes were constantly consuming high CPU, with no fluctuation.
5. So I ran repairdatabases.sh and waited for 15 minutes.
6. The load slowly came back down to normal.

Now, we would like to know what caused the sudden load increase at the particular time shown in the graph I provided earlier.
What do we need to be careful about going forward?
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

Missed the attachment for the earlier post.
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

Hi,

The issue happened again, and I ran the DB repair script one more time. The load is now slowly coming down.
Why does this keep repeating? Could you please help us? We have a lot of nodes in monitoring, and because of this, thousands of node-down alerts are being generated by Nagios. How can we fix this permanently?

Is it the case that when the Nagios server load is very high, Nagios generates false node-down alerts?

The latest system profile is attached.

Below is the log output from mysqld.log:

[root@glnagiosp01 ~]# tail -100 /var/log/mysqld.log|grep -i repair

[root@glnagiosp01 ~]# tail -100 /var/log/mysqld.log

200213 15:23:23 mysqld started
200213 15:23:24 InnoDB: Started; log sequence number 0 56891
200213 15:23:24 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
200213 15:23:25 [Note] /usr/libexec/mysqld: Normal shutdown

200213 15:23:25 InnoDB: Starting shutdown...
200213 15:23:26 InnoDB: Shutdown completed; log sequence number 0 56891
200213 15:23:26 [Note] /usr/libexec/mysqld: Shutdown complete

200213 15:23:26 mysqld ended

200213 15:23:27 mysqld started
200213 15:23:27 InnoDB: Started; log sequence number 0 56891
200213 15:23:27 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
200214 8:53:13 [Note] /usr/libexec/mysqld: Normal shutdown

200214 8:53:15 InnoDB: Starting shutdown...
200214 8:53:16 InnoDB: Shutdown completed; log sequence number 0 56891
200214 8:53:16 [Note] /usr/libexec/mysqld: Shutdown complete
200214 08:54:06 mysqld ended

200214 08:54:06 mysqld started
200214 8:54:06 InnoDB: Started; log sequence number 0 56891
200214 8:54:06 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
200214 8:54:08 [Note] /usr/libexec/mysqld: Normal shutdown

200214 8:54:10 InnoDB: Starting shutdown...
200214 8:54:11 InnoDB: Shutdown completed; log sequence number 0 56891
200214 8:54:11 [Note] /usr/libexec/mysqld: Shutdown complete

200214 08:54:11 mysqld ended

200214 08:57:26 mysqld started
200214 8:57:26 InnoDB: Started; log sequence number 0 56891
200214 8:57:26 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
200214 8:57:28 [Note] /usr/libexec/mysqld: Normal shutdown

200214 8:57:28 InnoDB: Starting shutdown...
200214 8:57:29 InnoDB: Shutdown completed; log sequence number 0 56891
200214 8:57:29 [Note] /usr/libexec/mysqld: Shutdown complete

200214 08:57:29 mysqld ended

200214 08:57:30 mysqld started
200214 8:57:30 InnoDB: Started; log sequence number 0 56891
200214 8:57:30 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.0.95' socket: '/var/lib/mysql/mysql.sock' port: 3306 Source distribution
[root@glnagiosp01 ~]#

Moderator's Note: The profile has been shared with the support team but has been removed from the public forum.
jdunitz
Posts: 235
Joined: Wed Feb 05, 2020 2:50 pm

Re: Nagios Load very high

Post by jdunitz »

I'm sorry to hear that this is still a problem for you.

There are a couple more things to look at that may be contributing to your DB corruption issue.

One is that you might be running out of available DB connections. Have a look at this document, and follow its recommendations, which will tell you how to determine if this could be your issue and make some adjustments if that is the case:

https://support.nagios.com/kb/article.php?id=513

After that, do another DB repair and let us know if that works for you.

Also, I noticed that your /var filesystem is rather full. Depending on how your system is set up, the DB repair could use either /tmp or /var/tmp for temp tables during the repair process. If /var were to fill up completely when that's going on, you'd have problems. So, see if you can clear out some logs and free up some space.

You may find these commands helpful in sorting out which files and directories are taking the most space:

# find /var/log -size +100000 -ls
# du /var/log | sort -n | tail

Exercise judgement before deleting files, of course, but if you find large log files that have already been rotated, those are good candidates for cleanup.
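To make that concrete, here's a hedged sketch of the cleanup step. It runs against a scratch directory so it's safe to try; on the real server the target would be /var/log/httpd, and the file names are illustrative:

```shell
# Demonstration on a scratch directory; on the real server the target
# would be /var/log/httpd. File names here are illustrative.
logdir=$(mktemp -d)
touch "$logdir/access_log" "$logdir/access_log.1" "$logdir/access_log.2"

# List the rotated access logs (already closed by logrotate) before
# touching anything -- the pattern skips the live access_log:
find "$logdir" -name 'access_log.[0-9]*' -ls

# Once satisfied with the list, truncate rather than delete, in case
# httpd still holds any of the files open:
for f in "$logdir"/access_log.[0-9]*; do : > "$f"; done
```

Truncating (as you'd also do with `> access_log.3`) frees the space immediately even if a process still has the file descriptor open, which plain `rm` would not.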
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

Yeah, I saw that /var is using a lot of space.

Below is the output showing the biggest files; they are httpd access logs.
I'm not sure why they are taking up so much space.

[root@glnagiosp01 httpd]# du -sh /var/log/*|sort -nr|grep G
7.2G /var/log/httpd
1.2G /var/log/sa
[root@nagiosp01 httpd]# df -h /var
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-var 12G 9.2G 1.6G 85% /var
[root@nagiosp01 httpd]#

[root@nagiosp01 httpd]# du -sh /var/log/httpd/*|sort -nr|grep G
1.8G /var/log/httpd/access_log.3
1.8G /var/log/httpd/access_log.1
1.7G /var/log/httpd/access_log.4
1.7G /var/log/httpd/access_log.2


I have truncated access_log.3 & access_log.4. Now the usage is down to:

[root@nagiosp01 httpd]# df -h /var
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-var 12G 6.1G 5.0G 55% /var
[root@nagiosp01 httpd]#

I don't think this is the problem, because the DB repair script succeeded when I ran it. If this space issue were the problem, the repair script should have failed both times I ran it; /var was at 85% usage at the time.

I have gone through the article you provided, and I'm not seeing those errors. So I can't go ahead and make DB config changes on a production machine just on the basis of the article.

Please analyze our Nagios system first and confirm that I can go ahead with the changes; then I will make them.
Let me know what logs you require for the analysis, and I will provide them.
jdunitz
Posts: 235
Joined: Wed Feb 05, 2020 2:50 pm

Re: Nagios Load very high

Post by jdunitz »

Here are a few suggestions for more things that can be contributing to your problems:

Firstly, we noticed that you're running Gnome and X on your XI server. That's going to chew up some resources that you need to go toward Nagios. So, disabling X would be a good idea.
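As a sketch of what that looks like (assuming a CentOS 5-era system, which the MySQL 5.0.95 in your log suggests), disabling X means changing the default runlevel in /etc/inittab; on newer systemd-based systems the equivalent is `systemctl set-default multi-user.target`. The demo below edits a scratch copy rather than the real file:

```shell
# Demonstration on a scratch copy; on the real server you would edit
# /etc/inittab itself, then run `init 3` (or reboot) to leave X.
inittab=$(mktemp)
echo 'id:5:initdefault:' > "$inittab"   # stand-in for the real line

# Change the default runlevel from 5 (graphical) to 3 (multi-user, no X):
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' "$inittab"

cat "$inittab"   # now reads: id:3:initdefault:
```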

Secondly, you appear to be having some Postgres issues, which may be helped by doing a Postgres vacuum to clean up any junk (e.g., dead tuples) that has accumulated in your DB. Your system has both postgres AND mysql running, which was typical of older Nagios installations.

https://www.postgresql.org/docs/9.1/sql-vacuum.html

There are some nice instructions in this article:

https://support.nagios.com/kb/article/n ... ce-25.html

Go down to the heading "The postgresql service is not running or the database is not accepting commands", which is the part of the article that is relevant to your issue.

Also, please let us have a look at your /var/lib/pgsql/data/pg_log/postgresql-Fri.log, which may help us see more details about what happened last Friday with your postgres issues.

And one more thing for database diagnostics:
Identifying Connections

By default, the database server allows 151 connections. You can determine the current maximum allowed connections using this command:

mysql -uroot -pnagiosxi -e "show variables like 'max_connections';"


The output will be something like this:

+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 151 |
+-----------------+-------+


How do you know if you need to increase this value? We can run another query that shows us what the peak number of connections has been since the database server daemon was started:

mysql -uroot -pnagiosxi -e "show global status like 'Max_used_connections';"


The output will be something like this:

+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| Max_used_connections | 65 |
+----------------------+-------+

If the number returned is the same as (or close to) max_connections, then you need to increase the allowed number of max_connections.
So please have a look at how many connections you're using; I suspect you're over the limit, which is why you're having issues.
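Here's a small sketch of that comparison, stubbed with the sample values above; on a live system you'd fill the two variables from the `-N -B` (bare, batch) variants of the mysql commands shown. The 90% threshold is an illustrative rule of thumb, not an official cutoff:

```shell
# Stubbed with the sample output above; on a live system, e.g.:
#   max_conn=$(mysql -uroot -pnagiosxi -N -B \
#       -e "show variables like 'max_connections';" | awk '{print $2}')
max_conn=151
used=65

# Flag when the peak usage is within ~10% of the ceiling:
if [ "$used" -ge $((max_conn * 9 / 10)) ]; then
    echo "consider raising max_connections"
else
    echo "headroom OK"   # prints this for the sample values
fi
```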



Thirdly, because you seem to have a somewhat older system and a fairly large number of hosts and services that you're monitoring, I'd suggest increasing your max connection count in BOTH mysql and postgres, which will help your machine keep up with all the incoming data.

For postgres, you'll need to tweak max_connections and shared_buffers in the postgres config file, and the SHMMAX kernel parameter.
https://stackoverflow.com/questions/307 ... n-postgres
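As a hedged sketch of those three knobs (the values below are illustrative placeholders, not tuned recommendations for this system):

```
# postgresql.conf (location varies; often /var/lib/pgsql/data/postgresql.conf)
max_connections = 300
shared_buffers = 32MB      # raise along with max_connections

# /etc/sysctl.conf -- raise the kernel shared-memory ceiling so postgres
# can allocate the larger shared_buffers (value in bytes, illustrative):
kernel.shmmax = 268435456

# Apply the kernel setting with `sysctl -p`, then restart postgresql.
```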

For mysql, you'll change a couple of settings in your my.cnf:

max_connections = 1000
open_files_limit = 4096

...and restart your databases.


Finally, from what we can tell from what you sent, it looks like a number of services were restarted, and it's possible that they weren't started in the correct order. If possible, I might suggest rebooting your machine after tweaking your connection settings, to make sure that everything starts up cleanly.

I hope you find this helpful!
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

Please find the output below, as requested.


[root@nagiosp01 ~]# mysql -uroot -pnagiosxi -e "show variables like 'max_connections';"
+-----------------+-------+
| Variable_name | Value |
+-----------------+-------+
| max_connections | 100 |
+-----------------+-------+
[root@nagiosp01 ~]# mysql -uroot -pnagiosxi -e "show global status like 'Max_used_connections';"
+----------------------+-------+
| Variable_name | Value |
+----------------------+-------+
| Max_used_connections | 101 |
+----------------------+-------+
[root@nagiosp01 ~]#
lgaddam
Posts: 116
Joined: Wed Aug 28, 2019 1:01 am

Re: Nagios Load very high

Post by lgaddam »

I missed this output earlier as well.

[root@nagiosp01 ~]# sestatus
SELinux status: disabled
[root@nagiosp01 ~]#
Locked