We are new customers. We have been running Nagios LS for about a month now. At first, performance was great, but over time, performance has degraded to the point of being more or less unusable. Most page requests time out, others eventually respond a minute or two later.
We are licensed for and are using 2 instances. Both instances are VMs with 4 vCPUs, 8 GB RAM, and 1.5 TB virtual hard disks for log storage. Server side, I can see that memory use is not an issue, but occasionally a single CPU will jump to 100% utilization. I know this is not enough information, but we are new to the product and not sure where to start, so I just wanted to get this thread going and see what information you all need in order to assist.
Poor performance.
-
npolovenko
- Support Tech
- Posts: 3457
- Joined: Mon May 15, 2017 5:00 pm
Re: Poor performance.
Hello, @NSchoenbaechler. Let's start by increasing the memory_limit in the /etc/php.ini file. Please double the value you have there:
sed -i 's/^memory_limit.*/memory_limit = 1024M/g' /etc/php.ini
service httpd restart
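Before running that sed against the live /etc/php.ini, one safe way to check the expression is to dry-run it on a sample file first (a sketch assuming GNU sed, whose -i flag edits in place; the sample path and 512M starting value are hypothetical):

```shell
# Create a throwaway sample file that mimics a php.ini memory_limit line.
printf 'memory_limit = 512M\n' > /tmp/php.ini.sample

# Same substitution as above, applied to the sample instead of /etc/php.ini.
sed -i 's/^memory_limit.*/memory_limit = 1024M/g' /tmp/php.ini.sample

# Confirm the line was rewritten as expected.
grep '^memory_limit' /tmp/php.ini.sample
# prints: memory_limit = 1024M
```

If the grep shows the new value, the same sed can be run against the real file, followed by the httpd restart.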
Please upload your system profile here. To download the system profile go to the Admin menu, then System Status and click on Download System Profile.
Thank you
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
-
NSchoenbaechler
- Posts: 15
- Joined: Fri Feb 02, 2018 10:10 am
Re: Poor performance.
I upped the php.ini values as you suggested, but did not see any noticeable increase in performance. In fact, it was so bad that I really struggled to get this system profile downloaded. I was finally able to get it with one of my instances shut down - hopefully that does not affect what is included in the system profile. See attached.
-
scottwilkerson
- DevOps Engineer
- Posts: 19396
- Joined: Tue Nov 15, 2011 3:11 pm
- Location: Nagios Enterprises
- Contact:
Re: Poor performance.
According to your profile, you only have 1 node active in the cluster, which is likely part of the problem.
I would restart elasticsearch on each of the instances:

service elasticsearch restart

You should be able to see both instances in Admin -> Manage Instances in a few minutes.

Additionally, I am going to mention that while you can see there is free RAM on the system, Log Server by design can only allocate about 60% of RAM to the index process. Having so little memory and over 500 GB of indexes open and searchable is going to make performance slow.

I would highly recommend upgrading the RAM on each of your instances to 32 or 64 GB.
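To see why 8 GB is tight, here is a rough back-of-the-envelope sizing sketch. It assumes the ~60% figure mentioned above and the common Elasticsearch guidance to cap the JVM heap around 31 GB so compressed object pointers stay enabled; the function name and numbers are illustrative, not part of any Log Server tooling:

```shell
# heap_gb: rough Elasticsearch heap estimate in GB for a given amount of RAM.
# Takes ~60% of total RAM (the fraction cited above), capped at 31 GB so the
# JVM keeps compressed object pointers enabled.
heap_gb() {
  h=$(( $1 * 60 / 100 ))
  [ "$h" -gt 31 ] && h=31
  echo "$h"
}

# Compare the current 8 GB instances against the recommended 32 or 64 GB.
for ram in 8 32 64; do
  echo "${ram} GB RAM -> ~$(heap_gb "$ram") GB heap"
done
```

At 8 GB of RAM the index process gets only about 4 GB of heap to serve 500+ GB of open indexes, whereas 32 or 64 GB leaves a far more comfortable margin.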
-
NSchoenbaechler
- Posts: 15
- Joined: Fri Feb 02, 2018 10:10 am
Re: Poor performance.
Increasing memory helped dramatically. Thanks for your assistance.