Logstash keeps crashing randomly
Hi,
Logstash keeps crashing roughly every two weeks, sometimes more, sometimes less often.
We have already increased "LS_HEAP_SIZE" to a large value. The process simply disappears and nothing is logged.
Any idea?
Thank you,
Saber
Re: Logstash keeps crashing randomly
What version of Nagios Log Server is this? Also, what are LS_HEAP_SIZE and LS_OPEN_FILES currently set to? How much memory does your system have?
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
Be sure to check out our Knowledgebase for helpful articles and solutions!
Re: Logstash keeps crashing randomly
It's always related to "LS_HEAP_SIZE". We increased it to 64GB, but we are still seeing the following.
There must be a leak somewhere..
Code:
Oct 24 17:06:15 logstash: Error: Your application used more memory than the safety cap of 65536M.
Oct 24 17:06:15 logstash: Specify -J-Xmx####m to increase it (#### = cap size in MB).
Oct 24 17:06:15 logstash: Specify -w for full OutOfMemoryError stack trace
scottwilkerson
- DevOps Engineer
Re: Logstash keeps crashing randomly
What is the output of this command?
Code:
grep HEAP /etc/sysconfig/logstash
Re: Logstash keeps crashing randomly
Code:
#LS_HEAP_SIZE="256m"
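For reference, a commented-out line like the one above typically means Logstash falls back to its built-in default heap. A sketch of what an explicit setting might look like, with the 2048m value taken from the advice given in this thread rather than from any official default:

```shell
# /etc/sysconfig/logstash (sketch; uncomment and set an explicit heap)
LS_HEAP_SIZE="2048m"
```

After editing, restart the service (e.g. systemctl restart logstash on CentOS 7) so the new heap size takes effect.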
Re: Logstash keeps crashing randomly
saber wrote:
It's increased under "/etc/init.d/logstash" to 65536m
Code:
#LS_HEAP_SIZE="256m"
That is definitely a problem: you have it set to a level equal to the whole system memory. It needs to be MUCH lower; I would think about 2048m would be a good value to set it to.
Elasticsearch is going to use 32GB, and you also need a good amount of free memory available for cached memory so that Elasticsearch can function properly.
Re: Logstash keeps crashing randomly
We have 256GB of RAM, of which 128GB is assigned to Elasticsearch (50%..) and 64GB to Logstash..
Re: Logstash keeps crashing randomly
64GB is the max we recommend for a system, since after 32GB (Elasticsearch takes half automatically) Java will see performance issues with memory addressing. I'd recommend setting the heap size manually by editing /etc/sysconfig/elasticsearch and commenting out this line:
Code:
#ES_HEAP_SIZE=$(expr $(free -m|awk '/^Mem:/{print $2}') / 2 )m
and replacing it with something like:
Code:
ES_HEAP_SIZE=3100m
I would also lower LS_HEAP_SIZE to something lower than 32GB.
Is this a CentOS/RHEL or Ubuntu/Debian install?
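The sizing rule above (half of system RAM, but capped below 32GB so the JVM keeps compressed object pointers) can be sketched as a small shell helper. The function name and the 31744 MB cap are illustrative assumptions, not part of Nagios Log Server:

```shell
#!/bin/sh
# heap_for_ram: given total system RAM in MB, print a suggested
# Elasticsearch heap in MB: half of RAM, capped at ~31GB so the JVM
# keeps compressed object pointers (the "32GB" limit discussed above).
heap_for_ram() {
    total_mb=$1
    half_mb=$((total_mb / 2))
    cap_mb=31744                  # ~31GB, safely under the 32GB threshold
    if [ "$half_mb" -gt "$cap_mb" ]; then
        echo "$cap_mb"
    else
        echo "$half_mb"
    fi
}

# On the 256GB machine from this thread, half of RAM would be 128GB,
# so the cap applies:
echo "ES_HEAP_SIZE=$(heap_for_ram 262144)m"   # prints ES_HEAP_SIZE=31744m
```

On a smaller box (say 8GB), the same helper would simply return half of RAM, matching the default `free -m` formula shown above.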
Re: Logstash keeps crashing randomly
Hi,
Total server memory is 256GB. If Elasticsearch takes 50%, that's 128GB.
We have 128GB remaining. We assigned 64GB to Logstash because of those errors, but it didn't help.
It's a CentOS 7 install. We use syslog over TLS.
I have attached our memory usage over 30 days. Clearly, there are no OOMs at all. It's always a Logstash failure for unknown reasons..
Re: Logstash keeps crashing randomly
128GB is four times what is recommended for Elasticsearch. Anything above 32GB can cause performance problems:
https://www.elastic.co/blog/a-heap-of-trouble
I would recommend lowering the memory of Logstash as well, for the same reason. Ordinarily it shouldn't take anywhere near as much memory as it is supposedly using. It may be a configuration issue - please PM me a profile from the machine after lowering the memory values for both services and restarting them.
A profile can be gathered under Admin > System > System Status > Download System Profile or from the command line with:
/usr/local/nagioslogserver/scripts/profile.sh
This will create /tmp/system-profile.tar.gz.
Note that this file can be very large and may not fit through a PM. This is usually due to the logs in the Logstash and/or Elasticsearch directories found in it. If it is too large, please open the profile, extract these directories/files and send them separately.
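Splitting the bulky log directories out of the profile can be scripted; a sketch under the assumption that the log directories inside the tarball have names starting with "logstash" or "elasticsearch" (the actual layout may differ):

```shell
#!/bin/sh
# split_profile: unpack a profile tarball and re-pack it without the bulky
# logstash/elasticsearch log directories, archiving those separately.
# Directory names inside the tarball are assumptions for illustration.
split_profile() {
    profile=$1
    work=$(mktemp -d)
    tar -xzf "$profile" -C "$work"

    # Move any logstash/elasticsearch directories into a side folder.
    mkdir -p "$work-logs"
    find "$work" -depth -type d \( -name 'logstash*' -o -name 'elasticsearch*' \) \
        -exec mv {} "$work-logs/" \;

    # Re-pack both pieces so they can be sent separately.
    tar -czf /tmp/system-profile-trimmed.tar.gz -C "$work" .
    tar -czf /tmp/system-profile-logs.tar.gz -C "$work-logs" .
}

# Usage (after running profile.sh as described above):
# split_profile /tmp/system-profile.tar.gz
```

This keeps the trimmed profile small enough to send through a PM while preserving the logs in a second archive.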