Re: Logstash Collector stops working

Posted: Fri May 06, 2016 1:48 pm
by eloyd
RAM, and proper tuning of the NLS Java heaps, is critical to the proper running of a large NLS cluster. If nothing else, if you can throw more memory at it, it will run better.

Re: Logstash Collector stops working

Posted: Fri May 06, 2016 1:51 pm
by hsmith
Somebody is watching the clock :)

Re: Logstash Collector stops working

Posted: Fri May 06, 2016 1:54 pm
by eloyd
Heheh. Trevor gets mad at me when I do that, but it's still fun. :)

Re: Logstash Collector stops working

Posted: Fri May 06, 2016 1:55 pm
by tmcdonald
*ahem* On-topic, please.

Re: Logstash Collector stops working

Posted: Fri May 06, 2016 3:02 pm
by dlukinski
hsmith wrote:Shouldn't be. How much RAM does your server have?

Code: Select all

[root@fikc-naglsprod01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          7994       7187        806          0        142       1100
-/+ buffers/cache:       5944       2049
Swap:          255         36        219
[root@fikc-naglsprod01 ~]#

Should we increase swap?
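For what it's worth, on older kernels whose `free` lacks an "available" column, the memory that is effectively free is the free column plus buffers and cached, since the kernel can reclaim those. A quick sketch, plugging in the literal values from the output above:

```shell
# Sketch: effectively-available memory = free + buffers + cached,
# using the literal values from the `free -m` output above
FREE=806; BUFFERS=142; CACHED=1100
echo "effectively available: $((FREE + BUFFERS + CACHED)) MB"
```

So roughly 2GB of the 8GB is reclaimable or idle, which is why the "-/+ buffers/cache" row is the one to read, not the raw "used" figure.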

Re: Logstash Collector stops working

Posted: Fri May 06, 2016 8:35 pm
by eloyd
If you have the ability to increase physical memory (I do not know if this is a virtual machine or a physical machine), that will help more than increasing swap. As a general rule, swapping to disk is 100 times slower than dealing with RAM, so you want to avoid it if possible.

By default, NLS allocates 50% of available system RAM to elasticsearch. You can verify this in /etc/sysconfig/elasticsearch. It MAY be worthwhile to increase the ES_HEAP_SIZE to something bigger than 50% of RAM. You can test this out by changing the last line in this code:

Code: Select all

# Heap Size (defaults to 256m min, 1g max)
# Nagios Log Server Default to 0.5 physical Memory
ES_HEAP_SIZE=$(expr $(free -m|awk '/^Mem:/{print $2}') / 2 )m
to

Code: Select all

ES_HEAP_SIZE=5120m
Which will allocate 5GB of RAM instead of ~4GB to elasticsearch's Java heap. This might work. You might be good to go as far as 6144m, which would be 6GB, but I wouldn't push it any farther than that on a system with 8GB of RAM. Increasing swap will likely not gain you anything on this system.
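The stock expr one-liner can also be adapted to allocate a different fraction of RAM rather than a hard-coded value. A sketch computing five-eighths (62.5%) of physical memory, where the 8192 here is just a stand-in for the `free -m` lookup in the stock script:

```shell
# Sketch: heap = 5/8 of physical RAM; TOTAL_MB stands in for
# $(free -m | awk '/^Mem:/{print $2}') from the stock script
TOTAL_MB=8192
HEAP_MB=$(expr $TOTAL_MB \* 5 / 8)
echo "ES_HEAP_SIZE=${HEAP_MB}m"
```

On an 8GB box this lands at 5120m, the same value as hard-coding it, but it keeps scaling if the machine is later given more RAM.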

A good article on elasticsearch JVM tuning can be found at https://www.elastic.co/guide/en/elastic ... izing.html

Re: Logstash Collector stops working

Posted: Mon May 09, 2016 9:32 am
by tmcdonald
Definitely agreed, especially on a system like NLS that likes a lot of RAM, swapping to disk is one of the worst things you can do for your performance.
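On that note, a common general mitigation (not specific to NLS, so treat this as a sketch) is to lower the kernel's eagerness to swap via vm.swappiness. This is a config fragment that needs root:

```shell
# Sketch: discourage swapping on a RAM-hungry host (requires root).
# A value of 1 is a common choice for Elasticsearch hosts; 0 can
# disable swapping outright on newer kernels, which is riskier.
sysctl -w vm.swappiness=1
echo 'vm.swappiness = 1' >> /etc/sysctl.conf   # persist across reboots
```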

Re: Logstash Collector stops working

Posted: Wed May 11, 2016 8:23 am
by dlukinski
eloyd wrote:If you have the ability to increase physical memory (I do not know if this is a virtual machine or a physical machine), that will help more than increasing swap. As a general rule, swapping to disk is 100 times slower than dealing with RAM, so you want to avoid it if possible.

By default, NLS allocates 50% of available system RAM to elasticsearch. You can verify this in /etc/sysconfig/elasticsearch. It MAY be worthwhile to increase the ES_HEAP_SIZE to something bigger than 50% of RAM. You can test this out by changing the last line in this code:

Code: Select all

# Heap Size (defaults to 256m min, 1g max)
# Nagios Log Server Default to 0.5 physical Memory
ES_HEAP_SIZE=$(expr $(free -m|awk '/^Mem:/{print $2}') / 2 )m
to

Code: Select all

ES_HEAP_SIZE=5120m
Which will allocate 5GB of RAM instead of ~4GB to elasticsearch's Java heap. This might work. You might be good to go as far as 6144m, which would be 6GB, but I wouldn't push it any farther than that on a system with 8GB of RAM. Increasing swap will likely not gain you anything on this system.

A good article on elasticsearch JVM tuning can be found at https://www.elastic.co/guide/en/elastic ... izing.html
Increased the heap as advised and rebooted the server.
- Works well; going to keep the RAM increase in mind.

You can now close this support thread.

Re: Logstash Collector stops working

Posted: Wed May 11, 2016 8:32 am
by eloyd
Glad I could help!