
New used memory from field [host] would be larger than confi

Posted: Thu Dec 10, 2015 4:58 am
by WillemDH
I'm again having issues with our NLS servers. After a few queries, there seem to be no logs at all visible in the GUI. The problem is solved after restarting the elasticsearch service. Nothing has changed in the month or so since the remote session with Jesse. I can see these warnings in the elasticsearch logs:

Code: Select all

[2015-12-10 10:05:27,060][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5126453748 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:27,061][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5134113853 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:27,063][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5133540127 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:30,119][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5123330728 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:30,837][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5128943404 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:31,029][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5125094392 [4.7gb] from field [host] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:32,941][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5123274699 [4.7gb] from field [@timestamp] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:32,942][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5123409713 [4.7gb] from field [@timestamp] would be larger than configured breaker: 5122582118 [4.7gb], breaking
[2015-12-10 10:05:32,989][WARN ][indices.breaker          ] [95f9ab14-da22-4144-bb0b-6bbc5662115c] [FIELDDATA] New used memory 5124203738 [4.7gb] from field [@timestamp] would be larger than configured breaker: 5122582118 [4.7gb], breaking

Code: Select all

free -m
             total       used       free     shared    buffers     cached
Mem:         16073      15297        776          0         51       5059
-/+ buffers/cache:      10186       5887
Swap:         1999          0       1999
Heap Size is set to ES_HEAP_SIZE=8g in /etc/sysconfig/elasticsearch

I've read through https://www.elastic.co/guide/en/elastic ... usage.html

But I can't find indices.fielddata.cache.size anywhere in /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml
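For reference, if that setting were added to elasticsearch.yml it would look something like the line below (the 40% value is purely illustrative, not something currently configured):

Code: Select all

```
# Cap the fielddata cache at a percentage of the heap (example value only)
indices.fielddata.cache.size: 40%
```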

Please advise on how to prevent the web GUI from no longer showing data, which forces us to restart elasticsearch.

Grtz

Willem

Re: New used memory from field [host] would be larger than c

Posted: Thu Dec 10, 2015 1:53 pm
by jolson
I highly recommend simply increasing the amount of memory in Nagios Log Server. This avoids the complication of having to adjust fielddata values - they are set to very reasonable defaults, and I fear that changing them could cause odd behavior/instability in the future. I recommend adding more memory if possible, or upping your ES_HEAP_SIZE to 12-14GB if that's not possible.

The end goal is to increase the size of your ES_HEAP so that your fielddata has more 'space to roam'. Whether you accomplish this by adding additional memory or upping the HEAP_SIZE is up to you. Thanks!
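Assuming the same sysconfig file mentioned earlier in the thread, bumping the heap is a one-line change, followed by a restart of the elasticsearch service:

Code: Select all

```
# /etc/sysconfig/elasticsearch  (example value - pick what fits your box)
ES_HEAP_SIZE=12g
```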

Re: New used memory from field [host] would be larger than c

Posted: Thu Dec 10, 2015 2:26 pm
by WillemDH
Again? Hmmm, I just expanded it from 8 to 16 GB a month ago, remember. :o We haven't added any more data sources since... I fear adding even more will not be possible for now. I'll try setting ES_HEAP_SIZE to 12 GB and see what it does. I'll update this post next week or after New Year! Thanks.

Re: New used memory from field [host] would be larger than c

Posted: Thu Dec 10, 2015 2:29 pm
by jolson
No problem! How many indices do you currently have open? That could potentially impact the cluster status.

Code: Select all

curl -s 'localhost:9200/_cluster/health?level=indices&pretty' | grep logstash | wc -l
Let me know the result of the above - if you have many indices open, it could absolutely impact the functionality of your system. Thanks Willem!

Re: New used memory from field [host] would be larger than c

Posted: Thu Dec 10, 2015 2:45 pm
by WillemDH
Seems I have 31:

Code: Select all

 curl -s 'localhost:9200/_cluster/health?level=indices&pretty' | grep logstash | wc -l
31
I noticed that I'm using check_mem with -nocache, as is suggested for Linux servers. I guess this doesn't apply to ELK nodes..

Code: Select all

COMMAND: /usr/local/nagios/libexec/check_nrpe -H srvnaglog01.gentgrp.gent.be -t 120 -c check_mem -a '-w 1 -c 0 -nocache'
OUTPUT: OK - 5438 / 16073 MB (33%) Free Memory, Used: 15911 MB, Shared: 0 MB, Buffers: 76 MB, Cached: 5276 MB | total=16073MB free=5438MB used=15911MB shared=0 buffers=76MB cached=5276MB
The result, of course, is that the node appears to have quite a bit of free memory, while by your conclusion it doesn't have enough. As I have plenty of Nagios services checking our ELK nodes, I'd love to see one go critical when I'm experiencing the above issue. I'm not sure yet what I need to tune or monitor to achieve that, though.

Grtz

Re: New used memory from field [host] would be larger than c

Posted: Thu Dec 10, 2015 3:58 pm
by jolson
Even though you do indeed have some free memory (your cache + actual free memory), Elasticsearch has a maximum heap of 8GB, which is the restriction we're dealing with here. It might be worthwhile to check on the Java heap size using a plugin. A simple check can be run as follows:

Download the following plugin: https://exchange.nagios.org/directory/P ... at/details
Check on java stats like so:

Code: Select all

./javacheck.sh -j Elasticsearch
OK: jstat process Elasticsearch alive|pid=3004 heap=240277;1935360;12;-1;-1 perm=43997;169984;25;-1;-1
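As a rough sketch of turning breaker numbers into a Nagios-style status (the breaker_* helper names here are invented for illustration; in a real check the two byte counts would be parsed from the node stats API, e.g. curl -s 'localhost:9200/_nodes/stats/breaker', rather than passed in by hand):

Code: Select all

```shell
# Sketch of a Nagios-style fielddata breaker check (helper names invented).

# breaker_pct USED_BYTES LIMIT_BYTES -> integer percent of the limit in use
breaker_pct() {
    echo $(( $1 * 100 / $2 ))
}

# breaker_status USED_BYTES LIMIT_BYTES -> Nagios-style status line
breaker_status() {
    pct=$(breaker_pct "$1" "$2")
    if [ "$pct" -ge 90 ]; then
        echo "CRITICAL: fielddata breaker at ${pct}% of limit"
    elif [ "$pct" -ge 80 ]; then
        echo "WARNING: fielddata breaker at ${pct}% of limit"
    else
        echo "OK: fielddata breaker at ${pct}% of limit"
    fi
}

# Example with the numbers from the log excerpt above:
breaker_status 5126453748 5122582118
```

With thresholds like these, the check would have gone critical well before the breaker started tripping.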

Re: New used memory from field [host] would be larger than c

Posted: Mon Dec 14, 2015 3:14 am
by WillemDH
Jesse,

On a NLS node:

Code: Select all

/usr/local/nagios/libexec/check_jstat.sh -j Elasticsearch
/usr/local/nagios/libexec/check_jstat.sh: line 99: jps: command not found
cat: /proc//status: No such file or directory
CRITICAL: process pid[] seems not to be a JAVA application
Grtz

Re: New used memory from field [host] would be larger than c

Posted: Mon Dec 14, 2015 9:53 am
by jolson
Try installing openjdk, it should provide you with jps:

Code: Select all

yum install java-1.7.0-openjdk-devel

Re: New used memory from field [host] would be larger than c

Posted: Fri Jan 08, 2016 6:01 am
by WillemDH
Jesse,

Thanks, installing java-1.7.0-openjdk-devel did the trick. I'll report in a few weeks how things go. In the meantime I installed NLS 1.4 on both nodes. During my holidays everything worked smoothly, but load was generally lower with many people on holiday... To be continued!

Willem

Re: New used memory from field [host] would be larger than c

Posted: Fri Jan 08, 2016 10:16 am
by jolson
I'm happy to hear it. I hope your holidays were happy. :)