
Re: Need some help on accessing log data

Posted: Tue Dec 08, 2015 4:11 pm
by gregwhite
The server has 32 GB of physical memory. What should we increase the heap size to?
The reason they are open back to July is that they want to be able to search six months back. So far, most queries only look back three months. What would best practice be? Close them after three months to improve performance? Would that then mean we have to go in and open each index they want to search? That would be opening 90+ indices. Would adding another node improve performance?
The other issue I have is that when I set the date range back to July, it only searched back to 10/30, and it only displayed data from the current date and time.

Appreciate your patience.

Re: Need some help on accessing log data

Posted: Tue Dec 08, 2015 4:56 pm
by jolson
If your instance has 32GB of usable memory, ES_HEAP_SIZE should be _at least_ 16GB. To have this calculated automatically, edit the Elasticsearch sysconfig file like so:
vi /etc/sysconfig/elasticsearch
Change:


ES_HEAP_SIZE=1024m
To:


ES_HEAP_SIZE=$(expr $(free -m|awk '/^Mem:/{print $2}') / 2 )m
Restart Elasticsearch:


service elasticsearch restart
After the restart, Elasticsearch should be able to cache roughly 16x more data than before (16GB of heap vs. the 1GB default). Let me know if this helps with performance.
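If you want to sanity-check what that expr line will produce before restarting, you can run the same arithmetic by hand. A quick sketch, assuming `free -m` reports a total of about 32000 MB on a 32 GB host (the exact number varies by machine):

```shell
# Simulate the half-of-RAM heap computation from the sysconfig line above.
# total_mb stands in for: free -m | awk '/^Mem:/{print $2}'
total_mb=32000
heap_size="$(expr $total_mb / 2)m"
echo "ES_HEAP_SIZE=$heap_size"
```

On a real box, just substitute the `free -m | awk ...` pipeline for `total_mb` and confirm the result is about half your physical memory before committing the change.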

As far as LS_HEAP_SIZE in the /etc/sysconfig/logstash file, we recommend 1024m as a maximum for now.
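For reference, that means the line in /etc/sysconfig/logstash should look like this (a config fragment, assuming the default file layout):

```
LS_HEAP_SIZE=1024m
```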
Would adding another node improve performance?
It should roughly double the speed of data queries, since you can read from both instances at the same time.
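On your earlier question about closing indices after three months: closed indices stop consuming heap but stay on disk, and you don't have to reopen 90+ of them one by one — wildcards let you close or reopen a whole month in a single call. A sketch, assuming the stock daily logstash-YYYY.MM.DD index naming and Elasticsearch listening on localhost:9200 (run against a live cluster, so treat it as an outline rather than a tested recipe):

```shell
# Close all July 2015 daily indices in one request (frees heap; data stays on disk):
curl -XPOST "localhost:9200/logstash-2015.07.*/_close"

# Reopen them later, on demand, when a search needs to reach that far back:
curl -XPOST "localhost:9200/logstash-2015.07.*/_open"
```

That keeps the common three-month searches fast while still letting you open older months when someone actually needs them.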

Otherwise, I'm not sure exactly what is wrong. The fastest way to get this figured out is likely through a remote session - feel free to email [email protected] and link back to this thread - I'd be happy to take your ticket and present you with a remote session. Thanks!