The Nagios Log Server deployment is a four-server cluster with ~1900 Linux servers pointing to it. When this problem occurs, log searches stop working. The four servers have 2 CPUs and 8 GB of memory each, and they do not appear to be bottlenecked by these resources. They are all VMs (VMware). Restarting the elasticsearch daemon clears this up, but the problem reoccurs anywhere from half an hour to a day later.
Is there a way to prevent this from happening?
Nagios Log Server displaying "No Data to Display"
Re: Nagios Log Server displaying "No Data to Display"
You are likely running out of memory; that's the usual culprit.
Please PM me a copy of your profile. You can download it from Admin > System Status by clicking the Download System Profile button.
Attach these files as well (you will only have two of them):
Code: Select all
/etc/sysconfig/logstash
/etc/sysconfig/elasticsearch
/etc/default/logstash
/etc/default/elasticsearch
Re: Nagios Log Server displaying "No Data to Display"
I've attached the information.
Re: Nagios Log Server displaying "No Data to Display"
As an FYI, I added an additional 4 GB to all log servers, bumping the memory up from 8 GB to 12 GB. This looks like it may have resolved my issue.
Re: Nagios Log Server displaying "No Data to Display"
I do not see anything attached.
Adding more memory and/or adjusting the HEAP size is usually what helps. Thank you for posting the update.
If you continue to have issues, please send those files. Otherwise, just let us know when we're okay to lock this up and mark it as resolved.
Thank you!
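For anyone finding this thread later: on the Elasticsearch versions bundled with Nagios Log Server, the heap is set via ES_HEAP_SIZE in the sysconfig/default files listed above. A common rule of thumb is to give the heap about half of physical RAM (and never more than ~32 GB, to keep compressed object pointers). A minimal sketch, assuming a 12 GB server like the one described above:

```shell
# In /etc/sysconfig/elasticsearch (RHEL/CentOS)
# or /etc/default/elasticsearch (Debian/Ubuntu):
# Rule of thumb: heap = roughly half of physical RAM.
ES_HEAP_SIZE=6g

# Then restart the daemon so the new heap takes effect:
# sudo service elasticsearch restart
```

Leaving the other half of RAM to the OS page cache matters, since Elasticsearch relies on it heavily for search performance.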