
Best approach to monitoring large and many log files

Posted: Thu May 19, 2016 4:10 pm
by dlukinski
Hello LOG Support

We are trying to implement LOG shipping for a group of four Linux servers, each with 50-200+ log files located in a single directory.
The logs are large (1-4 GB total per server) and update frequently.

What would be the best way to check and ship them (we also have XI) without overwhelming the LOG server?

Thank you,
Dimitri

Re: Best approach to monitoring large and many log files

Posted: Thu May 19, 2016 4:20 pm
by eloyd
That depends on how many resources your NLS server has available in terms of memory, CPU, disk, and network bandwidth. What is the hardware configuration of your Log server?

Re: Best approach to monitoring large and many log files

Posted: Thu May 19, 2016 4:20 pm
by hsmith
Throw resources at it, add additional instances, and implement round-robin DNS.

What are the specs of your current LogServer?

Edit: Eric!

Re: Best approach to monitoring large and many log files

Posted: Mon May 30, 2016 11:48 am
by dlukinski
hsmith wrote:Throw resources at it, add additional instances, and implement round-robin DNS.

What are the specs of your current LogServer?

Edit: Eric!
4 CPU / 8 GB RAM / 500 GB disk (virtual appliance). Do you think we should increase RAM to 16 GB?

Re: Best approach to monitoring large and many log files

Posted: Mon May 30, 2016 8:35 pm
by rkennedy
NLS is pretty RAM heavy; I would aim for 16-32 GB depending on how much you're looking at taking in per day. It will cache things to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. The disk space depends on how much log data you're planning to store locally vs. back up to an external source.

Do you have an estimate of how much data this server would be taking in daily?

Re: Best approach to monitoring large and many log files

Posted: Tue May 31, 2016 2:16 pm
by dlukinski
rkennedy wrote:NLS is pretty RAM heavy; I would aim for 16-32 GB depending on how much you're looking at taking in per day. It will cache things to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. The disk space depends on how much log data you're planning to store locally vs. back up to an external source.

Do you have an estimate of how much data this server would be taking in daily?
Current intake is 200-500 MB a day, with spikes of up to 3 GB daily when a certain group of servers gets in trouble.
With 5 GB of RAM currently allocated manually, and many more servers planned for log shipping, I guess we should increase to 16 GB total and allocate 12 GB manually?
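A quick back-of-the-envelope disk estimate from those intake figures; the overhead factor and retention window below are illustrative assumptions, not values from this thread:

```shell
# Rough disk sizing from the daily intake numbers above.
awk 'BEGIN {
  avg_gb = 0.5; spike_gb = 3.0   # daily intake: typical and spike, in GB
  overhead = 1.5                 # hypothetical index/replica overhead factor
  days = 90                      # hypothetical retention window
  printf "typical: %.1f GB, worst case: %.1f GB\n", avg_gb * overhead * days, spike_gb * overhead * days
}'
# prints: typical: 67.5 GB, worst case: 405.0 GB
```

Under these assumed numbers, even the worst case fits within the 500 GB appliance disk, so RAM is the more pressing constraint.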

Re: Best approach to monitoring large and many log files

Posted: Tue May 31, 2016 2:34 pm
by rkennedy
Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so make sure you don't have too many of them open, as they will just sit in RAM.
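To see which indexes are currently open (and therefore resident in RAM), the underlying Elasticsearch cat API can help. A sketch assuming the default localhost:9200 endpoint; the index name in the second command is a hypothetical example:

```shell
# List each index and whether it is open or closed
curl -s 'http://localhost:9200/_cat/indices?v&h=index,status'

# Close an old index so it no longer sits in heap (hypothetical index name)
curl -s -XPOST 'http://localhost:9200/logstash-2016.04.01/_close'
```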

Re: Best approach to monitoring large and many log files

Posted: Wed Jun 01, 2016 8:12 am
by dlukinski
rkennedy wrote:Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so make sure you don't have too many of them open, as they will just sit in RAM.
Thank you for the hint (I did not realize that part of the equation).
We will now review our open-index policy too.

This thread is OK to close, as I have my answers.

Re: Best approach to monitoring large and many log files

Posted: Wed Jun 01, 2016 8:40 am
by eloyd
Also, before you close it, I tell everyone to read https://www.elastic.co/guide/en/elastic ... izing.html

Know this before reading: By default, Nagios Log Server allocates 50% of RAM to the Elasticsearch heap and does not specifically allocate Logstash memory (which defaults to 500 MB). You may need to adjust these values using the LS_HEAP_SIZE and ES_HEAP_SIZE parameters in /etc/sysconfig/logstash and /etc/sysconfig/elasticsearch.
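As a concrete sketch of that adjustment (the 8g/1g figures below are illustrative values for a 16 GB appliance, not recommendations from this post):

```shell
# /etc/sysconfig/elasticsearch
# Elasticsearch heap; ~50% of system RAM is the common guidance,
# so on a 16 GB appliance that would be roughly:
ES_HEAP_SIZE=8g

# /etc/sysconfig/logstash
# Logstash heap; the 500 MB default may be low with many shippers:
LS_HEAP_SIZE=1g
```

Restart the elasticsearch and logstash services after changing these files so the new heap sizes take effect.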

Re: Best approach to monitoring large and many log files

Posted: Wed Jun 01, 2016 12:01 pm
by mcapra
Useful information! Closing this thread.