Hello Log Server Support,
We are trying to implement log shipping for a group of four Linux servers, each with 50-200+ log files located in a single directory.
The logs are large (1-4 GB total per server) and update frequently.
What would be the best way to monitor and ship them (we also have XI) without overwhelming the Log Server?
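One common way to ship a whole directory of files from a Linux host is rsyslog's imfile module. The sketch below is an assumption about your setup, not an official Nagios Log Server config: the directory path, tag, server name, and port 5544 (a common Log Server TCP input) are all placeholders, and wildcard support in imfile requires a reasonably recent rsyslog (8.25+).

```conf
# Hedged sketch: ship every *.log file in one directory via rsyslog imfile.
# All names below are placeholders -- substitute your own.
module(load="imfile")

input(type="imfile"
      File="/var/log/myapp/*.log"   # the single directory of 50-200+ files
      Tag="myapp"
      Severity="info"
      Facility="local6")

# Forward over TCP (double @@); a single @ would send UDP instead.
local6.* @@logserver.example.com:5544
```

Because imfile remembers its position in each file, this also avoids re-sending whole files when they grow, which helps keep the load on the Log Server down.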
Thank you,
Dimitri
Best approach to monitoring large and many log files
Re: Best approach to monitoring large and many log files
That depends on the resources your NLS server has available in terms of memory, CPU, disk, and network bandwidth. What is the hardware configuration of your Log Server?
Last edited by eloyd on Thu May 19, 2016 4:26 pm, edited 1 time in total.
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
Re: Best approach to monitoring large and many log files
Throw resources at it, add additional instances, implement round-robin DNS.
What are the specs of your current LogServer?
Edit: Eric!
Former Nagios Employee.
Re: Best approach to monitoring large and many log files
hsmith wrote:
Throw resources at it, add additional instances, implement round-robin DNS.
What are the specs of your current LogServer?

4 CPU / 8 GB RAM / 500 GB disk (virtual appliance). Should we increase RAM to 16 GB?
Re: Best approach to monitoring large and many log files
NLS is pretty RAM-heavy; I would aim for 16-32 GB depending on the amount you're looking at taking in per day. It will cache things to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. The disk space depends on how much log data you plan to store locally versus backing up to an external source.
Do you have an estimate of how much data this server would be taking in daily?
Former Nagios Employee
Re: Best approach to monitoring large and many log files
rkennedy wrote:
NLS is pretty RAM-heavy; I would aim for 16-32 GB depending on the amount you're looking at taking in per day. It will cache things to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. The disk space depends on how much log data you plan to store locally versus backing up to an external source.
Do you have an estimate of how much data this server would be taking in daily?

Current intake is 200-500 MB a day, with spikes of up to 3 GB when a certain group of servers gets into trouble.
With 5 GB RAM allocated manually and many more servers planned for log shipping, I guess we should increase to 16 GB total and allocate 12 GB manually?
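As a back-of-the-envelope check on those numbers, the usual guidance is to give elasticsearch roughly half of system RAM and leave the rest to the OS file cache. The sketch below applies that rule of thumb to a 16 GB box; it is a generic sizing heuristic, not an official Nagios formula.

```shell
# Hedged sizing sketch (assumption: the common "~50% of RAM to the
# elasticsearch heap" rule of thumb, leaving the rest for the OS cache).
TOTAL_RAM_MB=16384                  # a 16 GB box
ES_HEAP_MB=$((TOTAL_RAM_MB / 2))    # ~50% for the elasticsearch heap
echo "ES heap: ${ES_HEAP_MB} MB"    # -> ES heap: 8192 MB
```

By that heuristic, allocating all 12 GB of a 16 GB box manually would leave too little for the OS file cache, which elasticsearch relies on heavily for read performance.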
Re: Best approach to monitoring large and many log files
Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so you'll want to make sure you don't have too many of them still open, as they will just sit in RAM.
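A closed index stays on disk but drops out of RAM, so one hedged approach is to close indexes older than your retention window using elasticsearch's close API. The sketch below assumes the default logstash-YYYY.MM.DD index naming and only prints the curl commands rather than executing them; the sample index names and the 30-day window are illustrative.

```shell
# Hedged sketch: print the "close index" calls for indexes older than a
# retention window. Assumes logstash-YYYY.MM.DD naming (lexical comparison
# of the date suffix works with that format).
RETENTION_DAYS=30
cutoff=$(date -d "${RETENTION_DAYS} days ago" +%Y.%m.%d)

# In a live cluster you would pull the real list with something like:
#   curl -s 'http://localhost:9200/_cat/indices?h=index'
# A fixed sample list keeps this sketch self-contained.
to_close=""
for idx in logstash-2016.05.01 logstash-2016.05.18; do
  day=${idx#logstash-}
  if [ "$day" \< "$cutoff" ]; then
    to_close="$to_close $idx"
  fi
done

for idx in $to_close; do
  echo "curl -XPOST http://localhost:9200/${idx}/_close"
done
```

Closed indexes can be reopened later with the matching `_open` call if you need to search them again.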
Former Nagios Employee
Re: Best approach to monitoring large and many log files
rkennedy wrote:
Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so you'll want to make sure you don't have too many of them still open, as they will just sit in RAM.

Thank you for the hint (I did not realize this part of the equation).
Will now review our open-index policy too.
This thread is OK to close, as I have my answers.
Re: Best approach to monitoring large and many log files
Also, before you close it, I tell everyone to read https://www.elastic.co/guide/en/elastic ... izing.html
Know this before reading: By default, Nagios Log Server allocates 50% of RAM to elasticsearch heap and does not specifically allocate logstash memory (which defaults to 500MB). You may need to adjust these values by using the LS_HEAP_SIZE and ES_HEAP_SIZE parameters in /etc/sysconfig/logstash and /etc/sysconfig/elasticsearch.
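For reference, those two settings look roughly like this in the stock sysconfig files. The values are illustrative for a 16 GB host, not recommendations from this thread:

```shell
# /etc/sysconfig/elasticsearch -- illustrative value for a 16 GB host
ES_HEAP_SIZE="8g"    # ~50% of RAM, matching the default described above

# /etc/sysconfig/logstash -- raised from the 500 MB default
LS_HEAP_SIZE="1g"
```

After changing either file, restart the corresponding service (e.g. `service elasticsearch restart`) for the new heap size to take effect.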
Re: Best approach to monitoring large and many log files
Useful information! Closing this
Former Nagios employee
https://www.mcapra.com/