Best approach to monitoring large and many log files

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
Locked
dlukinski
Posts: 1130
Joined: Tue Oct 06, 2015 9:42 am

Best approach to monitoring large and many log files

Post by dlukinski »

Hello Log Server Support,

We are trying to implement log shipping for a group of four Linux servers, each with 50-200+ log files located in a single directory.
The logs are large (1-4 GB total per server) and update frequently.

What would be the best way to check and ship them (we also have Nagios XI) without overwhelming the Log Server?

Thank you,
Dimitri
eloyd
Cool Title Here
Posts: 2190
Joined: Thu Sep 27, 2012 9:14 am
Location: Rochester, NY

Re: Best approach to monitoring large and many log files

Post by eloyd »

That depends on how many resources your NLS server has in terms of memory, CPU, disk, and network bandwidth. What is your Log Server's hardware configuration?
Last edited by eloyd on Thu May 19, 2016 4:26 pm, edited 1 time in total.
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1

Re: Best approach to monitoring large and many log files

Post by hsmith »

Throw resources at it, add additional instances, and implement round-robin DNS.

What are the specs of your current Log Server?

Edit: Eric!
Former Nagios Employee.
dlukinski
Posts: 1130
Joined: Tue Oct 06, 2015 9:42 am

Re: Best approach to monitoring large and many log files

Post by dlukinski »

hsmith wrote:Throw resources at it, add additional instances, and implement round-robin DNS.

What are the specs of your current Log Server?

Edit: Eric!
4 CPU / 8 GB RAM / 500 GB disk (virtual appliance). Do you think we should increase RAM to 16 GB?
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Best approach to monitoring large and many log files

Post by rkennedy »

NLS is pretty RAM-heavy; I would aim for 16-32 GB depending on the amount you're looking at taking in per day. It caches heavily to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. Disk space depends on how many logs you plan to store locally versus back up to an external source.

Do you have an estimate of how much data this server would be taking in daily?
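As a back-of-envelope check on the sizing advice above, the usual Elasticsearch rule of thumb (give the heap about half of system RAM, but keep it at or below ~31 GB so compressed object pointers stay enabled) can be sketched as a few lines. The helper name and printed figures below are illustrative assumptions, not NLS internals:

```python
# Rough heap-sizing sketch for an Elasticsearch-backed log server.
# Assumption (common ES guidance, not official NLS documentation):
# heap = half of system RAM, capped at 31 GB.

def recommended_heap_gb(total_ram_gb: float) -> float:
    """Suggest an ES heap size: half of RAM, capped at 31 GB."""
    return min(total_ram_gb / 2, 31.0)

for ram in (8, 16, 32):
    print(f"{ram} GB RAM -> ~{recommended_heap_gb(ram):g} GB heap")
```

By this rule, an 8 GB box leaves only ~4 GB of heap for Elasticsearch, which is consistent with the advice to move to 16 GB or more.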
Former Nagios Employee
dlukinski
Posts: 1130
Joined: Tue Oct 06, 2015 9:42 am

Re: Best approach to monitoring large and many log files

Post by dlukinski »

rkennedy wrote:NLS is pretty RAM-heavy; I would aim for 16-32 GB depending on the amount you're looking at taking in per day. It caches heavily to RAM, and that's where quite a bit of it will go. 4 CPUs should be fine. Disk space depends on how many logs you plan to store locally versus back up to an external source.

Do you have an estimate of how much data this server would be taking in daily?
Current intake is 200-500 MB a day, with spikes of up to 3 GB daily when a certain group of servers gets into trouble.
With 5 GB of RAM currently allocated manually, and many more servers planned for log shipping, I guess we should increase to 16 GB total and allocate 12 GB manually?
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Best approach to monitoring large and many log files

Post by rkennedy »

Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so make sure you don't keep too many of them open; they will just sit in RAM.
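To illustrate the open-index concern, here is a small sketch that decides which indexes fall outside a retention window and are candidates for closing. It assumes the default daily logstash-YYYY.MM.DD index naming; the seven-day retention figure is just an example:

```python
from datetime import date, timedelta

def indexes_to_close(index_names, today, retention_days):
    """Return 'logstash-YYYY.MM.DD' indexes older than the retention window."""
    cutoff = today - timedelta(days=retention_days)
    stale = []
    for name in index_names:
        try:
            day = date(*map(int, name.removeprefix("logstash-").split(".")))
        except ValueError:
            continue  # skip names that don't match the daily pattern
        if day < cutoff:
            stale.append(name)
    return stale

names = ["logstash-2016.05.01", "logstash-2016.05.18", "kibana-int"]
print(indexes_to_close(names, date(2016, 5, 19), retention_days=7))
# -> ['logstash-2016.05.01']
```

In Log Server itself this kind of policy is handled by the index maintenance settings; the sketch only shows the date arithmetic behind "too many open indexes".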
Former Nagios Employee
dlukinski
Posts: 1130
Joined: Tue Oct 06, 2015 9:42 am

Re: Best approach to monitoring large and many log files

Post by dlukinski »

rkennedy wrote:Yeah, I would go for at least 16 GB. How long are you keeping your indexes open? All open indexes are still cached to RAM, so make sure you don't keep too many of them open; they will just sit in RAM.
Thank you for the hint (I did not realize that part of the equation).
We will now review our open-index policy too.

This thread is OK to close, as I have my answers.
eloyd
Cool Title Here
Posts: 2190
Joined: Thu Sep 27, 2012 9:14 am
Location: Rochester, NY

Re: Best approach to monitoring large and many log files

Post by eloyd »

Also, before you close it: I tell everyone to read https://www.elastic.co/guide/en/elastic ... izing.html

Know this before reading: by default, Nagios Log Server allocates 50% of RAM to the Elasticsearch heap and does not specifically allocate Logstash memory (which defaults to 500 MB). You may need to adjust these values using the LS_HEAP_SIZE and ES_HEAP_SIZE parameters in /etc/sysconfig/logstash and /etc/sysconfig/elasticsearch respectively.
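For reference, the overrides described above might look like this. The parameter names and file paths come from the post; the 6g and 1g values are arbitrary examples you would tune to your own box:

```shell
# In /etc/sysconfig/elasticsearch -- raise the Elasticsearch heap (example value)
ES_HEAP_SIZE=6g

# In /etc/sysconfig/logstash -- raise the Logstash heap (example value)
LS_HEAP_SIZE=1g
```

After changing either file, the corresponding service has to be restarted for the new heap size to take effect.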
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
mcapra
Posts: 3739
Joined: Thu May 05, 2016 3:54 pm

Re: Best approach to monitoring large and many log files

Post by mcapra »

Useful information! Closing this thread.
Former Nagios employee
https://www.mcapra.com/