LogServer's memory is exhausted

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
eloyd
Cool Title Here
Posts: 2190
Joined: Thu Sep 27, 2012 9:14 am
Location: Rochester, NY

Re: LogServer's memory is exhausted

Post by eloyd »

Maybe not. Remember - the quest is not for lots of free memory, it's for lots of memory being used efficiently. The JVMs may have a lot of memory (or CPU) allocated because they're doing useful work. It's like buying a 3000 square foot house then only using 1000 square feet and saving the other 2000 square feet "in case you need it." Either buy a 1000 square foot house and use it all or buy a 3000 square foot house and use it all. But in both cases, use it efficiently.
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1

Re: LogServer's memory is exhausted

Post by hsmith »

Let us know what happens.
Former Nagios Employee.
me.
bennspectrum
Posts: 30
Joined: Wed May 11, 2016 4:24 am

Re: LogServer's memory is exhausted

Post by bennspectrum »

Hello all,

Following this thread, I want to ask something about the settings on the Maintain & Backup page.

If I want to keep the indexes for 60 days or more, is there any other configuration I should set up?

If I have a machine with an 8-core CPU and 64 GB of memory, what load can NLS handle? How much data volume per day can it take? 20 GB or more?

I mean the limits NLS can tolerate. Are there any suggestions or reference data?

Thanks.
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1

Re: LogServer's memory is exhausted

Post by hsmith »

bennspectrum wrote: If I want to keep the indexes for 60 days or more, is there any other configuration I should set up?
Open, or on the server?
bennspectrum wrote: If I have a machine with an 8-core CPU and 64 GB of memory, what load can NLS handle? How much data volume per day can it take? 20 GB or more?
I've seen 40+ GB per day work on hardware like that. YMMV depending on your setup.
bennspectrum wrote: I mean the limits NLS can tolerate. Are there any suggestions or reference data?
I unfortunately don't have any best practices for configuration of how long to keep things open. I'd be happy to help with specific questions though.
Former Nagios Employee.
me.
eloyd
Cool Title Here
Posts: 2190
Joined: Thu Sep 27, 2012 9:14 am
Location: Rochester, NY

Re: LogServer's memory is exhausted

Post by eloyd »

Honestly, there are no best practices for how long to keep things open, since it all depends on what you need to do with your data. However, if you come to the Nagios 2016 World Conference, you can watch one of our consultants do a presentation on that very topic!

Details at https://conference.nagios.com/speakers/#Sean-Falzon
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1

Re: LogServer's memory is exhausted

Post by hsmith »

I second the NWC 2016 plug; it's a great time!
Former Nagios Employee.
me.
bennspectrum
Posts: 30
Joined: Wed May 11, 2016 4:24 am

Re: LogServer's memory is exhausted

Post by bennspectrum »

Thanks @hsmith and @eloyd!
Open, or on the server?
I want them open. I hope the data can be kept for 60, 70, or even more days, so I can query it conveniently.
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: LogServer's memory is exhausted

Post by rkennedy »

There isn't a great way to estimate how much memory NLS will need, but keeping all 60 days' worth of logs open is going to be rather difficult.

At 20 GB/day of logs, your cache will easily be overloaded after 1-2 weeks. For something like 60-70 days you would need a cluster to handle the load, and an immense amount of RAM.

I recommend opening the indexes day by day as you need them, and drilling down that way. This will let you conserve memory and avoid having to build a huge cluster to handle it all.
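To make the per-day approach concrete, here is a minimal Python sketch of picking which daily indexes fall outside an "open" window. It assumes the daily `logstash-YYYY.MM.DD` index naming that Elasticsearch-backed setups commonly use; the function name, the seven-day window, and the sample index names are all illustrative, not something from NLS itself.

```python
from datetime import date, timedelta

def indices_to_close(index_names, today, keep_open_days=7):
    """Return daily indices older than the open window.

    Assumes 'logstash-YYYY.MM.DD' naming; adjust the prefix and
    date format to match your own cluster.
    """
    cutoff = today - timedelta(days=keep_open_days)
    stale = []
    for name in index_names:
        try:
            # 'logstash-2016.05.01' -> date(2016, 5, 1)
            day = date(*map(int, name.split("-", 1)[1].split(".")))
        except (IndexError, ValueError):
            continue  # skip names that don't match the pattern
        if day < cutoff:
            stale.append(name)
    return stale

# Keep one week open, flag everything older for closing.
names = ["logstash-2016.05.01", "logstash-2016.05.10", "logstash-2016.05.11"]
print(indices_to_close(names, today=date(2016, 5, 11)))
```

The names this returns would then be the ones you close (and reopen on demand when you need to drill into older data).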
Former Nagios Employee
eloyd
Cool Title Here
Posts: 2190
Joined: Thu Sep 27, 2012 9:14 am
Location: Rochester, NY

Re: LogServer's memory is exhausted

Post by eloyd »

@rk is right: 60 days at dozens of gigs per day is going to be a BIG set of indexes, even if it's distributed. I always encourage our clients to examine what the goal is. Do you really need 60 days' worth of search capability? That's over a terabyte of information you'll be searching at 20 GB/day for 60 days. That's a LOT of data. Instead, ask: can I just search recent data for trends, alert on those trends, and, if I need to, open up past data to get more information?

In the end, however, the answer to your question is, "try it." It may work for you, it may not.
Eric Loyd • http://everwatch.global • 844.240.EVER • @EricLoyd
I'm a Nagios Fanatic! • Join our public Nagios Discord Server!
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: LogServer's memory is exhausted

Post by rkennedy »

Thanks @eloyd!

@bennspectrum - let us know if you have any further questions.
Former Nagios Employee