
Small Cluster Design

PostPosted: Tue Aug 16, 2022 5:06 am
by JohnSonandrla
We started off with Nagios Log Server (it's Elasticsearch underneath). However, I think its front end is a little limited, and it's been choking on the data we feed it (OOM errors). It currently ingests ~15 GB a day, and we have other logs that aren't being sent yet. It's installed on a single VM with 4 CPUs and 8 GB of RAM.

To handle more logs and avoid licensing costs, I thought I'd design a multi-node ELK cluster. However, I'm stuck on how to set this up. A lot of what I read assumes many huge machines; even a "small" 3-node cluster is 16 CPUs and 32 GB RAM per node. Yeah, we don't have the resources for that. I have a rough "budget" of 8 CPUs and 16 GB of RAM total to work with.

With such limited hardware available, is it even worth trying a multi-node cluster? Should I just install everything (E, L, & K) on one big VM? (Or should I just bulk up the Nagios Log Server product and call it a day?)

My rough plan for a small cluster was:

3x data nodes running Elasticsearch (all masters), 2 CPUs, 4 GB RAM each

1x Kibana node, 2 CPUs, 4 GB RAM

1x dedicated Logstash, 2 CPUs, 4 GB RAM
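For the three data nodes, I was imagining an elasticsearch.yml along these lines. This is just a sketch I pieced together from the docs, and I haven't tested it; the cluster name and the es-node-1/2/3 hostnames are placeholders for whatever we'd actually use:

```yaml
# elasticsearch.yml for the first data node; the other two are identical
# except for node.name. Hostnames es-node-1..3 are placeholders.
cluster.name: small-elk
node.name: es-node-1
node.roles: [ master, data ]        # all three nodes are master-eligible data nodes
network.host: 0.0.0.0
discovery.seed_hosts: ["es-node-1", "es-node-2", "es-node-3"]
cluster.initial_master_nodes: ["es-node-1", "es-node-2", "es-node-3"]
```

With only 4 GB RAM per node, I'd also cap the JVM heap at about half of RAM (e.g. -Xms2g -Xmx2g in jvm.options) so the OS keeps the rest for the filesystem cache, since that's what the Elasticsearch docs recommend.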

I don't know if this is even a good idea. Does having multiple machines outweigh having so little RAM? I'm thinking no.

And then there's the whole issue of picking how many shards and replicas to use... (I was thinking 3 shards, 1 replica)
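Concretely, I assumed I'd set that via an index template, something like the following in Kibana Dev Tools (the template name and logs-* pattern are just examples, not what we'd necessarily use):

```
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 3,
      "number_of_replicas": 1
    }
  }
}
```

If I understand it right, 1 replica means every shard exists on two nodes, so disk usage is roughly double the raw index size, but the cluster can survive losing one data node.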

I'm completely overloaded with info, and I think I'm in over my head. Elasticsearch is a huge topic, and everything seems to depend on your specific data. Any guidance is super appreciated.

Re: Small Cluster Design

PostPosted: Tue Aug 16, 2022 2:33 pm
by gormank
Here's a doc with some info on sizing NLS. I can't say much about best practices for setting up an ELK system, other than that having a single Logstash seems risky. I've been told that a minimum redundant NLS system is three hosts. ... hrough.pdf ... raluse.php