Server slow performance and grok issue

tela
Posts: 6
Joined: Tue Mar 27, 2018 8:47 pm

Re: Server slow performance and grok issue

Post by tela »

scottwilkerson wrote:
tela wrote:What is the recommended configuration / setup / number of servers for the current amount of logs?
And do you know whether the grok issue is expected behaviour or not?
Thanks for your suggestion.
In order to give a recommendation, I would need to know what you expect the peak messages per day to be.

At a minimum, as I mentioned earlier:
scottwilkerson wrote:I would strongly suggest planning a proper cluster with several instances, both to share the load and to provide redundancy; SSDs will also help with writing and reading the volume of data you have.
I'll take the days with 21GB as an example, as that is what I expect during the periods when we send logs to Nagios Log Server. It says there are 33,120,821 documents.
Or is there any way to estimate?
tela
Posts: 6
Joined: Tue Mar 27, 2018 8:47 pm

Re: Server slow performance and grok issue

Post by tela »

mcapra wrote:
tela wrote:It seems regex positive lookbehind/lookahead (?<=etc) is supported in the grok debugger but not in Nagios Log Server; is that normal?
Nagios Log Server uses Logstash under the hood for its message parsing. The grok filter Logstash plugin uses Oniguruma for its regex library, which does indeed support lookaheads/behinds as described here:

Code:

  (?=subexp)         look-ahead
  (?!subexp)         negative look-ahead
  (?<=subexp)        look-behind
  (?<!subexp)        negative look-behind

                     Subexp of look-behind must be fixed-width.
                     But top-level alternatives can be of various lengths.
                     ex. (?<=a|bc) is OK. (?<=aaa(?:b|cd)) is not allowed.

                     In negative look-behind, capturing group isn't allowed,
                     but non-capturing group (?:) is allowed.
We'd need to see the exact grok rule you're applying as well as a sample log message to identify any sort of mismatch between the third-party grok debugger and what actually happens within the grok filter plugin.
When I tried a new filter today, it worked this time, just using (?<ActivityID>(?<=Activity ID\: )[^\s]*) to get a specific value from part of a Windows event log message.
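For reference, a minimal Logstash filter sketch showing where a pattern like that would sit; only the named-capture pattern itself comes from the post above, and the surrounding filter block is an assumed skeleton:

Code:

  filter {
    grok {
      # Assumed minimal wrapper; only the pattern below is from the post.
      # The Oniguruma look-behind (?<=Activity ID\: ) anchors the capture
      # to the text that follows "Activity ID: " in the message field, so
      # the ActivityID field gets the value without the label itself.
      match => { "message" => "(?<ActivityID>(?<=Activity ID\: )[^\s]*)" }
    }
  }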
I will share one when I have the error next time.
Thanks. :)
scottwilkerson
DevOps Engineer
Posts: 19396
Joined: Tue Nov 15, 2011 3:11 pm
Location: Nagios Enterprises
Contact:

Re: Server slow performance and grok issue

Post by scottwilkerson »

tela wrote:
scottwilkerson wrote:
tela wrote:What is the recommended configuration / setup / number of servers for the current amount of logs?
And do you know whether the grok issue is expected behaviour or not?
Thanks for your suggestion.
In order to give a recommendation, I would need to know what you expect the peak messages per day to be.

At a minimum, as I mentioned earlier:
scottwilkerson wrote:I would strongly suggest planning a proper cluster with several instances, both to share the load and to provide redundancy; SSDs will also help with writing and reading the volume of data you have.
I'll take the days with 21GB as an example, as that is what I expect during the periods when we send logs to Nagios Log Server. It says there are 33,120,821 documents.
Or is there any way to estimate?
If you provisioned the servers like I mentioned above and gave them 32GB RAM, I would guess you could get by with 2 instances if you don't need to retain very many days of active indexes, or 4 instances if you need to search over historical items regularly.
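As a rough back-of-envelope sketch using the figures posted above (the peak multiplier is an assumption, not a measured value):

Code:

  33,120,821 documents/day / 86,400 seconds/day  ≈ 383 messages/second on average
  21,000,000,000 bytes/day / 33,120,821 documents ≈ 634 bytes/document
  Assuming peaks run 2-5x the average, plan for roughly 800-2,000 messages/second

The peak rate, together with how many days of indexes need to stay active for searching, is what drives the instance count in the recommendation above.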
Former Nagios employee
Creator:
ahumandesign.com
enneagrams.com