Hey Jesse,
Thanks for the input. I'm already using
http://grokconstructor.appspot.com/, which I think is a little better than the Heroku grok debugger.
Another question:
If I make a grok filter whose match would catch 75% of the syslog events for the type the filter applies to, what happens to the other 25% of the logs that don't match?
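As far as I understand it, events that don't match the grok pattern are not dropped: they are still indexed, just with the _grokparsefailure tag and without the extracted fields. If you want to handle the leftovers explicitly, you can check for that tag in a conditional. A sketch (the fallback tag name is my own invention, not anything from your config):
Code:
if [type] == "syslog-brocade" {
    grok {
        match => { "message" => "...your pattern..." }
    }
    if "_grokparsefailure" in [tags] {
        # Events the pattern missed end up here; you could try a looser
        # fallback pattern, or just tag them so they're easy to search for.
        mutate { add_tag => "brocade_unparsed" }
    }
}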
I've been working for several hours to get this grok filter for my Brocade switches right. I've learned a lot, but I would really like to find out why I keep getting the _grokparsefailure tag. So you're saying that if I can find the correct filter, Nagios Log Server will stop tagging the logs with _grokparsefailure?
Some example logs:
Code:
<188>mrt 18 15:59:14 10.54.22.160 raslogd: 2015/03/18-14:59:14, [TS-1001], 442, WWN 10:00:00:05:1e:8f:54:8c | FID 128, WARNING, DGSG_FSENC02_SANSWB01, NTP Query failed: 256.
Timestamp => 2015-03-18T14:59:14.254Z
Code:
<190>mrt 18 10:14:38 10.41.37.172 raslogd: 2015/03/18-09:14:38, [SNMP-1005], 116, WWN 10:00:00:05:1e:8f:54:8a | FID 128, INFO, CPF_FSENC02_SANSWB01, SNMP configuration attribute, SNMPv3 Trap Recipient Port 1, has changed from 1162 to 162.
Timestamp => 2015-03-18T09:14:38.465Z
The filter I'm using at the moment (with an edited HOSTNAME in the grok patterns file):
Code:
if [type] == "syslog-brocade" {
    grok {
        match => { "message" => "<[\d]+>[a-z]+ [\d]+ [\d\:]+ %{IPV4:logsource}%{GREEDYDATA:program}: %{YEAR}\/%{MONTHNUM}\/%{MONTHDAY}-%{TIME}%{GREEDYDATA}WWN %{IPV6:wwn}%{GREEDYDATA}%{LOGLEVEL}\, %{HOSTNAME:hostname}" }
        add_tag => "grokked"
    }
}
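For what it's worth, one likely cause of the _grokparsefailure is the missing space between %{IPV4:logsource} and %{GREEDYDATA:program} (and similar spots), which makes the greedy patterns swallow neighbouring fields. A pattern along these lines might match the two sample lines above; it's an untested sketch against only those samples, the field names are my own choice, and NOTSPACE is used for the switch name so it doesn't depend on an edited HOSTNAME pattern:
Code:
if [type] == "syslog-brocade" {
    grok {
        match => { "message" => "<%{POSINT:syslog_pri}>%{DATA:syslog_timestamp} %{IPV4:logsource} %{PROG:program}: %{YEAR}/%{MONTHNUM}/%{MONTHDAY}-%{TIME:raslog_time}, \[%{DATA:event_id}\], %{INT:sequence}, WWN %{NOTSPACE:wwn} \| FID %{INT:fid}, %{LOGLEVEL:loglevel}, %{NOTSPACE:switch_name}, %{GREEDYDATA:raslog_message}" }
        add_tag => "grokked"
    }
}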
Apart from the _grokparsefailure issue, I have another issue with the syslog messages from our Brocade switches: the hour seems to be one hour off. I added the timestamps created by NLS under the log examples above. Syslog messages from sources that do get parsed automatically have the correct timestamp. Do I have to specify a timezone in my filter or something?
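On the timestamp question: if the time in the message is local time, the date filter has to be told so, otherwise it is typically interpreted as UTC, which produces exactly this kind of fixed offset. Something like this after the grok might help; this is a sketch, where "syslog_timestamp" is a hypothetical field captured by your grok, and the timezone and locale are guesses based on the Dutch "mrt" month abbreviation:
Code:
date {
    match    => [ "syslog_timestamp", "MMM dd HH:mm:ss" ]
    timezone => "Europe/Amsterdam"
    locale   => "nl"   # so "mrt" (Dutch abbreviation for March) can be parsed
}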
Greetings, and thanks for helping me with this.
Willem