Replacing fields with mutate filter

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
Locked
Inova
Posts: 6
Joined: Tue Dec 09, 2014 9:16 am

Replacing fields with mutate filter

Post by Inova »

Hello,

I have an issue when trying to replace some fields in NLS. Here is an example of the log I am trying to parse:

Code: Select all

[Server:server-four] \u001b[0m\u001b[0m14:41:16,096 INFO  [org.jboss.as.webservices] (ServerService Thread Pool -- 30) JBAS015537: Activating WebServices Extension\u001b[0m
Here is the filter configuration for processing these logs (note that my syslog sends them to NLS with the tag 'console'):

Code: Select all

if [program] == 'console' {
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:server_name}\] (?:(\\u001b\[%{INT}m)+)%{TIME} %{LOGLEVEL:level} %{GREEDYDATA:msg}']
        named_captures_only => true
    }
    mutate {
        replace => [ 'type', '%{server_name}' ]
        replace => [ 'severity_label', '%{level}']
    }
}
I already tested the grok pattern on this site: https://grokdebug.herokuapp.com and everything seems to be correct.
The issue is that the mutate filter replaces both the 'type' and 'severity_label' fields with the literal strings '%{server_name}' and '%{level}' instead of their values (which should be 'server-four' and 'INFO' respectively).
I tried many different configurations and nothing works.

I hope somebody can help me with this. I'm pretty sure it's just a small mistake in the configuration, but I'm not able to find it.

Thanks in advance for your help.
bdgoecke
Posts: 36
Joined: Wed Oct 22, 2014 3:41 pm

Re: Replacing fields with mutate filter

Post by bdgoecke »

Could you go into your log server and grab and post either a screenshot of the detail of a record, or the JSON from a message that should have been matched?

Thanks.
Inova
Posts: 6
Joined: Tue Dec 09, 2014 9:16 am

Re: Replacing fields with mutate filter

Post by Inova »

Hi,

Thanks for your reply. Please find two screenshots from NLS showing both the JSON and table views of an event that should match.
json.png
table.png
Note that there are some special characters (console color codes) added by JBoss.

Thanks in advance.
bdgoecke
Posts: 36
Joined: Wed Oct 22, 2014 3:41 pm

Re: Replacing fields with mutate filter

Post by bdgoecke »

Try this,

Code: Select all

if [program] == 'console' {
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:server_name}\] (?:(\\u001b\[%{INT}m)+)%{TIME} %{LOGLEVEL:level} %{GREEDYDATA:msg}']
        named_captures_only => true
        replace => [ 'type', '%{server_name}' ]
        replace => [ 'severity_label', '%{level}' ]
    }
}
I don't think it is the complete answer, but see if it will set the fields the way you want.

==>brian.
Inova
Posts: 6
Joined: Tue Dec 09, 2014 9:16 am

Re: Replacing fields with mutate filter

Post by Inova »

Hi Brian,

Thanks for your reply. Unfortunately, it didn't work; I still have the same output. Here is the JSON output:

Code: Select all

{
  "_index": "logstash-2014.12.19",
  "_type": "%{server_name}",
  "_id": "oMX6jv8dTUWYmi21qEwdMw",
  "_score": null,
  "_source": {
    "message": "[Server:server-four] \u001b[33m\u001b[0m\u001b[33m09:54:10,005 WARN  [org.jgroups.protocols.TCP] (INT-2,server-four-31061()) JGRP000031: server-four-31061(): dropping unicast message to wrong destination 27564896-92c6-0d23-56c7-50474c87d79e()\u001b[0m",
    "@version": "1",
    "@timestamp": "2014-12-19T08:54:14.000Z",
    "type": "%{server_name}",
    "host": "172.25.0.13",
    "priority": 133,
    "timestamp": "Dec 19 09:54:14",
    "logsource": "INOV3-1-APV-02",
    "program": "console",
    "severity": 5,
    "facility": 16,
    "facility_label": "local0",
    "severity_label": "%{level}",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1418979254000,
    1418979254000
  ]
}
Note that this time the "highlight" part is not present in the JSON, unlike in the screenshot in my previous message.

Do you have any other ideas?


I also get a configuration error in Logstash, but I think that's "normal" with the filter you gave ;)

Code: Select all

{:timestamp=>"2014-12-19T09:54:10.555000+0100", :message=>"Using milestone 1 input plugin 'syslog'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-12-19T09:54:10.633000+0100", :message=>"Using milestone 2 input plugin 'tcp'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.2/plugin-milestones", :level=>:warn}
{:timestamp=>"2014-12-19T09:54:10.698000+0100", :message=>"Unknown setting 'replace' for grok", :level=>:error}
Error: Something is wrong with your configuration.
Many thanks,
Quentin
bdgoecke
Posts: 36
Joined: Wed Oct 22, 2014 3:41 pm

Re: Replacing fields with mutate filter

Post by bdgoecke »

Okay, I didn't look closely enough. Try with an "overwrite".

Rename the fields in the "match", and then add the overwrite (as I have done below).

Code: Select all

if [program] == 'console' {
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:type}\] (?:(\\u001b\[%{INT}m)+)%{TIME} %{LOGLEVEL:severity_label} %{GREEDYDATA:msg}']
        named_captures_only => true
        overwrite => [ 'type' ]
        overwrite => [ 'severity_label' ]
    }
}
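
For what it's worth, grok's overwrite option accepts an array, so both fields can be listed in a single setting instead of specifying it twice; an equivalent sketch:

Code: Select all

if [program] == 'console' {
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:type}\] (?:(\\u001b\[%{INT}m)+)%{TIME} %{LOGLEVEL:severity_label} %{GREEDYDATA:msg}']
        named_captures_only => true
        overwrite => [ 'type', 'severity_label' ]
    }
}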
Inova
Posts: 6
Joined: Tue Dec 09, 2014 9:16 am

Re: Replacing fields with mutate filter

Post by Inova »

Hi,

Thanks a lot for your reply. Here the JSON view of the result :

Code: Select all

{
  "_index": "logstash-2014.12.22",
  "_type": "syslog",
  "_id": "4VD0yGeLQu2ikSfNrnBUhQ",
  "_score": null,
  "_source": {
    "message": "[Server:server-four] \u001b[0m\u001b[0m09:32:13,827 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-3) JBAS010400: Bound data source [java:/comp/env/IBPCachestoreDataSource]\u001b[0m",
    "@version": "1",
    "@timestamp": "2014-12-22T08:32:15.000Z",
    "type": "syslog",
    "host": "172.25.0.13",
    "priority": 133,
    "timestamp": "Dec 22 09:32:15",
    "logsource": "INOV3-1-APV-02",
    "program": "console",
    "severity": 5,
    "facility": 16,
    "facility_label": "local0",
    "severity_label": "Notice",
    "tags": [
      "_grokparsefailure"
    ]
  },
  "sort": [
    1419237135000,
    1419237135000
  ]
}
I really don't understand what's wrong; the filter you gave should work, but it doesn't in my Nagios Log Server.
Do you think it could be an issue with the match instruction, since the "_grokparsefailure" tag is always in the output?
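
If the match is failing, that would also explain the literal '%{server_name}' from the earlier attempt: when grok does not match, the captured fields are never created, and an unresolvable sprintf reference such as '%{server_name}' is left in place as literal text. One way to keep unmatched events untouched is to guard the mutate on the failure tag (a sketch, assuming the default tag_on_failure):

Code: Select all

if [program] == 'console' {
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:server_name}\] (?:(\\u001b\[%{INT}m)+)%{TIME} %{LOGLEVEL:level} %{GREEDYDATA:msg}']
        named_captures_only => true
    }
    # Only rewrite the fields when grok actually matched; on failure the
    # event is tagged '_grokparsefailure' and server_name/level never exist.
    if "_grokparsefailure" not in [tags] {
        mutate {
            replace => [ 'type', '%{server_name}' ]
            replace => [ 'severity_label', '%{level}' ]
        }
    }
}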

Thanks in advance,
Quentin
Inova
Posts: 6
Joined: Tue Dec 09, 2014 9:16 am

Re: Replacing fields with mutate filter

Post by Inova »

Hi again,

Everything works correctly now. I rewrote the filter and removed this part:

Code: Select all

(?:(\\u001b\[%{INT}m)+)%{TIME}
which seems to be wrong, or not correctly understood by NLS, even though it worked well in https://grokdebug.herokuapp.com/.
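
An alternative that might keep the timestamp capture, assuming the escape character really arrives as a raw ESC (0x1b) byte in the message field, is to strip the color codes with mutate's gsub before the grok runs (an untested sketch):

Code: Select all

if [program] == 'console' {
    # Remove ANSI color sequences such as ESC[0m / ESC[33m from the raw line.
    mutate {
        gsub => [ 'message', '\e\[\d+m', '' ]
    }
    grok {
        match => [ 'message', '\[%{WORD}:%{USERNAME:type}\] %{TIME} %{LOGLEVEL:severity_label} %{GREEDYDATA:msg}']
        overwrite => [ 'type', 'severity_label' ]
    }
}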

Anyway, thanks a lot for the time you spent and your help !
Quentin
lgroschen
Posts: 384
Joined: Wed Nov 27, 2013 1:17 pm

Re: Replacing fields with mutate filter

Post by lgroschen »

I have also had issues with regex patterns in the herokuapp debugger. Some patterns that work there fail when grok parses them in practice; it could be due to escaping differences, but I'm not 100% sure. I'll lock this topic, but feel free to share how things are going or ask questions in a new one.
/Luke