
Re: Multiple filters

Posted: Tue Apr 07, 2015 9:28 am
by jolson
The tcp input looks fine to me, and should work here. Are your logs being transferred via UDP? Are you sure that you have tcp/5548 open on your firewall?

Willem, I am a little confused - I do not understand where 'dcc' for the 'program' field is being set. Please see below:

Code: Select all

if [program] == "dcc" {
At first glance it looks like this is being set in the syslog-brocade filter, but since the dcc logs never hit that filter, that obviously cannot be the case.

Code: Select all

{GREEDYDATA:program}
I do not see where your 'dcc' logs are inheriting the 'program == dcc' field. Am I missing something here?

Re: Multiple filters

Posted: Tue Apr 07, 2015 2:44 pm
by WillemDH
Jesse, dcc is assigned in the syslog-f5 filter => %{SYSLOGPROG}

I don't know what's wrong with the tcp input. I was pretty sure it was actually tcp, and I'm 100% sure the port in iptables is open, as it works when set up with the syslog input...

Re: Multiple filters

Posted: Tue Apr 07, 2015 3:34 pm
by jolson
Willem,

I may have figured this out. Let's take a look at the following log, and the processes it goes through:

Code: Select all

<155>Mar 28 13:23:21 slot1/cpf_f5_1_vir_pr err dcc[9206]: 01310033:3: [SECEV] Request blocked, violations: Attack signature detected. HTTP protocol compliance sub violations: N/A. Evasion techniques sub violations: N/A. Web services security sub violations: N/A. Virus name: N/A. Support id: 8375986001652311748, source ip: 40.70.0.8, xff ip: 40.70.0.8, source port: 49949, destination ip: 40.70.1.138, destination port: 80, route_domain: 0, HTTP classifier: /Common/F5_External_1_RAMP_Policy, scheme HTTP, geographic location: , request: www.digipolis.be\r\nUser-Agent: libwww-perl/6.04\r\n>, username: , session_id:
Once grokked by your first filter, it turns into the following:

Code: Select all

{
  "loglevel": [
    [
      "err"
    ]
  ],
  "SYSLOGPROG": [
    [
      "dcc[9206]"
    ]
  ],
  "program": [
    [
      "dcc"
    ]
  ],
  "pid": [
    [
      "9206"
    ]
  ],
  "info": [
    [
      "01310033:3: [SECEV] Request blocked, violations: Attack signature detected. HTTP protocol compliance sub violations: N/A. Evasion techniques sub violations: N/A. Web services security sub violations: N/A. Virus name: N/A. Support id: 8375986001652311748, source ip: 40.70.0.8, xff ip: 40.70.0.8, source port: 49949, destination ip: 40.70.1.138, destination port: 80, route_domain: 0, HTTP classifier: /Common/F5_External_1_RAMP_Policy, scheme HTTP, geographic location: , request: www.digipolis.be\\r\\nUser-Agent: libwww-perl/6.04\\r\\n>, username: , session_id:"
    ]
  ]
}
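As an aside, the program and pid fields above fall out of %{SYSLOGPROG}, which is defined in the core grok patterns as %{PROG:program}(?:\[%{POSINT:pid}\])?. A rough Python stand-in (an approximation for illustration, not the exact grok regex) shows the split:

```python
import re

# Rough Python equivalent of grok's SYSLOGPROG pattern,
# %{PROG:program}(?:\[%{POSINT:pid}\])? -- an approximation, not the exact grok regex.
SYSLOGPROG = re.compile(r"(?P<program>[\w._/%-]+)(?:\[(?P<pid>\d+)\])?")

m = SYSLOGPROG.match("dcc[9206]")
print(m.group("program"))  # dcc
print(m.group("pid"))      # 9206
```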
After this, we are running the 'info' field through your second filter, which is defined as follows:

Code: Select all

%{GREEDYDATA:info}, source ip: %{IP:sourceip}, xff ip: %{IP:xffip}, source port: %{NUMBER:sourceport}, destination ip: %{IP:destinationip}, destination port: %{NUMBER:destinationport}, route_domain: %{NUMBER:routedomain}, HTTP classifier: %{GREEDYDATA:httpclassifier}, geographic location: , request: %{GREEDYDATA:request}, username: %{GREEDYDATA:username}, session_id: %{GREEDYDATA:sessionid}
Let's compare the very end of your log with the end of your filter.
Log:

Code: Select all

username: , session_id:
Filter:

Code: Select all

%{GREEDYDATA:username}, session_id: %{GREEDYDATA:sessionid}
Do you see what I'm seeing? ;)

It looks like there is an extra space after session_id: in your filter. That means a literal space must be present in the log or the filter will not match, and there is no such space in your log. With the space removed, your filter looks like this:

Code: Select all

%{GREEDYDATA:info}, source ip: %{IP:sourceip}, xff ip: %{IP:xffip}, source port: %{NUMBER:sourceport}, destination ip: %{IP:destinationip}, destination port: %{NUMBER:destinationport}, route_domain: %{NUMBER:routedomain}, HTTP classifier: %{GREEDYDATA:httpclassifier}, geographic location: , request: %{GREEDYDATA:request}, username: %{GREEDYDATA:username}, session_id:%{GREEDYDATA:sessionid}
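Since grok patterns compile down to regular expressions, that single literal space is enough to break the whole match. A quick Python sketch of the same mismatch (Python's re standing in for grok's regex engine):

```python
import re

log_tail = "username: , session_id:"  # the log ends right after the colon

# Pattern with the extra space after "session_id:" vs. the corrected one
with_space = re.compile(r"username: (?P<username>.*), session_id: (?P<sessionid>.*)")
without_space = re.compile(r"username: (?P<username>.*), session_id:(?P<sessionid>.*)")

print(with_space.match(log_tail))     # None: the required trailing space never appears
print(without_space.match(log_tail))  # matches, with an empty sessionid
```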
Give that a try please - let me know if it works for you. Thanks!

Re: Multiple filters

Posted: Wed Apr 08, 2015 9:43 am
by WillemDH
Jesse,

Seems like that solved part of the problem, but our dcc filter still does not get applied.. :(

Code: Select all

if [type] == "syslog-f5" {
    grok {     
      break_on_match => false
      match => [ "message", "\A%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP} slot1\/%{HOSTNAME:logsource} %{LOGLEVEL:loglevel} %{SYSLOGPROG}: %{GREEDYDATA:info}" ]
      remove_tag => "_grokparsefailure"
      add_tag => "grokked_syslog_f5"      
    }   
}

Code: Select all

if [program] == "dcc" {
    grok {   
      match => [ "info", "%{GREEDYDATA:info}, source ip: %{IP:sourceip}, xff ip: %{IP:xffip}, source port: %{NUMBER:sourceport}, destination ip: %{IP:destinationip}, destination port: %{NUMBER:destinationport}, route_domain: %{NUMBER:routedomain}, HTTP classifier: %{GREEDYDATA:httpclassifier}, geographic location: , request: %{GREEDYDATA:request}, username: %{GREEDYDATA:username}, session_id:%{GREEDYDATA:sessionid}" ]
      add_tag => "grokked_syslog_f5_dcc"
      }
}
Last example log:
01310038:2: [SECEV] Request violations: Illegal URL,Illegal file type. HTTP protocol compliance sub violations: N/A. Evasion techniques sub violations: N/A. Web services security sub violations: N/A. Virus name: N/A. Support id: 2705742410514919698, source ip: 20.10.55.39, xff ip: 20.10.55.39, source port: 50680, destination ip: 10.40.1.139, destination port: 80, route_domain: 0, HTTP classifier: /Common/POISHPPR_class, scheme HTTP, geographic location: , request: http://shp.antw.be/sites/>, username: , session_id: <8ea44a15eea17901>
tags of this log: grokked_syslog_f5,_grokparsefailure

Maybe it would be an idea to open an email support ticket for this? I truly want to understand how to make multiple filters for one source, and I've tried 100+ combinations with no luck so far..

The fact that I can't get the tcp input to work also troubles me. If you think a remote session would help to resolve both issues, let me know and I will send an email to support.

Grtz

Willem

Re: Multiple filters

Posted: Wed Apr 08, 2015 10:05 am
by jolson
Willem,

I am almost sure that part of the problem here is the syslog input we're using. The syslog input has a built-in filter that is applied before any other filters - this filter is defined as follows:

Code: Select all

"match" => { "message" => "<%{POSINT:priority}>%{SYSLOGLINE}" }
The above filter is going to mangle your logs and give us unexpected results, which is why we need to use an input that does not have a built-in filter. The syslog input sets up both a tcp and a udp listener on whatever port you define - a plain tcp or udp input should work almost exactly the same.
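A plain tcp input without that built-in filter might look like the following (a minimal sketch, assuming the tcp/5548 port mentioned earlier in this thread and the "syslog-f5" type the filters key on):

```
input {
    tcp {
        port => 5548
        type => "syslog-f5"
    }
}
```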

I think we should move this over to a ticket - feel free to email in your response to this message, reference this thread, and I'll pick up the ticket. Thanks!

Re: Multiple filters

Posted: Thu Apr 09, 2015 1:48 pm
by WillemDH
Jesse,

It seems something went wrong somewhere with the maintenance plan support subscription. I'm getting an email from xisupport-owner saying that I have no active subscription, so I sent an email to sales for more information. I also just read that for Nagios Log Server with 2 nodes, only 3 email support tickets are included. As I have already used one for earlier issues, I'm not sure it's a good idea to use email support, as I would only be left with one ticket for 9 months..

Grtz

Willem

Re: Multiple filters

Posted: Thu Apr 09, 2015 2:00 pm
by tmcdonald
I just replied to you in a PM since it's account-related and I don't like posting that publicly.

Re: Multiple filters

Posted: Fri Apr 17, 2015 7:51 am
by WillemDH
Hello,

Just posting this for everyone, as I put a lot of time into its creation. It could be useful for F5 Load Balancer users.

Main filter

Code: Select all

if [type] == "syslog-f5" {
    grok {     
      break_on_match => false
      match => [ "message", "\A%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP} slot1\/%{HOSTNAME:logsource} %{LOGLEVEL:severity_label} %{SYSLOGPROG}: %{GREEDYDATA:info}" ]
      add_tag => "grokked_syslog_f5"      
    }   
}
DCC filter

Code: Select all

if [program] == "dcc" {
        grok {          
          patterns_dir => "/usr/local/nagioslogserver/logstash/patterns"
          match => [ "info", "%{F5SEQ:f5_sequence}: %{GREEDYDATA:info}violations: %{GREEDYDATA:f5_violations}. HTTP protocol compliance sub violations: %{GREEDYDATA:f5_http_violations}. Evasion techniques sub violations: %{GREEDYDATA:f5_evasion_violations}. Web services security sub violations: %{GREEDYDATA:f5_web_svc_violations}. Virus name: %{GREEDYDATA:f5_virusname}. Support id: %{GREEDYDATA:f5_supportid}, source ip: %{IPNA:f5_sourceip}, xff ip: %{IPNA:f5_xffip}, source port: %{NUMBER:f5_sourceport}, destination ip: %{IPNA:f5_destinationip}, destination port: %{NUMBER:f5_destinationport}, route_domain: %{NUMBER:f5_routedomain}, HTTP classifier: %{GREEDYDATA:f5_http_classifier}, scheme %{SCHEME:f5_scheme}, geographic location:%{GREEDYDATA:f5_geolocation}, request: %{GREEDYDATA:f5_request}, username:%{GREEDYDATA:f5_username}, session_id: %{GREEDYDATA:f5_sessionid}" ]          
          match => [ "info", "%{GREEDYDATA:info}" ]
          remove_tag => "grokked_syslog_f5"
          add_tag => "grokked_syslog_f5_dcc"
          overwrite => [ "info" ]
          }
    }
In order for the above to work you will also need the following grok patterns:

Code: Select all

HOSTNAMEUND \b(?:[_0-9A-Za-z][_0-9A-Za-z-]{0,62})(?:\.(?:[_0-9A-Za-z][_0-9A-Za-z-]{0,62}))*(\.?|\b)
IPNA (?:%{IPV6}|%{IPV4}|N\/A)
SCHEME (HTTPS?)
F5SEQ ([0-9]*:[0-9])
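As a sanity check, the intent of the custom IPNA pattern above (an IP address or the literal N/A) can be sketched with a simplified Python regex - this is an approximation for illustration, not the exact grok IPV4/IPV6 patterns:

```python
import re

# Simplified stand-in for IPNA (?:%{IPV6}|%{IPV4}|N\/A);
# the IPv4 branch here is loose and the IPv6 branch is omitted for brevity.
IPNA = re.compile(r"(?:\d{1,3}(?:\.\d{1,3}){3}|N/A)")

for value in ("40.70.0.8", "N/A", "not-an-ip"):
    print(value, bool(IPNA.fullmatch(value)))
```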
For Nagios support: it's all working relatively well now, but I'm having one small issue where the f5 logs assign differently formatted syslog levels to the severity_label field. Would I need a 'conditional alter' in order to change the value of the field:

err to error
info to informational
crit to critical

etc

Code: Select all

  alter {
    condrewriteother => [ 
         "field_name", "expected_value", "field_name_to_change", "value",
         "field_name2", "expected_value2", "field_name_to_change2", "value2",
         ....
    ]
  }
Is alter supported by NLS?

Grtz

Willem

Re: Multiple filters

Posted: Fri Apr 17, 2015 9:25 am
by jolson
For Nagios support: it's all working relatively well now, but I'm having one small issue where the f5 logs assign differently formatted syslog levels to the severity_label field. Would I need a 'conditional alter' in order to change the value of the field:

err to error
info to informational
crit to critical

etc

alter {
condrewriteother => [
"field_name", "expected_value", "field_name_to_change", "value",
"field_name2", "expected_value2", "field_name_to_change2", "value2",
....
]
}



Is alter supported by NLS?
What do the different formats look like? This could be something that mutate is capable of handling: http://logstash.net/docs/1.4.2/filters/mutate

There should be no problems installing the contrib package to gain access to the alter filter:

Code: Select all

/usr/local/nagioslogserver/logstash/bin/plugin install contrib

Re: Multiple filters

Posted: Thu May 14, 2015 10:22 am
by WillemDH
Jesse,

I did

Code: Select all

/usr/local/nagioslogserver/logstash/bin/plugin install contrib
on both nodes. It did not really give any confirmation that it was installed or anything..

What do you mean with:
What do the different formats look like?
About the mutate, I guess I'll have to use gsub?

Code: Select all

filter {
  mutate {
    gsub => [
      # replace all forward slashes with underscore
      "fieldname", "/", "_",

      # replace backslashes, question marks, hashes, and minuses with
      # dot
      "fieldname2", "[\\?#-]", "."
    ]
  }
}
I'll do some tests.

EDIT: I can confirm this works 100%:

Code: Select all

    mutate {
        gsub => [ 
            "severity_label", "err", "error",
            "severity_label", "info", "informational"
        ]
    }  
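One caveat worth noting: gsub does substring replacement, so a severity_label that already reads "error" would have its "err" rewritten again, yielding "erroror". Since gsub patterns are regular expressions, anchoring them sidesteps this (a sketch; the crit mapping is taken from the earlier question and is untested here):

```
    mutate {
        gsub => [
            # anchored so values that are already full words are left alone
            "severity_label", "^err$", "error",
            "severity_label", "^info$", "informational",
            "severity_label", "^crit$", "critical"
        ]
    }
```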
Grtz

Willem