
Grok filter for Solaris syslogd not working

Posted: Mon Jan 11, 2016 5:40 am
by batzos
The input to Nagios Log Server is syslogd output from Solaris 10, and I suspect that is what causes the tag: ["_grokparsefailure_sysloginput"],
meaning the format is unknown to the filter that tags the data. An example of a message I got from Solaris syslog is:
"message": "<37>Dec 10 12:41:07 sshd[1820]: [ID 800047 auth.notice] Failed keyboard-interactive for root from 10.10.15.14 port 50785 ssh2"

No fields are recognised, and the fields that do get set are wrong: severity and facility are both 0, even though the priority here is 37, which decodes to severity 5 and facility 4.
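For reference, the syslog PRI value encodes both numbers as priority = facility * 8 + severity, so the decoding can be checked with a couple of lines of plain Python (nothing Nagios-specific here):

```python
# Decode a syslog PRI value (RFC 3164): PRI = facility * 8 + severity.
def decode_pri(pri):
    return pri // 8, pri % 8

facility, severity = decode_pri(37)  # -> 4 (auth), 5 (notice)
```

The same arithmetic on the second sample below, <31>, gives facility 3 (daemon) and severity 7 (debug), which matches the daemon.debug tag inside that message.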

What I can do is filter the message myself so that some fields are recognised.

The filter I have set is the following, but it does not seem to work. Can you check it?

In the Global Configuration I have set the following input:

syslog {
    type => 'Solaris_syslog'
    port => 514
}

And as a filter:

if [type] == 'Solaris_syslog' {
    grok {
        match => [ 'message', '<%{POSINT:solsyslog_pri}>%{SYSLOGTIMESTAMP:solsyslog_timestamp} %{SYSLOGHOST:solsyslog_hostname} %{DATA:solsyslog_program}(?:\[%{POSINT:solsyslog_pid}\])?: %{GREEDYDATA:solsyslog_message}' ]
    }
}
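A quick way to see why this pattern can never match is to approximate it with a plain Python regex (a sketch: grok's SYSLOGHOST is more permissive than the [\w.-]+ class used here, but the conclusion is the same, because the sample line has no hostname between the timestamp and the program name):

```python
import re

# Simplified Python approximation of the grok pattern above.
pat = re.compile(
    r"<(?P<pri>\d+)>"                                      # %{POSINT}
    r"(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "   # %{SYSLOGTIMESTAMP}
    r"(?P<hostname>[\w.-]+) "                              # %{SYSLOGHOST} + space
    r"(?P<program>.*?)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)"
)
sample = ("<37>Dec 10 12:41:07 sshd[1820]: [ID 800047 auth.notice] "
          "Failed keyboard-interactive for root from 10.10.15.14 port 50785 ssh2")
print(pat.match(sample))  # None: nothing can satisfy the hostname-plus-space part
```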


Below is an example of a message I receive, so that you can see the format of the logs sent by syslog:

Raw:

{
  "_index": "logstash-2015.12.15",
  "_type": "Solaris_syslog",
  "_id": "AVGmXgp0j1dZHMLPbLrz",
  "_score": null,
  "_source": {
    "message": "<31>Dec 15 16:59:19 SC[,SUNW.Event,cl53ux-rg,cl53ux-crnp-daemon,cl_apid]: [ID 507193 daemon.debug] Queued event 13599605",
    "@version": "1",
    "@timestamp": "2015-12-15T15:59:19.659Z",
    "type": "Solaris_syslog",
    "host": "10.20.24.27",
    "tags": [
      "_grokparsefailure_sysloginput"
    ],
    "priority": 0,
    "severity": 0,
    "facility": 0,
    "facility_label": "kernel",
    "severity_label": "Emergency"
  },
  "sort": [
    1450195159659,
    1450195159659
  ]
}

Re: Grok filter for Solaris syslogd not working

Posted: Mon Jan 11, 2016 12:51 pm
by hsmith
I don't see a hostname in there, unless you obfuscated it so other people wouldn't see it. I got a filter like this one to match:

Code: Select all

<%{POSINT:solsyslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{GREEDYDATA:Service}: %{GREEDYDATA:message}
The information that comes back from the grok debugger I use looks like so:

Code: Select all

{
  "solsyslog_pri": [
    [
      "37"
    ]
  ],
  "syslog_timestamp": [
    [
      "Dec 10 12:41:07"
    ]
  ],
  "MONTH": [
    [
      "Dec"
    ]
  ],
  "MONTHDAY": [
    [
      "10"
    ]
  ],
  "TIME": [
    [
      "12:41:07"
    ]
  ],
  "HOUR": [
    [
      "12"
    ]
  ],
  "MINUTE": [
    [
      "41"
    ]
  ],
  "SECOND": [
    [
      "07"
    ]
  ],
  "Service": [
    [
      "sshd[1820]"
    ]
  ],
  "GREEDYDATA": [
    [
      "[ID 800047 auth.notice] Failed keyboard-interactive for root from 10.10.15.14 port 50785"
    ]
  ]
}
We could split it up further if you like, but hopefully this helps a bit.
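For anyone without a grok debugger handy, roughly the same check can be done with an equivalent Python regex (an approximation: GREEDYDATA becomes .* and the timestamp class is simplified; group names mirror the grok field names):

```python
import re

# Python approximation of the grok pattern above.
pat = re.compile(
    r"<(?P<solsyslog_pri>\d+)>"
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<Service>.*): "
    r"(?P<message>.*)"
)
sample = ("<37>Dec 10 12:41:07 sshd[1820]: [ID 800047 auth.notice] "
          "Failed keyboard-interactive for root from 10.10.15.14 port 50785 ssh2")
m = pat.match(sample)
service = m.group("Service")   # sshd[1820]
text = m.group("message")      # the "[ID ...]" part onwards
```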

Re: Grok filter for Solaris syslogd not working

Posted: Thu Jan 14, 2016 8:41 am
by batzos
Thank you for your reply. I set the filter you proposed:

Code: Select all

<%{POSINT:solsyslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{GREEDYDATA:Service}: %{GREEDYDATA:message}
but still no change (_grokparsefailure_sysloginput).

Is the grok filter integrated automatically by the NLS installation, or do we have to install the plugin manually? If so, how?
I also checked, and there is no such file or directory at:

Code: Select all

nagioslogserver/logstash/patterns/grok-patterns
Additionally, since the beginning of the trial I have seen the following in the system status:
Core Services
Search engine (elasticsearch) is stopped
Log collector (logstash) is stopped

and I cannot restart them.
I do not know whether this is relevant.

Re: Grok filter for Solaris syslogd not working

Posted: Thu Jan 14, 2016 11:21 am
by hsmith
Can I ask for some clarification on what happens when you try to restart them?

From the command line:

Code: Select all

service logstash start
service elasticsearch start
If either of those commands returns an error, please post it here for me to review.

A couple of other things:

Did you apply the configuration after you put my filter in place?
The Grok filter is integrated, and you should never have to touch it.

Re: Grok filter for Solaris syslogd not working

Posted: Fri Jan 15, 2016 11:04 am
by batzos
I have already tried restarting these services from the CLI, but for logstash I get the following:
[root@eicillp095 ~]# mon: [FAILED]
For elasticsearch I get no output at all.

I applied the configuration after I put the filter in place.

Re: Grok filter for Solaris syslogd not working

Posted: Fri Jan 15, 2016 1:45 pm
by hsmith
Can we see the output of these commands?

Code: Select all

tail /var/log/logstash/logstash.log
service elasticsearch status
service logstash status
cat /etc/*release*
getenforce
free -m
df -h
df -ih

Re: Grok filter for Solaris syslogd not working

Posted: Mon Jan 18, 2016 2:44 am
by batzos
The output is the following:

Code: Select all

tail /var/log/logstash/logstash.log
no results; from the CLI:
[root@hostname ~]# tail /var/log/logstash/logstash.log
[root@hostname ~]#

service elasticsearch status
elasticsearch (pid  2480) is running...

service logstash status
Logstash Daemon (pid  26469) is running...


cat /etc/*release*
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
cat: /etc/lsb-release.d: Is a directory
Red Hat Enterprise Linux Server release 6.7 (Santiago)
Red Hat Enterprise Linux Server release 6.7 (Santiago)
cpe:/o:redhat:enterprise_linux:6server:ga:server


getenforce
Disabled

free -m
           total       used       free     shared    buffers     cached
Mem:         15947      15624        322          0        187       5010
-/+ buffers/cache:      10426       5520
Swap:         1023         39        984


df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg01-rootvol
                      252G   19G  221G   8% /
tmpfs                 7.8G     0  7.8G   0% /dev/shm
/dev/sda1             248M   76M  160M  33% /boot


df -ih
Filesystem           Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg01-rootvol
                        16M  179K   16M    2% /
tmpfs                  2.0M     1  2.0M    1% /dev/shm
/dev/sda1               64K    45   64K    1% /boot

Re: Grok filter for Solaris syslogd not working

Posted: Mon Jan 18, 2016 5:52 pm
by jolson
For your Solaris logs, try changing your input from a syslog input to a tcp or udp input (depending on which protocol your syslog traffic uses).

Using Holden's filter, I generated something like this:
(attached screenshot: 2016-01-18 16_49_34-Instance Configuration • Nagios Log Server.png)
Give that a try and let us know, thanks!

For your convenience:

input:

Code: Select all

tcp {
   port => 2222
   type => 'solaris-logs'
}
filter:

Code: Select all

if [type] == 'solaris-logs' {
    grok {
        match => [ 'message', '<%{POSINT:solsyslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{GREEDYDATA:Service}: %{GREEDYDATA:message}' ]
    }
}

Re: Grok filter for Solaris syslogd not working

Posted: Tue Jan 19, 2016 11:22 am
by batzos
Unfortunately, we tried to define a custom destination port in syslogd on Solaris 10, but it does not accept one; we have to stick with the default destination port 514 over UDP. I deactivated the previous input and created a new one under a new name:

Code: Select all

udp {
    type => 'solaris-logs'
    port => 514
}
using udp instead of syslog. I also deactivated the previous Solaris filter and added the one you suggested under a new name.
I also checked iptables:

Code: Select all

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:2057
2    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:2056
3    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:5544
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:3515
5    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpts:9300:9400
6    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:443
7    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:80

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
and even though 514 was not in the list, I could still receive the logs. After this I added some ports, including 514:

Code: Select all

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:5544
2    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:5644
3    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:5644
4    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           state NEW udp dpt:514
5    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:514
6    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:2057
7    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:2056
8    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:5544
9    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:3515
10   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpts:9300:9400
11   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:443
12   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:80

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
Still nothing has changed. How long does it take for a new configuration to take effect? Is it applied when the new snapshot is created?
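As a sanity check while waiting, a hand-crafted test message can be sent to the udp input from any machine that can reach the Log Server (a sketch; 127.0.0.1 is a placeholder for the Log Server's real address):

```python
import socket

# Send one Solaris-style syslog line over UDP to the Log Server.
HOST, PORT = "127.0.0.1", 514  # placeholder address; port from the udp input
line = b"<37>Dec 10 12:41:07 sshd[1820]: [ID 800047 auth.notice] test message"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(line, (HOST, PORT))
sock.close()
```

If that line shows up in the dashboard with type solaris-logs, the input and firewall rules are fine and only the filter needs attention.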

Re: Grok filter for Solaris syslogd not working

Posted: Tue Jan 19, 2016 11:45 am
by jolson
Still nothing changed. How long does it take to switch to the new configuration? Is it implemented when the new snapshot is created?
Typically configs will apply within 5 minutes. Can you show me a screenshot of your Global Configuration page with your Solaris input/filter expanded, please?

I'd also like to see the following:

Code: Select all

cat /usr/local/nagioslogserver/logstash/etc/conf.d/*