No Logs Appearing - Logstash Failing to Parse Date

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
Locked
NCATmax
Posts: 24
Joined: Mon Jan 14, 2019 10:22 am

No Logs Appearing - Logstash Failing to Parse Date

Post by NCATmax »

Hello,

This is a continuation of a previous thread I created. I sent the system profile twice and did not get a response.

In any case, the issue I am having is that log files from a few Linux servers and from a Palo Alto firewall are not showing up in Nagios Log Server.

After further investigation, I found errors for these devices in the Logstash log file. I believe the actual error is:

Code: Select all

:response=>{"create"=>{"_index"=>"logstash-2019.12.13", "_type"=>"syslog", "_id"=>"AW8Ab04im8e-JsUH61c5", "status"=>400, "error"=>"MapperParsingException[failed to parse [timestamp8601]]; nested: MapperParsingException[failed to parse date field [2019-12-13 18:04:52.81], tried both date format [dateOptionalTime], and timestamp number with locale []]; nested: IllegalArgumentException[Invalid format: \"2019-12-13 18:04:52.81\" is malformed at \" 18:04:52.81\"]; "}}
My Inputs configuration is very simple:

Code: Select all

syslog {
  port => 514
  type => 'syslog'
}

syslog {
    port => 20514
    type => 'syslog'
    tags => 'Linux-Max'
}
I have noticed that when I remove the type => 'syslog' line from the second input, the log files do start appearing.

One thing that was pointed out in the last thread is the use of UTC times. The Linux servers and the Nagios Log Server are all using EST. However, within a single entry in the logstash log file, I see the actual timestamp of the event, timestamps that are 5 hours ahead of the event (which is UTC, so that makes sense), and timestamps that are 10 hours ahead of the event. (I wonder if the UTC conversion is being applied a second time?)

I have attached two files, both containing entries from the logstash log file. One file contains two entries sent by Linux systems and the second file contains an entry sent by the Palo Alto firewall. The errors are not identical, but they all have to do with an unrecognized date format.


I would be more than happy to provide any additional information. Please let me know what is needed.


Many thanks.
cdienger
Support Tech
Posts: 5045
Joined: Tue Feb 07, 2017 11:26 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by cdienger »

Logs coming in on the same input need to use the same formatting so that parsing works properly. The format the input expects is somewhat flexible initially, but once the first message comes in, the format is set (until the next day's index is created). For example, the syslog input expects all input to follow RFC 3164, which can send a message like:

Code: Select all

<0>1990 Oct 22 10:52:01 TZ-6 scapegoat.dmz.example.org 10.1.2.3 sched[0]: That's All Folks!
If a message with a different date format then comes in ([2019-12-13 18:04:52.81]), you'll see an error logged like the one you posted.

The fix is to make sure that all devices use the same format or configure another input for these devices. For example:

Code: Select all

syslog {
    port => 20515
    type => 'alternative-syslog'
    tags => 'alternative Linux-Max'
}
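If changing the devices isn't practical, another option might be a date filter that declares the incoming format and the source timezone explicitly. This is only a sketch - the field name timestamp8601 is taken from the error above, and the pattern and timezone would need to match your environment:

Code: Select all

filter {
  date {
    # declare the non-RFC 3164 format seen in the error message
    match => [ "timestamp8601", "yyyy-MM-dd HH:mm:ss.SS" ]
    # tell Logstash the source timestamps are Eastern time,
    # so the UTC conversion is only applied once
    timezone => "America/New_York"
  }
}
Declaring the timezone may also be relevant to the 10-hours-ahead timestamps mentioned earlier, if the offset is being applied twice.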
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
NCATmax
Posts: 24
Joined: Mon Jan 14, 2019 10:22 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by NCATmax »

This sounds like the problem. I will make some changes and report back. Could I request that this thread be left open for a couple of business days?

Only Linux servers send logs to the second input listed above, and rsyslog on those servers is configured the same. If I don't make any changes to the configuration, will the logs start being accepted again, once the new index is created?

Rsyslog is configured according to the "Add New Log Source" section.
NCATmax
Posts: 24
Joined: Mon Jan 14, 2019 10:22 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by NCATmax »

I also have a more general question. Right now, I am sending logs from some Linux servers, a Palo Alto firewall, and some VMware hosts. It seems that every log source is a little different. What are some best practices for categorizing the different sources?

Is it best practice to send every "type" of log (e.g. all Linux logs, all VMware logs) to a different input? Otherwise, is there a way to differentiate between log types on the same input? Each type of log will be used by a different group of people.

For example, I decided to point the Linux servers at port 20514 so I could tag them. That way, I could easily look at logs only from Linux servers, and I could easily apply custom filters, etc. to just the Linux logs.
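For instance, I imagine a filter scoped to that tag, something like this (just a sketch - the added field is made up):

Code: Select all

filter {
  if "Linux-Max" in [tags] {
    # hypothetical example: mark these events for the Linux team
    mutate { add_field => { "team" => "linux-admins" } }
  }
}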
cdienger
Support Tech
Posts: 5045
Joined: Tue Feb 07, 2017 11:26 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by cdienger »

Yes, we'll keep it open and wait for your update.

If you don't make any changes to the config, you'll likely continue to see issues along the lines of one input working but not the other (since both are setting the same value for the type field).

I'd recommend setting up an input for each group of devices - one input for the Linux servers, one for the firewall, and one for the VMware hosts. Make sure the type field is unique for each input.

https://assets.nagios.com/downloads/nag ... Server.pdf has an example of how to set the type field for specific hosts (in cases where different log formats need to come in on the same input).
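Following that recommendation, the three inputs might look something like this (the port numbers and type values here are only examples - use whatever is free and meaningful in your environment):

Code: Select all

syslog {
    port => 20514
    type => 'linux-syslog'
    tags => 'Linux-Max'
}

syslog {
    port => 20515
    type => 'paloalto-syslog'
}

syslog {
    port => 20516
    type => 'esxi-syslog'
}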
NCATmax
Posts: 24
Joined: Mon Jan 14, 2019 10:22 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by NCATmax »

I think I understand now. I need to use a unique "type" field for each type of device that I am monitoring.

So this appears to mean that each type of device (Linux servers, firewalls, ESXi, etc.) will use not only a separate input, but a separate type and port as well.

I did read that documentation page previously. My concern was that I may not always know every device that is sending logs to an input, but it could work if I did. I also see that the "type" that is used is "syslog-esxi", and I now understand why.

I will go and try to get each type of device onto its own input.

I appreciate your help.
cdienger
Support Tech
Posts: 5045
Joined: Tue Feb 07, 2017 11:26 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by cdienger »

Glad to help!
NCATmax
Posts: 24
Joined: Mon Jan 14, 2019 10:22 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by NCATmax »

Hello, here is the update:

Everything looks good now! All logs that I expect to see are now showing up.

I gave each type of source (Linux servers, the Palo Alto firewall, VMware hosts) a different "type" field. To assign a type field to only the logs it belongs to, I moved each type of source to a separate input and port. This also makes filtering easier, because filters can be set up per type.

Thank you for the assistance, I believe this problem has been solved.
cdienger
Support Tech
Posts: 5045
Joined: Tue Feb 07, 2017 11:26 am

Re: No Logs Appearing - Logstash Failing to Parse Date

Post by cdienger »

Glad to hear! We'll lock the thread now.