Re: Import from log file does not show any results
Posted: Tue Jun 02, 2015 3:20 pm
by jolson
If this is on a private network without internet access, can it make any difference?
It shouldn't matter whether NLS can talk to the internet or not as long as the install went okay. Can you telnet to any port mentioned above? For instance, 5544 or 2057.
If that works, it means logstash is up and listening properly on those ports. If that test fails, it's possible that logstash is down.
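For example, a quick connectivity check from the command line (substitute your Log Server's address for HOST; localhost here is only a placeholder, and nc is an alternative if telnet isn't installed):

Code: Select all
```shell
# Check whether the Log Server input ports mentioned above are reachable.
# Substitute your Log Server's hostname or IP for HOST.
HOST=localhost
for PORT in 5544 2057; do
    if nc -z -w 2 "$HOST" "$PORT" 2>/dev/null; then
        echo "port $PORT is open"
    else
        echo "port $PORT is closed or unreachable"
    fi
done
```

A successful connection means logstash is up and listening on that port.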
Re: Import from log file does not show any results
Posted: Tue Jun 02, 2015 3:28 pm
by gsl_ops_practice
Seems to work.
Re: Import from log file does not show any results
Posted: Tue Jun 02, 2015 4:29 pm
by jolson
Please restart elasticsearch/logstash and see if that helps things along.
Code: Select all
service elasticsearch restart
service logstash restart
If you visit the Web GUI and note that it's still blank, check the time on your box. It's possible that it's logging into the past (or future).
If the date is wrong, use our timezone script to change it:
Code: Select all
cd /usr/local/nagioslogserver/scripts
./change_timezone.sh -z America/Chicago
Afterwards, be sure the date and time are correct:
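A quick sanity check (timedatectl is only present on systemd-based distributions, hence the fallback):

Code: Select all
```shell
# Print the current local date/time and timezone abbreviation; compare
# against a known-good clock such as your workstation or an NTP source.
date '+%Y-%m-%d %H:%M:%S %Z'
# On systemd-based distributions, timedatectl also reports NTP sync status:
timedatectl 2>/dev/null || true
```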
Let us know if that helps.
Re: Import from log file does not show any results
Posted: Thu Jun 04, 2015 10:51 am
by gsl_ops_practice
OK, after setting the timezones correctly on all hosts and restarting all services, it now works: I can see logs in the dashboard.
My only question is this: is there a way to have the dashboard display my log entries over time instead of as one spike at the time of import? I would like to see the entries as if they had been imported over time. I am trying to analyze an application log to look for a pattern, and with one spike there is no discernible pattern.
Thank you.
Re: Import from log file does not show any results
Posted: Thu Jun 04, 2015 11:11 am
by jolson
This is definitely possible.
Incoming logs are typically tagged with the current time (in UTC) of Nagios Log Server, and are displayed in the GUI according to that tagged time. The exception is when the 'date' is picked up by Logstash properly. Do some reading here to see what I mean:
http://www.logstash.net/docs/1.4.3/filters/date
The date filter is especially important for sorting events and for backfilling old data. If you don’t get the date correct in your event, then searching for them later will likely sort out of order.
In the absence of this filter, logstash will choose a timestamp based on the first time it sees the event (at input time), if the timestamp is not already set in the event. For example, with file input, the timestamp is set to the time of each read.
You will need to set up the 'date' filter to parse your incoming logs. To do so, your input/filter chain might look something like this (input first, filter second):
Code: Select all
syslog {
    type => 'nagiosincominglogs'
    port => 8999
}
if [type] == "nagiosincominglogs" {
    date {
        match => [ "logdate", "MMM dd yyyy HH:mm:ss",
                   "MMM  d yyyy HH:mm:ss", "ISO8601" ]
    }
}
For the above to work, your 'timestamp' field must be called 'logdate'.
Your field is very likely going to be named something different. Take a look at some logs imported already, and check for which field contains the 'timestamp' of your logs.
(Attached screenshot: 2015-06-04 11_09_47-Dashboard • Nagios Log Server - Firefox Developer Edition.png)
Assuming that your timestamp field is called 'timestamp' as mine is, we simply change 'logdate' to 'timestamp' as per above. If you do not have a timestamp field, you will need to create another filter to parse that field out appropriately - likely a grok filter. Let me know if you need further assistance with this.
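For instance, if your field really is named 'timestamp' rather than 'logdate', the date filter from the example above becomes the following (a sketch only; keep whichever match formats your logs actually use):

Code: Select all
if [type] == "nagiosincominglogs" {
    date {
        # Parse the existing 'timestamp' field instead of 'logdate';
        # list every format your logs actually contain.
        match => [ "timestamp", "MMM dd yyyy HH:mm:ss",
                   "MMM  d yyyy HH:mm:ss", "ISO8601" ]
    }
}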
Re: Import from log file does not show any results
Posted: Tue Aug 30, 2016 6:58 pm
by gsl_ops_practice
Apologies for reviving a fairly old thread, but this has become a requirement once again. When importing logs into Nagios Log Server, everything shows up as one massive peak on the graph, and the log entries are not placed on the time graph correctly.
I have googled this for a few hours and didn't come up with a way that worked. In the message below, I need the timestamp in the first square-bracket field to replace the @timestamp field in Nagios Log Server.
Here is what I have in my filter (it loads without breaking logstash):
Code: Select all
if [program] == 'nagiosincominglogs' {
    grok {
        match => [ 'message', '%{LOGLEVEL:Loglevel} \[(?<mytimestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:%{MINUTE}:%{SECOND})\]' ]
    }
    date {
        match => [ "mytimestamp", "YYYY-MM-dd HH:mm:ss" ]
    }
}
Source data sample:
Code: Select all
INFO [2016-08-30 00:02:14,942][Some source info] - INFO - JMS Message sent, queue [queue:///somequeue?someclient=1], template [default], transaction id [11111111111], correlation id [222222222222222222222], body [333333333333333333 AAA BB 11011678571650511272016083012021400221000000000000IT^000176^N^^I^AAA^BB123^20160830^000214^AAA^BB123^20160830^163000^AAA^BB123^20160830^163000^^^^^AAA^AAA^AAA^AA^001^1234567890^P^WWW^WWW^777777777 ^^P^888888888^^^^SOMEDATA^SOMEMOREDATA^999999999^F^^^^^AA^QWERTY^**EOM**]
Re: Import from log file does not show any results
Posted: Wed Aug 31, 2016 11:40 am
by mcapra
The built-in TIMESTAMP_ISO8601 pattern is a bit weird, so I had to define my own date match format when applying the date filter:
Code: Select all
if [type] == 'gsl_test' {
    grok {
        match => [ "message", "%{TIMESTAMP_ISO8601:logdate}" ]
    }
    date {
        match => [ "logdate", "YYYY-MM-dd HH:mm:ss,SSS" ]
    }
}
Be mindful of existing indices if you recycle the logdate field from my provided filter.
My event without the filter applied:
(Attached screenshot: 2016_08_31_11_35_56_Dashboard_Nagios_Log_Server.png)
My event with the filter applied:
(Attached screenshot: 2016_08_31_11_37_59_Dashboard_Nagios_Log_Server.png)
See if parsing out the individual field as I have done using the TIMESTAMP_ISO8601 pattern, then applying the date filter on the separate field, solves your use case.
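One way to sanity-check the extraction outside of Logstash is with a rough shell approximation of what TIMESTAMP_ISO8601 matches in this particular log format (this regex is a simplification for illustration, not the exact grok pattern):

Code: Select all
```shell
# Sample line from the earlier post, truncated for brevity.
line='INFO [2016-08-30 00:02:14,942][Some source info] - JMS Message sent'
# Rough approximation of the ISO8601-style timestamp in this log format:
echo "$line" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
# -> 2016-08-30 00:02:14,942
```

If the timestamp prints cleanly here, the grok pattern should have something to capture.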
Re: Import from log file does not show any results
Posted: Wed Aug 31, 2016 2:39 pm
by gsl_ops_practice
Thank you. I am now able to import old logs correctly; please consider this issue resolved.