Best way I know how to explain it.
Data is being written to an index from syslog sources for the same month and day, but the year is 2015. I have tried to close/delete the 2015 index (where the syslog events are going), but it just comes right back.
Could it be coming from the source this way?
Getting 2015-01-05 instead of 2016-01-05
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Getting 2015-01-05 instead of 2016-01-05
What is the date of Nagios Log Server? How about the remote system?
Code: Select all
date
In general, timestamps work as follows:
By default, all logs are tagged with the current time of Nagios Log Server (in UTC). Those logs are sent to the appropriate index (which would be today's index).
A date filter can be involved, and the date filter allows the timestamp to be overwritten by a timestamp from a remote host. The syslog input that you're using includes this date filter. Check the date/time of your remote servers as well - if everything looks appropriate, you may have to restart Logstash:
Code: Select all
service logstash restart
It'd also be worth checking out the Logstash logs:
Code: Select all
tail -n200 /var/log/logstash/logstash.log
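For reference, the date filter being described typically looks something like this in a Logstash config. This is only a sketch - the field name `syslog_timestamp` and the match patterns are the conventional ones from Logstash syslog examples, and the exact config shipped with Nagios Log Server may differ:

```
filter {
  date {
    # syslog timestamps ("Jan  5 09:24:59") carry no year, so the
    # filter has to infer one when it overwrites @timestamp
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```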
Re: Getting 2015-01-05 instead of 2016-01-05
Already restarted logstash and elasticsearch for the cluster.
Date is correct.
Here is a selection from the log that I think relates to this issue:
Code: Select all
{:timestamp=>"2016-01-05T09:24:59.054000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>155, :exception=>org.elasticsearch.indices.IndexMissingException: [logstash-2015.01.05] missing, :backtrace=>["org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.indexRoutingTable(org/elasticsearch/cluster/routing/operation/plain/PlainOperationRouting.java:245)", "org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.shards(org/elasticsearch/cluster/routing/operation/plain/PlainOperationRouting.java:259)", "org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.shards(org/elasticsearch/cluster/routing/operation/plain/PlainOperationRouting.java:255)", "org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.indexShards(org/elasticsearch/cluster/routing/operation/plain/PlainOperationRouting.java:70)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:242)", "org.elasticsearch.action.bulk.TransportBulkAction.doExecute(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.bulk.TransportBulkAction.doExecute(org/elasticsearch/action/bulk/TransportBulkAction.java:65)",
Re: Getting 2015-01-05 instead of 2016-01-05
This could be what we're experiencing:
https://github.com/logstash-plugins/log ... e/issues/3
I checked out a test cluster, and this was also happening for me - when there is no year present in your syslog data, the year defaults to the year in which the Logstash process was started.
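To illustrate why a year has to be inferred at all: RFC 3164 syslog timestamps simply have no year field, so any parser must fill one in. GNU `date` does the same kind of inference (it uses the current year), which is a quick way to see the problem; the sample log line below is made up:

```shell
# A typical syslog line has no year in its timestamp:
#   Jan  5 09:24:59 myhost sshd[1234]: Accepted publickey for ...
# The parser must infer one. Per the linked issue, the date filter used
# the year current when the Logstash process STARTED, not at parse time,
# so a process started in 2015 kept stamping 2015 after New Year.
date -d "Jan 5 09:24:59" +%Y   # GNU date fills in the current year
```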
This problem is in the date filter, and it has been resolved: https://github.com/logstash-plugins/log ... ate/pull/4
"date filter version 2.1.0 published with this fix."
The workaround for now is restarting the Logstash process, and I have put the fix on our roadmap. After you've restarted logstash on _every_ instance in your cluster, try deleting the old index and see if it reappears. Any luck?
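Once every instance has been restarted, the stray index can also be deleted straight through the Elasticsearch API instead of the UI. A sketch, assuming Elasticsearch is reachable on its default localhost:9200 endpoint:

```shell
# Delete the wrongly-dated index; if it stops reappearing after the
# restarts, the stale-year processes were the cause.
curl -XDELETE 'http://localhost:9200/logstash-2015.01.05'
```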
Re: Getting 2015-01-05 instead of 2016-01-05
Our Linux team mentioned that the year is missing from their logs right before I checked this thread again.
I did stop/start elasticsearch and logstash from the GUI to no effect. Should this be done from the command line? I did this there for the whole cluster.
Also, should I stop logstash on both nodes and then start them up individually?
Re: Getting 2015-01-05 instead of 2016-01-05
Yeah, let's try it out from the command line - no need to restart elasticsearch. The year-old logs could certainly have been contributing to the old index generation.
Log into both nodes and issue:
Code: Select all
service logstash stop
service logstash start
All that matters is that their last start date was not in 2015 - I'm not sure what kind of calls the GUI makes, so just to be safe I think it's a good bet to perform all of this on the command line.
If the old index _still_ reopens, I'd check which hosts are sending those old logs by adjusting the time period to 'custom' and selecting the appropriate date range. Check the hosts that your logs are arriving from and verify that the remote hosts have their time set up properly - after you have restarted logstash, I'm reasonably certain that's the only thing that could be wrong short of a bug.
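If the dashboard makes that awkward, the same question can be put to Elasticsearch directly. A sketch only - it assumes the default localhost:9200 endpoint, and the field name is a guess: `host` is the standard Logstash field, and `host.raw` is the not_analyzed variant the default Logstash index template adds (use plain `host` if your template has no `.raw` subfield):

```shell
# Count stray 2015 documents per sending host.
curl -s 'http://localhost:9200/logstash-2015.01.05/_search?pretty' -d '{
  "size": 0,
  "aggs": { "senders": { "terms": { "field": "host.raw" } } }
}'
```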
Re: Getting 2015-01-05 instead of 2016-01-05
Sorry, I forgot about this one. The workaround of restarting the logstash service took care of the issue for now.
Could always create a cron job to restart the service at 12:00:99999.....
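For what it's worth, a nightly restart like that could be sketched as a cron entry (a stopgap only - the real fix is the patched date filter; the timing and the SysV-style `service` call are assumptions):

```
# /etc/crontab line: restart Logstash shortly after midnight every day
5 0 * * * root service logstash restart
```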
Can close the thread.
Re: Getting 2015-01-05 instead of 2016-01-05
Glad to see this working. I'll close this out now. If you ever need assistance in the future, feel free to open a new thread.
Former Nagios Employee