Daily indexes are rolling over now around 6 pm
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Daily indexes are rolling over now around 6 pm
I was going over my index status when I noticed at 6:00 pm EST that I had a new index dated for tomorrow, 8-27.
I then dug into it and noticed that new indexes "had" been created at 10:00 PM every day, except for the past two days, when they were created at 6.
I can't figure out why this is happening, other than it being tied to the backup cmd subsystem job. I did notice that for some reason it was set to 16:00 hours, but that is 4 pm.
I also noticed that the log (dated today, 8-26) was a dead-even 1 GB. Not sure if that means anything.
I am worried about this because these are supposed to be daily indexes. Right now the index dated the 26th stopped at 6 pm, and all the rest of the events will be in the 8-27 index.
Here are some screenshots. Let me know why this would be happening around 10, then 6, instead of closer to midnight, and why it would create an index dated for the next day.
I already checked the time of the server and it is correct. I have not edited any Elasticsearch config files. This is a big concern for me and I hope we can get this figured out.
Here is the directory with the indexes. You can see the date time stamps.
Here are the cmd subsystem jobs. I did change the backup time to 23:59:59 after I discovered the issue.
Screenshot of Index from LogServer GUI.
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Daily indexes are rolling over now around 6 pm
I would like to add... We did test sending in syslog data from a central syslog server on Tuesday the 25th. It did send old events, which led to the creation of an index dated Feb 8, 2015.
That does not explain the 10 pm index creation on the previous days (which should be midnight), but it might be related to the change to 6 pm over the last couple of days.
Also, an index is supposed to hold whatever comes in that day, correct? No size limitations?
Here is a screenshot:
Re: Daily indexes are rolling over now around 6 pm
Also, an index is supposed to hold whatever comes in that day, correct? No size limitations?
That is correct.
As for the issue, this certainly *has* to be a date problem. I have a theory about what is happening.
When logstash sends events over to elasticsearch, elasticsearch will take in those events using the current time (in UTC) of the server. Elasticsearch and Logstash will by default always use UTC.
This explains why your logs have been rotating at 20:00 Eastern: 20:00 EDT = 00:00 UTC.
That is to say, this is perfectly normal behavior. The only thing that looks out of place is the two logs that were generated at 16:00 - did you do anything particular to make these logs generate, or did they generate on their own?
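Since US Eastern time in late August is EDT (UTC-4), the rollover arithmetic can be sketched quickly. This is an illustrative Python snippet only; the `logstash-YYYY.MM.dd` index name used here is the stock Logstash default and may not match Nagios Log Server's actual naming:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# An event arriving at 8:00 PM Eastern (EDT = UTC-4) on Aug 26...
local = datetime(2015, 8, 26, 20, 0, tzinfo=ZoneInfo("America/New_York"))

# ...gets its @timestamp from the current time in UTC...
utc = local.astimezone(timezone.utc)

# ...so the daily index is named for the UTC date, which is already Aug 27.
index = utc.strftime("logstash-%Y.%m.%d")
print(utc.isoformat(), index)
```

Anything stamped after 8 PM Eastern therefore falls on the next UTC calendar day, which is exactly the "index dated for tomorrow" behavior described above.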
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Daily indexes are rolling over now around 6 pm
I didn't do anything. I just noticed this when we were doing a big syslog test from a central syslog server and I looked at the index the next day and noticed the above.
The problem I have with the 10 pm time is that if someone requests a restore of a date and the last two hours are missing, that could be a potential issue.
Re: Daily indexes are rolling over now around 6 pm
Certainly - however, we do not recommend ever switching Elasticsearch/Logstash from UTC to your local timezone - this has the potential to create a lot of problems.
By default, Nagios Log Server will translate the times displayed in the Web GUI from UTC to your localtime - this should ensure that you have exactly the last two days' worth of logs available if you were to select 'last 2 days'.
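That display-time translation is just a timezone conversion; a rough sketch of the idea (illustrative Python, not the actual GUI code):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A document stored with a UTC @timestamp of midnight on Aug 27...
stored = datetime(2015, 8, 27, 0, 0, tzinfo=timezone.utc)

# ...is rendered in the GUI in your local zone, i.e. 8:00 PM on Aug 26 Eastern,
# which is why queries like 'last 2 days' still line up with your local clock.
shown = stored.astimezone(ZoneInfo("America/New_York"))
print(shown.strftime("%Y-%m-%d %H:%M %Z"))
```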
How do your index timestamps look now?
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Daily indexes are rolling over now around 6 pm
Sorry, I didn't clarify that "last two days" comment.
We are going to move backups to slow storage after 30 days and remove them from the index.
My concern is that someone will ask for a restore from a certain date and the last two hours of that day will be missing.
Re: Daily indexes are rolling over now around 6 pm
I can give you the best description of how timestamping works in Nagios Log Server, to my knowledge.
1. When logs come in, they enter Logstash. Logstash will assign the @timestamp field according to the current time in UTC. This is the data that is entered into Elasticsearch, which is in turn the data that generates your index. The first log of the day (in UTC time) will generate the appropriate index.
2. If a log comes in and you have the date filter set, the date filter will override the UTC timestamp and will assign a timestamp however you've decided to apply the date filter.
The only way that your indices would generate at different times (20:00 being correct, 16:00 being incorrect) is if Logstash received a log and ran the 'date' filter against it. This could also happen if the timezone of your instances was changed four hours backward.
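Point 2 would also account for the back-dated February indices from the syslog replay: when the date filter adopts the timestamp embedded in an old event, the event is indexed under that old date. A hypothetical sketch (Python, with a made-up syslog timestamp, assuming the default logstash-YYYY.MM.dd naming):

```python
from datetime import datetime, timezone

# A replayed syslog line carries an old embedded timestamp (no year, per RFC 3164).
raw = "Feb  8 03:15:42"

# A date filter would use this embedded time as @timestamp instead of arrival time...
parsed = datetime.strptime(raw, "%b %d %H:%M:%S").replace(year=2015, tzinfo=timezone.utc)

# ...so the event lands in a back-dated daily index rather than today's.
index = parsed.strftime("logstash-%Y.%m.%d")
print(index)
```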
Let's double check to ensure that the date of Nagios Log Server is set properly:
Code: Select all
date
cd /usr/local/nagioslogserver/scripts
./change_timezone.sh -z America/Chicago
Replace 'America/Chicago' with your appropriate timezone. Performing the above procedure will cause httpd and logstash to restart. Ensure that you perform the above procedure on all of your instances of Nagios Log Server.
This should not be a concern as long as indices are being generated at the expected times. The difference between 20:00 and 16:00 is what I find concerning, and that needs to be resolved. Indices should always begin generating at the same time unless the 'date' filter is set explicitly.
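On the restore worry specifically: because the indices are cut on UTC boundaries, one local calendar day always spans two UTC-dated indices, so restoring a full Eastern day means pulling both. A sketch of the mapping (illustrative Python; index names again assume the default logstash-YYYY.MM.dd pattern):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def indices_for_local_day(day, tz="America/New_York"):
    """Return the UTC-dated daily indices covering one full local calendar day."""
    start = datetime.strptime(day, "%Y-%m-%d").replace(tzinfo=ZoneInfo(tz))
    end = start + timedelta(days=1)
    names = set()
    t = start
    while t < end:
        names.add(t.astimezone(timezone.utc).strftime("logstash-%Y.%m.%d"))
        t += timedelta(hours=1)
    return sorted(names)

# All of Aug 26 Eastern needs both indices: events after 8 PM EDT sit in the 8-27 one.
print(indices_for_local_day("2015-08-26"))
```

Nothing is missing from the data itself; the evening hours of each local day are simply stored under the next UTC date.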
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Daily indexes are rolling over now around 6 pm
Here are the date results. They show the appropriate time and date.
It is still generating new indices at 16:00 hours. Could this be because older syslogs were sent in during the test? I did notice two more indexes were created for dates in February.
Code: Select all
[nagios@servera indices]$ date
Tue Sep 1 14:52:43 EDT 2015
Re: Daily indexes are rolling over now around 6 pm
It's possible. I would like to see the current active inputs/filters/outputs on your Nagios Log Server instance(s). Please run the following on each instance of yours:
Code: Select all
cat /usr/local/nagioslogserver/logstash/etc/conf.d/*
I would also like to know what types of hosts you have reporting to Nagios Log Server (Windows, Linux, etc.), and I'd like to see the configurations you've defined for those hosts as well. Feel free to just say that you're using the default configurations if you haven't customized them.
Thank you!
-
krobertson71
- Posts: 444
- Joined: Tue Feb 11, 2014 10:16 pm
Re: Daily indexes are rolling over now around 6 pm
Here you go:
Code: Select all
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 19 Aug 2015 16:58:09 -0400
#
#
# Global inputs
#
input {
    syslog {
        type => 'syslog'
        port => 5544
    }
    tcp {
        type => 'eventlog'
        port => 3515
        codec => json {
            charset => 'CP1252'
        }
    }
    tcp {
        type => 'import_raw'
        tags => 'import_raw'
        port => 2056
    }
    tcp {
        type => 'import_json'
        tags => 'import_json'
        port => 2057
        codec => json
    }
}
#
# Local inputs
#
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 19 Aug 2015 16:58:09 -0400
#
#
# Global filters
#
filter {
    if [program] == 'apache_access' {
        grok {
            match => [ 'message', '%{COMBINEDAPACHELOG}']
        }
        date {
            match => [ 'timestamp', 'dd/MMM/yyyy:HH:mm:ss Z' ]
        }
        mutate {
            replace => [ 'type', 'apache_access' ]
            convert => [ 'bytes', 'integer' ]
            convert => [ 'response', 'integer' ]
        }
    }
    if [program] == 'apache_error' {
        grok {
            match => [ 'message', '\[(?<timestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[%{WORD:class}\] \[%{WORD:originator} %{IP:clientip}\] %{GREEDYDATA:errmsg}']
        }
        mutate {
            replace => [ 'type', 'apache_error' ]
        }
    }
    if [type] == "eventlog" {
        if [EventID] == 256
        or [EventID] == 258
        or [EventID] == 7036
        {
            drop { }
        }
    }
    if [program] == 'AssetCore' {
        grok {
            match => [ 'message', '%{DATESTAMP:timestamp} %{WORD:sub_process} *%{WORD:error_code} %{GREEDYDATA:message}' ]
            overwrite => [ "message" ]
        }
    }
}
#
# Local filters
#
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 19 Aug 2015 16:58:09 -0400
#
#
# Required output for Nagios Log Server
#
output {
    elasticsearch {
        cluster => '907e60a9-dc29-411e-96e8-2dfe503e0867'
        host => 'localhost'
        index_type => '%{type}'
        node_name => 'b2733b10-233a-4593-9428-85145cd54c77'
        protocol => 'transport'
        workers => 4
    }
}
#
# Global outputs
#
#
# Local outputs
#