elasticsearch log file filling up fast
Sampath.Basireddy
- Posts: 252
- Joined: Wed Dec 14, 2016 12:30 pm
elasticsearch log file filling up fast
Since yesterday, the elasticsearch log file has started filling up very fast, and as a result I am running out of space.
NLS Version: 2.1.4
Please suggest..
Re: elasticsearch log file filling up fast
What kind of messages are in the log? Please provide a copy and feel free to delete afterwards if necessary.
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
Sampath.Basireddy
- Posts: 252
- Joined: Wed Dec 14, 2016 12:30 pm
Re: elasticsearch log file filling up fast
Looks like it stopped sometime last night.
Otherwise, is it possible to stop elasticsearch logging?
Here is some text from the log which seems to be related to parsing:
Code:
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [check_interval]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:411)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:706)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:497)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:465)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:201)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NumberFormatException: For input string: "5m"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.elasticsearch.common.xcontent.support.AbstractXContentParser.longValue(AbstractXContentParser.java:145)
at org.elasticsearch.index.mapper.core.LongFieldMapper.innerParseCreateField(LongFieldMapper.java:288)
at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:239)
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:401)
... 12 more
Re: elasticsearch log file filling up fast
We'll need a bigger portion of the log to see the context. Feel free to PM me the file if needed.
Logging can be disabled by editing /etc/sysconfig/elasticsearch and commenting out line 28:
Code:
#LOG_DIR=/var/log/elasticsearch
Then restart the elasticsearch service:
Code:
service elasticsearch restart
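Rather than disabling logging outright, a middle ground is to rotate and cap the log so it cannot fill the disk. This is only a sketch: the log path below matches a default NLS/elasticsearch install, and the retention numbers are assumptions to adapt.

```shell
# Hypothetical logrotate rules; written to /tmp here for illustration,
# but they would normally be installed as /etc/logrotate.d/elasticsearch.
cat <<'EOF' > /tmp/elasticsearch.logrotate
/var/log/elasticsearch/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF

# Optional dry run to see what logrotate would do with these rules:
# logrotate -d /tmp/elasticsearch.logrotate
```

With copytruncate the log file is truncated in place, so elasticsearch does not need to be restarted after each rotation.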
Sampath.Basireddy
- Posts: 252
- Joined: Wed Dec 14, 2016 12:30 pm
Re: elasticsearch log file filling up fast
Thank You @cdienger.
I think that helps for now. The issue has stopped for now.
I PM'd you the log file. It does not have a lot of data, but when the file was filling up, similar entries were being written to it.
Thank You.
Re: elasticsearch log file filling up fast
It looks like the wrong data type is selected for one of the fields in the alert config tables. Please run the following so I can get a copy of the config and take a closer look:
Code:
curl -XPOST http://localhost:9200/nagioslogserver/_export?path=/tmp/nagioslogserver.tar.gz
Please PM me the /tmp/nagioslogserver.tar.gz that this creates.
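One way to hunt for the wrongly-typed field yourself is to dump the index mapping and look for numeric fields, since a duration string like "5m" sent into a field mapped as long produces exactly the NumberFormatException shown earlier. This is only a sketch: the mapping excerpt below is a hypothetical sample, and in practice you would pipe the output of `curl -s 'http://localhost:9200/nagioslogserver/_mapping?pretty'` into the grep instead.

```shell
# Hypothetical mapping excerpt standing in for the real output of
#   curl -s 'http://localhost:9200/nagioslogserver/_mapping?pretty'
cat <<'EOF' > /tmp/mapping_sample.json
        "check_interval" : {
          "type" : "long"
        },
        "host" : {
          "type" : "string"
        }
EOF

# List fields mapped as "long"; any of these will reject values like "5m".
grep -B1 '"type" : "long"' /tmp/mapping_sample.json
```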
Re: elasticsearch log file filling up fast
Did you disable logging or did the restart of elasticsearch fix it?
I wouldn't leave logging disabled permanently as you generally need access to the information to debug future issues.
Sampath.Basireddy
- Posts: 252
- Joined: Wed Dec 14, 2016 12:30 pm
Re: elasticsearch log file filling up fast
It stopped all of a sudden. I did not disable logging yet and restarting elasticsearch did not help either.
But it started filling up again late last evening. I will PM the log file along with the curl output.
Re: elasticsearch log file filling up fast
The latest logs show a different problem dealing with the parsing of dates in some of the incoming logs.
It's important that logs with different syslog formats use separate inputs. If NLS accepts logs from a machine with one format, it expects the same format for that input/type going forward. Are you importing VMware logs by chance? This is a common scenario that leads to issues like this and is documented in https://assets.nagios.com/downloads/nag ... Server.pdf (page 9).
The errors look like:
Code:
[2020-06-23 08:28:59,165][DEBUG][action.bulk ] [41b94e87-cf97-48ac-a5c6-ed795f9e33f2] [logstash-2020.06.23][0] failed to execute bulk item (index) index {[logstash-2020.06.23 ...
...
Caused by: org.elasticsearch.index.mapper.MapperParsingException: failed to parse date field [Jun 23 08:22:12], tried both date format [dateOptionalTime], and timestamp number with locale []
If you search for "failed to execute bulk item" in the log that was last sent, you can see in the same line the host and logsource that is causing the issue.
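The search for the offending host and logsource can be sketched like this. The log entry below is a hypothetical sample (the real entries live under /var/log/elasticsearch/, and the exact source payload layout may differ), but the idea is the same: filter for the bulk failures, then pull out the host and logsource fields.

```shell
# Hypothetical sample entry; in practice grep the real elasticsearch log.
cat <<'EOF' > /tmp/es_sample.log
[2020-06-23 08:28:59,165][DEBUG][action.bulk] [41b94e87] [logstash-2020.06.23][0] failed to execute bulk item (index) index {[logstash-2020.06.23][syslog][AXLd], source[{"message":"...","host":"10.0.0.5","logsource":"fw01"}]}
EOF

# Keep only the failing bulk items, then extract host and logsource.
grep 'failed to execute bulk item' /tmp/es_sample.log \
  | grep -o '"host":"[^"]*"\|"logsource":"[^"]*"'
```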
Sampath.Basireddy
- Posts: 252
- Joined: Wed Dec 14, 2016 12:30 pm
Re: elasticsearch log file filling up fast
No, we are not importing vmware logs.
Yes, we have different inputs for different syslog formats. And this is happening randomly, not a regular issue.
I have had elasticsearch logging disabled for the last few days. I will enable it, see if the issue is still continuing, get the source details, and verify the config.