
elasticsearch log file filling up fast

Posted: Tue Jun 16, 2020 3:42 pm
by Sampath.Basireddy
Since yesterday, the elasticsearch log file has been filling up very fast, and as a result I am running out of disk space.

NLS Version: 2.1.4

Please advise.

Re: elasticsearch log file filling up fast

Posted: Wed Jun 17, 2020 10:20 am
by cdienger
What kind of messages are in the log? Please provide a copy; feel free to delete it afterwards if necessary.

Re: elasticsearch log file filling up fast

Posted: Wed Jun 17, 2020 1:04 pm
by Sampath.Basireddy
Looks like it stopped sometime last night.

Here is some text from the log which seems to be related to parsing:

Code:

org.elasticsearch.index.mapper.MapperParsingException: failed to parse [check_interval]
	at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:411)
	at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:706)
	at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:497)
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:544)
	at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:493)
	at org.elasticsearch.index.shard.IndexShard.prepareCreate(IndexShard.java:465)
	at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:201)
	at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.performOnPrimary(TransportShardReplicationOperationAction.java:574)
	at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$1.doRun(TransportShardReplicationOperationAction.java:440)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:36)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NumberFormatException: For input string: "5m"
	at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
	at java.lang.Long.parseLong(Long.java:589)
	at java.lang.Long.parseLong(Long.java:631)
	at org.elasticsearch.common.xcontent.support.AbstractXContentParser.longValue(AbstractXContentParser.java:145)
	at org.elasticsearch.index.mapper.core.LongFieldMapper.innerParseCreateField(LongFieldMapper.java:288)
	at org.elasticsearch.index.mapper.core.NumberFieldMapper.parseCreateField(NumberFieldMapper.java:239)
	at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:401)
	... 12 more
Otherwise, is it possible to stop elasticsearch logging?
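The root cause is the NumberFormatException at the bottom of the trace: Elasticsearch has mapped check_interval as a long, but the value being indexed is a duration string such as "5m" (presumably 5 minutes), which Long.parseLong cannot read. A minimal shell sketch of the same mismatch (hypothetical values, not NLS code):

```shell
# "5m" is a duration string, not a plain integer, so a strict numeric
# check fails -- the same way Long.parseLong("5m") throws in the trace.
value="5m"
if printf '%s' "$value" | grep -Eq '^[0-9]+$'; then
    echo "parses as long: $value"
else
    echo "not a plain long: $value"
fi
```

Running this prints "not a plain long: 5m", which mirrors why the mapper rejects the document.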

Re: elasticsearch log file filling up fast

Posted: Wed Jun 17, 2020 4:24 pm
by cdienger
We'll need a bigger portion of the log to see the context. Feel free to PM me the file if needed.

Logging can be disabled by editing /etc/sysconfig/elasticsearch and commenting out line 28:

Code:

#LOG_DIR=/var/log/elasticsearch
Then restart the elasticsearch service:

Code:

service elasticsearch restart

Re: elasticsearch log file filling up fast

Posted: Thu Jun 18, 2020 9:16 am
by Sampath.Basireddy
Thank you, @cdienger.

I think that helps for now; the issue has stopped.

I PM'd you the log file. It does not have a lot of data, but when the file was filling up, similar kinds of messages were being written to it.

Thank you.

Re: elasticsearch log file filling up fast

Posted: Thu Jun 18, 2020 4:34 pm
by cdienger
It looks like the wrong data type is selected for one of the fields in the alert config tables. Please run the following so I can get a copy of the config and take a closer look:

Code:

curl -XPOST http://localhost:9200/nagioslogserver/_export?path=/tmp/nagioslogserver.tar.gz
Please PM me the /tmp/nagioslogserver.tar.gz that this creates.

Re: elasticsearch log file filling up fast

Posted: Thu Jun 18, 2020 4:42 pm
by ssax
Did you disable logging or did the restart of elasticsearch fix it?

I wouldn't leave logging disabled permanently, as you generally need access to the information to debug future issues.

Re: elasticsearch log file filling up fast

Posted: Tue Jun 23, 2020 8:36 am
by Sampath.Basireddy
It stopped all of a sudden. I had not disabled logging yet, and restarting elasticsearch did not help either.

But it started filling up again late last evening. I will PM you the log file along with the curl output.

Re: elasticsearch log file filling up fast

Posted: Tue Jun 23, 2020 3:20 pm
by cdienger
The latest logs show a different problem dealing with the parsing of dates in some of the incoming logs.

It's important that logs with different syslog formats use separate inputs. If NLS accepts logs from a machine in one format, it expects the same format for that input/type going forward. Are you importing vmware logs by chance? This is a common scenario that would lead to issues like this, and it is documented in https://assets.nagios.com/downloads/nag ... Server.pdf (page 9).

The errors look like:
[2020-06-23 08:28:59,165][DEBUG][action.bulk ] [41b94e87-cf97-48ac-a5c6-ed795f9e33f2] [logstash-2020.06.23][0] failed to execute bulk item (index) index {[logstash-2020.06.23 ...
...
Caused by: org.elasticsearch.index.mapper.MapperParsingException: failed to parse date field [Jun 23 08:22:12], tried both date format [dateOptionalTime], and timestamp number with locale []
If you search for "failed to execute bulk item" in the log you last sent, the same line shows the host and logsource causing the issue.
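As a concrete sketch of that search, using the abridged error line above as sample data (in practice, grep your real log under /var/log/elasticsearch/ instead):

```shell
# Write an abridged copy of the error line to a scratch file; this
# stands in for the real elasticsearch log.
cat > /tmp/es_sample.log <<'EOF'
[2020-06-23 08:28:59,165][DEBUG][action.bulk] [41b94e87-cf97-48ac-a5c6-ed795f9e33f2] [logstash-2020.06.23][0] failed to execute bulk item (index) index {[logstash-2020.06.23 ...
EOF

# The matching line carries the index and shard; in the full
# (untruncated) message it also includes the host and logsource
# fields of the offending event.
grep "failed to execute bulk item" /tmp/es_sample.log
```

Each matching line points you at the source whose date format is being rejected.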

Re: elasticsearch log file filling up fast

Posted: Mon Jul 06, 2020 5:11 pm
by Sampath.Basireddy
No, we are not importing vmware logs.

Yes, we have different inputs for different syslog formats. And this is happening randomly; it is not a regular issue.

I disabled elasticsearch logging a few days ago. I will re-enable it, see if the issue is still continuing, get the source details, and verify the config.