
Logstash continuously crashing..

Posted: Tue Apr 04, 2017 11:30 am
by uma K
My Logstash is continuously crashing. I have also increased the heap size from 256m to 1024m. Please help.

Apr 04, 2017 9:26:44 AM org.elasticsearch.transport.netty.MessageChannelHandler messageReceived
WARNING: [abd0aca5-8cbf-4f11-988e-be0d778f5f95] Message not fully read (response) for [59] handler org.elasticsearch.action.TransportActionNodeProxy$1@69a1dd11, error [false], resetting
Exception in thread "Ruby-0-Thread-35: /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:406" org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.bulk.BulkResponse]
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(org/elasticsearch/transport/netty/MessageChannelHandler.java:155)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(org/elasticsearch/transport/netty/MessageChannelHandler.java:130)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(org/elasticsearch/common/netty/channel/SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(org/elasticsearch/common/netty/channel/DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(org/elasticsearch/common/netty/channel/DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(org/elasticsearch/common/netty/channel/Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(org/elasticsearch/common/netty/handler/codec/frame/FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(org/elasticsearch/common/netty/handler/codec/frame/FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(org/elasticsearch/common/netty/handler/codec/frame/FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(org/elasticsearch/common/netty/channel/SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(org/elasticsearch/common/netty/channel/DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(org/elasticsearch/common/netty/channel/DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(org/elasticsearch/common/netty/channel/Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(org/elasticsearch/common/netty/channel/Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(org/elasticsearch/common/netty/channel/socket/nio/NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(org/elasticsearch/common/netty/channel/socket/nio/AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(org/elasticsearch/common/netty/channel/socket/nio/AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(org/elasticsearch/common/netty/channel/socket/nio/AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(org/elasticsearch/common/netty/channel/socket/nio/NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(org/elasticsearch/common/netty/util/ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(org/elasticsearch/common/netty/util/internal/DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:745)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOfRange(Arrays.java:2694)
at java.lang.String.<init>(String.java:203)
at org.apache.lucene.util.CharsRef.toString(CharsRef.java:210)
at org.apache.lucene.util.CharsRefBuilder.toString(CharsRefBuilder.java:162)
at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:286)
at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
at org.elasticsearch.action.bulk.BulkItemResponse.readFrom(BulkItemResponse.java:268)
at org.elasticsearch.action.bulk.BulkItemResponse.readBulkItem(BulkItemResponse.java:243)
at org.elasticsearch.action.bulk.BulkResponse.readFrom(BulkResponse.java:106)
at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:153)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:130)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
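
For context, the root cause at the bottom of that trace, java.lang.OutOfMemoryError: GC overhead limit exceeded, means the JVM was spending nearly all of its time in garbage collection while reclaiming almost nothing, so the 1024m heap is still being exhausted. A minimal sketch for watching GC pressure on the running Logstash JVM (jstat ships with the JDK and may be absent on a JRE-only install):

Code: Select all

# Print GC utilization of the Logstash JVM every 5 seconds; watch for
# old-gen usage (O) pinned near 100% and a fast-climbing full-GC count (FGC)
jstat -gcutil $(pgrep -f logstash | head -1) 5000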

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 11:36 am
by mcapra
Those look like Elasticsearch log messages. Can you share the full contents of both the /var/log/logstash and /var/log/elasticsearch directories? This command should collect them into /tmp/43224_1.zip:

Code: Select all

(zip -r /tmp/43224_1.zip /var/log/logstash) && (zip -r /tmp/43224_1.zip /var/log/elasticsearch)
My initial hunch is that this machine simply needs more memory.
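
For reference, a common rule of thumb is to give the Elasticsearch heap up to half of the machine's RAM and leave the remainder to the OS. A minimal sketch of checking the current heap settings; the sysconfig file locations are assumptions and may sit elsewhere on a Nagios Log Server install:

Code: Select all

# Show the configured Java heap for both services (file paths are
# assumptions; Nagios Log Server may keep these settings elsewhere)
grep -i HEAP /etc/sysconfig/elasticsearch /etc/sysconfig/logstash 2>/dev/null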

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 11:41 am
by uma K
Please find the logs attached.

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 3:51 pm
by uma K
Can you assist with this?

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 4:06 pm
by mcapra
Can you share the outputs of the following commands executed from the CLI of your Nagios Log Server machine:

Code: Select all

service elasticsearch restart
df -h
free -m
tail -n 50 /var/log/elasticsearch/*.log

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 4:21 pm
by uma K

Code: Select all

service elasticsearch restart
Stopping elasticsearch:                                    [  OK  ]
Starting elasticsearch:                                    [  OK  ]

 df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                 99G  2.6G   95G   3% /
devtmpfs              3.9G  156K  3.9G   1% /dev
tmpfs                 4.0G     0  4.0G   0% /dev/shm
/dev/sda1              99G  2.6G   95G   3% /
/dev/mapper/vg_app-lv_app
                      197G   50G  148G  26% /app


free -m
             total       used       free     shared    buffers     cached
Mem:          8001       6721       1279          0        274       1549
-/+ buffers/cache:       4896       3104
Swap:          255          5        250

 tail -n 50 /var/log/elasticsearch/*.log
==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5_index_indexing_slowlog.log <==

==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5_index_search_slowlog.log <==

==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5.log <==
[2017-04-04 14:17:37,216][DEBUG][action.index             ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-04-04 14:17:41,807][DEBUG][action.index             ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-04-04 14:18:41,877][DEBUG][action.index             ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-04-04 14:19:41,911][DEBUG][action.index             ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2017-04-04 14:20:08,529][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] stopping ...
[2017-04-04 14:20:08,562][WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
        at org.elasticsearch.common.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
        at org.elasticsearch.common.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
        at org.elasticsearch.common.netty.channel.Channels.write(Channels.java:725)
        at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
        at org.elasticsearch.common.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
        at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
        at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:195)
        at org.elasticsearch.rest.action.support.RestActionListener.onFailure(RestActionListener.java:60)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase.finishAsFailed(TransportShardReplicationOperationAction.java:536)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$PrimaryPhase$3.onClusterServiceClose(TransportShardReplicationOperationAction.java:509)
        at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:217)
        at org.elasticsearch.cluster.service.InternalClusterService.doStop(InternalClusterService.java:174)
        at org.elasticsearch.common.component.AbstractLifecycleComponent.stop(AbstractLifecycleComponent.java:105)
        at org.elasticsearch.node.internal.InternalNode.stop(InternalNode.java:307)
        at org.elasticsearch.node.internal.InternalNode.close(InternalNode.java:331)
        at org.elasticsearch.bootstrap.Bootstrap$1.run(Bootstrap.java:82)
[2017-04-04 14:20:09,327][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] stopped
[2017-04-04 14:20:09,327][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] closing ...
[2017-04-04 14:20:09,409][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] closed
[2017-04-04 14:20:12,604][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] version[1.6.0], pid[29225], build[cdd3ac4/2015-06-09T13:36:34Z]
[2017-04-04 14:20:12,604][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] initializing ...
[2017-04-04 14:20:12,752][INFO ][plugins                  ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] loaded [knapsack-1.5.2.0-f340ad1], sites []
[2017-04-04 14:20:12,932][INFO ][env                      ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] using [1] data paths, mounts [[/app (/dev/mapper/vg_app-lv_app)]], net usable_space [147.6gb], net total_space [196.8gb], types [ext4]
[2017-04-04 14:20:26,815][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] initialized
[2017-04-04 14:20:26,816][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] starting ...
[2017-04-04 14:20:27,015][INFO ][transport                ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/136.133.236.12:9300]}
[2017-04-04 14:20:27,028][INFO ][discovery                ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] 9837e558-ecbc-40b0-87a6-344382e520c5/NjtmV7x6Q-awETOHgQAKig
[2017-04-04 14:20:30,155][INFO ][cluster.service          ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] detected_master [a40fda6a-2269-44c8-9c95-77eaf5a865dd][FioO8FFfR66WbqxZZR8MBQ][X1LOGW02.mnao.net][inet[/136.133.238.46:9300]]{max_local_storage_nodes=1}, added {[a40fda6a-2269-44c8-9c95-77eaf5a865dd][FioO8FFfR66WbqxZZR8MBQ][X1LOGW02.mnao.net][inet[/136.133.238.46:9300]]{max_local_storage_nodes=1},}, reason: zen-disco-receive(from master [[a40fda6a-2269-44c8-9c95-77eaf5a865dd][FioO8FFfR66WbqxZZR8MBQ][X1LOGW02.mnao.net][inet[/136.133.238.46:9300]]{max_local_storage_nodes=1}])
[2017-04-04 14:20:30,431][INFO ][http                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2017-04-04 14:20:30,431][INFO ][node                     ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] started
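
For context, the tail above shows the node restarting cleanly and rejoining the cluster master at 136.133.238.46. A quick membership check from this node (the _cat API is standard on Elasticsearch 1.x):

Code: Select all

# List the nodes this instance currently sees in the cluster
curl -s 'localhost:9200/_cat/nodes?v'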

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 4:31 pm
by mcapra
Can you share the output of the following commands executed from the CLI of your Nagios Log Server machine:

Code: Select all

curl -XGET localhost:9200/_nodes/jvm?pretty
curl 'localhost:9200/_cluster/health?level=indices&pretty'
And afterwards, a fresh output of:

Code: Select all

tail -n 50 /var/log/elasticsearch/*.log
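
For reference, the first things to check in those outputs are the cluster status (green/yellow/red) and the heap ceiling each node reports. A quick way to pull just those fields with plain grep:

Code: Select all

# Overall and per-index health status
curl -s 'localhost:9200/_cluster/health?level=indices&pretty' | grep '"status"'

# Maximum heap each node in the cluster reports
curl -s 'localhost:9200/_nodes/jvm?pretty' | grep heap_max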

Re: Logstash continuously crashing..

Posted: Tue Apr 04, 2017 4:58 pm
by uma K
Please find the output attached.
I have 4 instances, but only 2 are showing up here.


tail -n 50 /var/log/elasticsearch/*.log
==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5_index_indexing_slowlog.log <==

==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5_index_search_slowlog.log <==

==> /var/log/elasticsearch/9837e558-ecbc-40b0-87a6-344382e520c5.log <==
at org.elasticsearch.common.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2017-04-04 14:20:41,616][DEBUG][action.search.type ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] All shards failed for phase: [query_fetch]
org.elasticsearch.index.IndexShardMissingException: [nagioslogserver][0] missing
at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:210)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:548)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:532)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:373)
at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:333)
at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:330)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2017-04-04 14:20:41,630][DEBUG][action.search.type ] [abd0aca5-8cbf-4f11-988e-be0d778f5f95] All shards failed for phase: [query_fetch]
org.elasticsearch.index.IndexShardMissingException: [nagioslogserver][0] missing
at org.elasticsearch.index.IndexService.shardSafe(IndexService.java:210)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:548)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:532)
at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:373)
at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:333)
at org.elasticsearch.search.action.SearchServiceTransportAction$11.call(SearchServiceTransportAction.java:330)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
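
For context, the IndexShardMissingException entries mean searches were hitting the [nagioslogserver] index before its shard was allocated on this node, which is common right after a restart. A quick way to see whether any shards are still unassigned (the _cat API is standard on Elasticsearch 1.x):

Code: Select all

# List every shard that is not in the STARTED state
curl -s 'localhost:9200/_cat/shards?v' | grep -v STARTED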

Re: Logstash continuously crashing..

Posted: Wed Apr 05, 2017 11:28 am
by mcapra
uma K wrote: I have 4 instances, but only 2 are showing up here.
Can you show me the Elasticsearch logs from the 2 machines that are missing? This should collect them into /tmp/43224_2.zip on each machine:

Code: Select all

(zip -r /tmp/43224_2.zip /var/log/elasticsearch)
Also share the output of this command from the CLI of the 2 missing machines:

Code: Select all

curl -XGET localhost:9200/_nodes/jvm?pretty
I do see 2 different JVM minor versions already, which might be part of the problem.
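
For reference, a quick way to confirm the mismatch from any one node, plus a local check to run on each machine (vm_version is part of the standard _nodes/jvm output):

Code: Select all

# JVM version each cluster node reports
curl -s 'localhost:9200/_nodes/jvm?pretty' | grep -E '"(version|vm_version)"'

# Local JVM on each machine
java -version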

Re: Logstash continuously crashing..

Posted: Wed Apr 05, 2017 1:33 pm
by uma K
Please find the attachments.