Logstash crash on large import
Posted: Thu Oct 15, 2015 6:01 pm
Guys,
On NLS 1.3, when I use logstash-forwarder to start forwarding new logs to the NLS server and it's playing catch-up on the log files, the logstash process dies with the following error. If I restart it, it catches up briefly and then dies again.
[root@nagiosls weveland]# Oct 15, 2015 6:51:49 PM org.elasticsearch.transport.TransportService$Adapter checkForTimeout
WARNING: [43fb704b-8fa3-4b2d-9d8e-7bffe63b7e8c] Received response for a request that has timed out, sent [13863ms] ago, timed out [815ms] ago, action [cluster:monitor/nodes/info], node [[#transport#-1][nagiosls.srs.localnet][inet[localhost/127.0.0.1:9300]]], id [63]
Oct 15, 2015 6:51:55 PM org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler doSample
INFO: [43fb704b-8fa3-4b2d-9d8e-7bffe63b7e8c] failed to get node info for [#transport#-1][nagiosls.srs.localnet][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] request_id [63] timed out after [13048ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Error: Your application used more memory than the safety cap of 500M.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
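The error message suggests raising the JVM heap cap with -J-Xmx. A minimal sketch of one way to do that, assuming a sysconfig-based install where the init script reads LS_HEAP_SIZE (the path and exact variable may differ on your system, so this is an assumption, not a confirmed fix):

```shell
# Assumption: logstash's init script sources /etc/sysconfig/logstash
# and honors LS_HEAP_SIZE. Raise the heap from the 500M default to 1 GB.
echo 'LS_HEAP_SIZE="1024m"' >> /etc/sysconfig/logstash
service logstash restart
```

If the import still exhausts memory after raising the heap, throttling the forwarder or batching the backlog may also be worth considering.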