Logstash crash on large import
Guys,
On NLS 1.3, when I use logstash-forwarder to start forwarding new logs to the NLS server and it's playing catch-up on the logfiles, the logstash process dies with the following error. If I restart it, it catches up briefly and then dies again.
Code: Select all
[root@nagiosls weveland]# Oct 15, 2015 6:51:49 PM org.elasticsearch.transport.TransportService$Adapter checkForTimeout
WARNING: [43fb704b-8fa3-4b2d-9d8e-7bffe63b7e8c] Received response for a request that has timed out, sent [13863ms] ago, timed out [815ms] ago, action [cluster:monitor/nodes/info], node [[#transport#-1][nagiosls.srs.localnet][inet[localhost/127.0.0.1:9300]]], id [63]
Oct 15, 2015 6:51:55 PM org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler doSample
INFO: [43fb704b-8fa3-4b2d-9d8e-7bffe63b7e8c] failed to get node info for [#transport#-1][nagiosls.srs.localnet][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9300]][cluster:monitor/nodes/info] request_id [63] timed out after [13048ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Error: Your application used more memory than the safety cap of 500M.
Specify -J-Xmx####m to increase it (#### = cap size in MB).
Specify -w for full OutOfMemoryError stack trace
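For reference, the forwarder side is a stock logstash-forwarder install. A minimal sketch of the kind of config in play here - the port, certificate path, and watched paths below are illustrative placeholders, not my actual settings:
Code: Select all
{
  "network": {
    "servers": [ "nagiosls.srs.localnet:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}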
Re: Logstash crash on large import
I restarted the elasticsearch and logstash processes and it appears to be running a bit longer now.
Re: Logstash crash on large import
Looks like it ran long enough to complete.
Here is a statistics image of the import process.
Re: Logstash crash on large import
How much physical memory does each of your Log Server instances have?
Re: Logstash crash on large import
1 instance - 16GB
Re: Logstash crash on large import
What you experienced is logstash falling behind, most likely due to a lack of memory. 16GB is fine for a sustained input of logs, but it's not great for importing several million logs in a short period of time. You would likely have to bump your memory up to 32GB in preparation for an import of this size. How much free memory is on your node in normal operation?
Code: Select all
free -m
Re: Logstash crash on large import
Code: Select all
             total       used       free     shared    buffers     cached
Mem:         16080      15835        245          0        274       5251
-/+ buffers/cache:      10308       5771
Swap:          255          0        255
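For what it's worth, while an import is running I can watch those numbers live with something like the following (assuming a standard procps install):
Code: Select all
watch -n 5 free -m   # refresh the memory stats every 5 seconds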
Re: Logstash crash on large import
The strange part is that it crashed on the smaller import. After the restart later on, I imported a larger set of logs without incident. Again this morning I did the same on the server's partner.
Re: Logstash crash on large import
While odd, it's not unprecedented if the following is true:
1. Logstash had been running for a while, and your memory consumption was likely very high (~200MB free).
2. You attempted to import the data mentioned, and logstash crashed due to memory errors.
3. Logstash was restarted, and then the data could be imported properly.
What likely happened is that logstash still had logs in its buffers and no more memory to consume when the initial import was attempted. I've seen this problem before, and some people have had success with the following.
Edit the logstash config:
Code: Select all
vi /etc/sysconfig/logstash
Uncomment LS_HEAP_SIZE and increase it to 1024M:
Code: Select all
LS_HEAP_SIZE="1024m"
Restart logstash:
Code: Select all
service logstash restart
In general, the above shouldn't be necessary - though it might be a good precautionary measure for you when doing a mass import of data. Something to think about - thanks!
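One extra precaution after bumping LS_HEAP_SIZE: confirm the running JVM actually picked up the new value. Assuming the init script passes LS_HEAP_SIZE through to the JVM as -Xmx (which is how the stock logstash init scripts of this era behave), something like this should print the active cap:
Code: Select all
ps -ef | grep '[l]ogstash' | grep -o 'Xmx[0-9]*[mg]'   # expect e.g. Xmx1024m if the new heap size took effect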