Logstash has stopped receiving logs, even though the process is still running and its status is green.
I found the following error:
log4j, [2015-03-26T02:43:14.037] WARN: org.elasticsearch.transport.netty: [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] Message not fully read (response) for [26] handler org.elasticsearch.action.TransportActionNodeProxy$1@103170a8, error [false], resetting
After I restart Elasticsearch and Logstash, it works again.
logstash stopped
pccwglobalit
- Posts: 105
- Joined: Wed Mar 11, 2015 9:00 pm
Re: logstash stopped
Before the restart, I found the following log in Elasticsearch.
3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b is localhost:
[2015-03-26 02:40:48,553][WARN ][transport.netty ] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] exception caught on transport layer [[id: 0x6fd28609]], closing connection
java.net.NoRouteToHostException: No route to host
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:150)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-03-26 02:41:12,518][WARN ][discovery ] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] waited for 30s and no initial state was set by the discovery
[2015-03-26 02:41:12,533][INFO ][http ] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2015-03-26 02:41:12,534][INFO ][node ] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] started
[2015-03-26 02:41:16,233][DEBUG][action.admin.cluster.state] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] no known master node, scheduling a retry
[2015-03-26 02:41:20,158][DEBUG][action.admin.indices.create] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] no known master node, scheduling a retry
[2015-03-26 02:41:42,878][INFO ][cluster.service ] [3a9a3cab-0971-4b03-b2d4-d4f2ea4d0b5b] detected_master [181841a1-d717-437c-bd36-6d4a8344abe6][27i9IWLSSzuIYss8GpQ-tA][nls1-hht5.it.pccwglobal.com][inet[/192.168.78.10:9300]]{max_local_storage_nodes=1}, added {[33c658b5-db74-480c-bb71-32c430f77b00][DwWIPf3bS5C4VZbFDAMQrQ][nls3-tmh2.it.pccwglobal.com][inet[/192.168.191.10:9300]]{max_local_storage_nodes=1},[181841a1-d717-437c-bd36-6d4a8344abe6][27i9IWLSSzuIYss8GpQ-tA][nls1-hht5.it.pccwglobal.com][inet[/192.168.78.10:9300]]{max_local_storage_nodes=1},[41ae2858-4896-4233-8049-1912a47d09e9][8ZM_cuz7Q6yAxr5VstoUaQ][nls1-be.it.pccwglobal.com][inet[/192.168.1.32:9300]]{max_local_storage_nodes=1},}, reason: zen-disco-receive(from master [[181841a1-d717-437c-bd36-6d4a8344abe6][27i9IWLSSzuIYss8GpQ-tA][nls1-hht5.it.pccwglobal.com][inet[/192.168.78.10:9300]]{max_local_storage_nodes=1}])
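The log above shows the failure pattern: a `NoRouteToHostException` on the transport layer, followed by "no known master node" retries until the master is rediscovered. A minimal sketch for spotting those signatures in a log file (the log path in the example is an assumption; adjust it for your install):

```shell
# Count lines matching the failure signatures seen in the log excerpt above:
# transport-layer exceptions, lost routes, and missing-master retries.
check_es_failures() {
    grep -cE 'NoRouteToHostException|no known master node|exception caught on transport layer' "$1"
}

# Example (path is an assumption for a default package install):
#   check_es_failures /var/log/elasticsearch/elasticsearch.log
```

A non-zero count suggests the node lost connectivity to the master, which would explain Logstash silently stalling until both services were restarted.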
Re: logstash stopped
That looks like an Elasticsearch log. Could you please run a tail on your Logstash log?
There are several reasons why Logstash might not start properly; one of the most common is an incorrect input/filter/output definition. Navigate to your NLS GUI, click "Administration -> Global Configuration -> Verify", and let us know whether the verification completes successfully.
Thanks!
Code: Select all
tail -n30 /var/log/logstash/logstash.log
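As a quick command-line sanity check to run alongside the GUI "Verify" step, here is a rough sketch that counts unbalanced braces in a config file, since a truncated input/filter/output block is a common cause of a bad config. It is deliberately naive (it ignores quoting, so treat a non-zero result as a hint rather than proof), and the example path is an assumption:

```shell
# Print the net brace balance of a config file: 0 means every "{" has a
# matching "}", a positive number means that many braces are left open.
brace_balance() {
    awk '{
        for (i = 1; i <= length($0); i++) {
            c = substr($0, i, 1)
            if (c == "{") open++
            else if (c == "}") open--
        }
    } END { print open + 0 }' "$1"
}

# Example (path is an assumption):
#   brace_balance /etc/logstash/conf.d/logstash.conf
```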