Node 1 looks like mostly search queries from the Dashboard. Node 2 has no activity for today; the most recent chunk is here:

cat /var/log/elasticsearch/*.log
Code:
[2015-07-14 13:54:48,797][INFO ][node ] [33ff6054-696c-48f0-8155-1917aff9d8d1] started
[2015-07-14 13:54:48,823][INFO ][gateway ] [33ff6054-696c-48f0-8155-1917aff9d8d1] recovered [0] indices into cluster_state
[2015-07-14 13:55:11,295][INFO ][node ] [33ff6054-696c-48f0-8155-1917aff9d8d1] stopping ...
[2015-07-14 13:55:11,324][INFO ][node ] [33ff6054-696c-48f0-8155-1917aff9d8d1] stopped
[2015-07-14 13:55:11,324][INFO ][node ] [33ff6054-696c-48f0-8155-1917aff9d8d1] closing ...
[2015-07-14 13:55:11,331][INFO ][node ] [33ff6054-696c-48f0-8155-1917aff9d8d1] closed
[2015-07-14 10:49:46,009][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] version[1.3.2], pid[5447], build[dee175d/2014-08-13T14:29:30Z]
[2015-07-14 10:49:46,014][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] initializing ...
[2015-07-14 10:49:46,033][INFO ][plugins ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] loaded [knapsack-1.3.2.0-d5501ef], sites []
[2015-07-14 10:49:52,472][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] initialized
[2015-07-14 10:49:52,472][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] starting ...
[2015-07-14 10:49:53,237][INFO ][transport ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.1.249:9300]}
[2015-07-14 10:49:53,341][INFO ][discovery ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] e8945dd0-ae36-4699-a0fc-43811a9c38e1/2p8j0OlRQ8uXjuOsn-FAzA
[2015-07-14 10:49:56,440][INFO ][cluster.service ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] new_master [4bee07f8-6f40-451a-a5bb-666e9a22b387][2p8j0OlRQ8uXjuOsn-FAzA][schpnag2][inet[/192.168.1.249:9300]]{max_local_storage_nodes=1}, reason: zen-disco-join (elected_as_master)
[2015-07-14 10:49:56,468][INFO ][http ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2015-07-14 10:49:56,468][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] started
[2015-07-14 10:49:56,494][INFO ][gateway ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] recovered [0] indices into cluster_state
[2015-07-14 10:50:14,091][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] stopping ...
[2015-07-14 10:50:14,117][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] stopped
[2015-07-14 10:50:14,117][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] closing ...
[2015-07-14 10:50:14,126][INFO ][node ] [4bee07f8-6f40-451a-a5bb-666e9a22b387] closed
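One thing that stands out in the chunk above: the node elects itself master with no peers ("elected_as_master"), recovers [0] indices, and HTTP is bound to 127.0.0.1 only. To double-check whether the two nodes ever actually formed one cluster, I figure something like this on each box would show it (just a sketch; localhost:9200 matches the bound_address in the log above):

Code:
# List the nodes this instance can see; two rows means the cluster formed
curl -s 'localhost:9200/_cat/nodes?v'

# Cluster health; number_of_nodes should be 2 if both joined
curl -s 'localhost:9200/_cluster/health?pretty'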
Node 1 is not particularly interesting; I'm seeing mostly my own messages. On node 2 we have a repeating error...

cat /var/log/logstash/logstash.log
Code:
{:timestamp=>"2015-07-15T10:32:03.694000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>org.elasticsearch.client.transport.NoNodeAvailableException: No node available, :backtrace=>["org.elasticsearch.client.transport.TransportClientNodesService.execute(org/elasticsearch/client/transport/TransportClientNodesService.java:219)", "org.elasticsearch.client.transport.support.InternalTransportClient.execute(org/elasticsearch/client/transport/support/InternalTransportClient.java:106)", "org.elasticsearch.client.support.AbstractClient.bulk(org/elasticsearch/client/support/AbstractClient.java:147)", "org.elasticsearch.client.transport.TransportClient.bulk(org/elasticsearch/client/transport/TransportClient.java:360)", "org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(org/elasticsearch/action/bulk/BulkRequestBuilder.java:165)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:85)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:59)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)", "RUBY.bulk(/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:207)", "RUBY.flush(/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:315)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1339)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193)", "RUBY.buffer_receive(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159)", "RUBY.receive(/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:311)", "RUBY.handle(/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:86)", "RUBY.worker_setup(/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:78)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
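If I understand it right, that NoNodeAvailableException just means the logstash transport client couldn't reach Elasticsearch on the transport port (9300), so the bulk flush had nowhere to go. A quick sanity check I could run (a sketch; 192.168.1.249:9300 is the publish_address from the elasticsearch log above, and nc may not be installed everywhere):

Code:
# Is elasticsearch listening on the transport port locally?
netstat -tlnp | grep 9300

# Can this node reach the other node's transport port?
nc -zv 192.168.1.249 9300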
Nothing, at least for today. Yesterday on node 2 there was the NoNodeAvailableException, which I pointed out in the OP. I'm fairly certain that's one of the things we already fixed in the config.

tail -n20 /var/log/httpd/error_log
Nothing exciting on either node; just standard User-Agent strings from my workstation, which makes sense.

tail -n20 /var/log/httpd/access_log
Nothing exciting. This one only exists on node 1; it does not exist on node 2.

tail -f /usr/local/nagioslogserver/var/jobs.log
tail -f /usr/local/nagioslogserver/var/poller.log