
Logstash - "None of the configured nodes are available"

Posted: Wed Apr 26, 2017 4:15 pm
by Jklre
I have been seeing errors appearing in the logstash log file every night for the past several days.

I have a two-node cluster; one node is running Logstash. This seems to happen when CPU utilization is over 80%. Have you seen this error before? I went ahead and added two more CPUs to each node to see if that alleviates the issue.

Here are some sample log entries I am seeing. Thank you.

Code: Select all

{:timestamp=>"2017-04-21T07:48:01.459000-0700", :message=>"Got error to send bulk of actions: None of the configured nodes are available: []", :level=>:error}
{:timestamp=>"2017-04-21T07:48:01.514000-0700", :message=>"Got error to send bulk of actions: None of the configured nodes are available: []", :level=>:error}
{:timestamp=>"2017-04-21T07:48:01.561000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>6, :exception=>org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [], :backtrace=>["org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(org/elasticsearch/client/transport/TransportClientNodesService.java:279)", "org.elasticsearch.client.transport.TransportClientNodesService.execute(org/elasticsearch/client/transport/TransportClientNodesService.java:198)", "org.elasticsearch.client.transport.support.InternalTransportClient.execute(org/elasticsearch/client/transport/support/InternalTransportClient.java:106)", "org.elasticsearch.client.support.AbstractClient.bulk(org/elasticsearch/client/support/AbstractClient.java:163)", "org.elasticsearch.client.transport.TransportClient.bulk(org/elasticsearch/client/transport/TransportClient.java:356)", "org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(org/elasticsearch/action/bulk/BulkRequestBuilder.java:164)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:91)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:65)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", 
"LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1341)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216)", 
"Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:112)", "org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1511)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:110)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
{:timestamp=>"2017-04-21T07:48:01.515000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>8, :exception=>org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [], :backtrace=>["org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(org/elasticsearch/client/transport/TransportClientNodesService.java:279)", "org.elasticsearch.client.transport.TransportClientNodesService.execute(org/elasticsearch/client/transport/TransportClientNodesService.java:198)", "org.elasticsearch.client.transport.support.InternalTransportClient.execute(org/elasticsearch/client/transport/support/InternalTransportClient.java:106)", "org.elasticsearch.client.support.AbstractClient.bulk(org/elasticsearch/client/support/AbstractClient.java:163)", "org.elasticsearch.client.transport.TransportClient.bulk(org/elasticsearch/client/transport/TransportClient.java:356)", "org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(org/elasticsearch/action/bulk/BulkRequestBuilder.java:164)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:91)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:65)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", 
"LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1341)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216)", 
"Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193)", "Stud::Buffer.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:193)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:112)", "org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1511)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:110)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

Re: Logstash - "None of the configured nodes are available"

Posted: Wed Apr 26, 2017 4:24 pm
by mcapra
The Elasticsearch logs may have some specific text indicating why they are unavailable. Can you share the Elasticsearch logs? They're typically found here:

Code: Select all

/var/log/elasticsearch
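
As a starting point, something like the following can pull the recent garbage-collection and warning lines out of that directory (a sketch: the path is the default for this install, and the grep pattern is just an assumption about what the relevant lines contain):

Code: Select all

```shell
# Sketch: pull recent garbage-collection and warning lines out of the
# Elasticsearch logs. LOGDIR is the assumed default path; adjust for your install.
LOGDIR="${LOGDIR:-/var/log/elasticsearch}"
grep -hiE 'monitor\.jvm|\[gc\]|warn|error' "$LOGDIR"/*.log 2>/dev/null | tail -n 40
```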

Re: Logstash - "None of the configured nodes are available"

Posted: Wed Apr 26, 2017 4:36 pm
by Jklre
Here's a segment of the Elasticsearch logs from that time frame. It looks like the node goes into heavy garbage collection during that window.

elasticLogsnippet.txt
mcapra wrote:The Elasticsearch logs may have some specific text indicating why they are unavailable. Can you share the Elasticsearch logs? They're typically found here:

Code: Select all

/var/log/elasticsearch

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 9:20 am
by mcapra
Can you share the outputs of these commands:

Code: Select all

curl -XGET 'localhost:9200/_nodes/jvm?pretty'
curl -XGET 'localhost:9200/_cluster/health?level=indices&pretty'
curl -XGET 'localhost:9200/logstash-*/_stats'
They may be quite large; you might want to redirect each to a file and attach them to your post.
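
For example, the same three commands with each output redirected to a file (the file names are arbitrary placeholders; `-s` just silences curl's progress meter):

Code: Select all

```shell
# Capture each command's output to a file so it can be attached to a post.
# File names are arbitrary; -s suppresses the curl progress meter.
curl -s -XGET 'localhost:9200/_nodes/jvm?pretty' > 1.txt
curl -s -XGET 'localhost:9200/_cluster/health?level=indices&pretty' > 2.txt
curl -s -XGET 'localhost:9200/logstash-*/_stats' > 3.txt
```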

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 11:11 am
by Jklre
mcapra wrote:Can you share the outputs of these commands:

Code: Select all

curl -XGET 'localhost:9200/_nodes/jvm?pretty'
curl -XGET 'localhost:9200/_cluster/health?level=indices&pretty'
curl -XGET 'localhost:9200/logstash-*/_stats'
They may be quite large; you might want to redirect each to a file and attach them to your post.
Here you go.
1.txt
2.txt
3.txt

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 2:48 pm
by mcapra
Have you tried giving these systems more resources? Specifically, more RAM so garbage collection doesn't need to happen as aggressively as it is currently.

Alternatively, you might try adjusting the settings on the Backup & Maintenance page to close some of the older indices you no longer need to search through on a regular basis. That should free up a bit of memory.

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 3:12 pm
by Jklre
mcapra wrote:Have you tried giving these systems more resources? Specifically, more RAM so garbage collection doesn't need to happen as aggressively as it is currently.

Alternatively, you might try adjusting the settings on the Backup & Maintenance page to close some of the older indices you no longer need to search through on a regular basis. That should free up a bit of memory.

I added two additional CPUs, since CPU utilization was reaching over 80% during the times these errors were occurring.
CPU.png
Memory utilization is only around 70% on each node. Would adjusting the size of the JVM heap memory have any effect on this?
memory.png
Thank you.

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 3:18 pm
by mcapra
Jklre wrote: Memory utilization is only around 70% on each node. Would adjusting the size of the JVM heap memory have any effect on this?
It might. But if you're at 70% usage and haven't altered the sysconfig file we use for Elasticsearch, then Elasticsearch only has 50% of the memory available on your machine allocated to the heap. This is the directive in /etc/sysconfig/elasticsearch you'd need to modify:

Code: Select all

ES_HEAP_SIZE=$(expr $(free -m|awk '/^Mem:/{print $2}') / 2 )m
But there is a specific reason we allocate only 50% of the available memory to Elasticsearch: it leaves the other 50% available for Logstash and for the general Elasticsearch maintenance tasks Curator runs in the background.
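
For reference, the same `expr` idiom in that directive can compute any other percentage. A hypothetical variant that allocates 60% of RAM instead of 50% (the fallback value is only there so the snippet runs where `free(1)` is unavailable):

Code: Select all

```shell
# Hypothetical variant of the stock directive: allocate 60% of total RAM
# (in MB) to the Elasticsearch heap instead of 50%.
TOTAL_MB=$(free -m 2>/dev/null | awk '/^Mem:/{print $2}')
TOTAL_MB=${TOTAL_MB:-6144}   # fall back to a 6 GB example if free(1) is absent
ES_HEAP_SIZE=$(expr "$TOTAL_MB" \* 60 / 100)m
echo "ES_HEAP_SIZE=$ES_HEAP_SIZE"
```

On a 6144 MB node, for instance, this works out to a 3686 MB heap.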

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 4:09 pm
by Jklre
Out of 6 GB of memory, peak utilization for everything running on this system is 72%, which leaves about 1.6 GB that doesn't appear to be used. Watching our Logstash process, it seems to take up only about 300 MB of memory. What would be the effect of changing the memory cap to 60 or 70% before adding additional physical memory?

We have Logstash running on both nodes but are only sending messages to one. Would we even need the Logstash service running on the second node? If not, we could shut down the service on node 2, increase that node to 70% memory utilization, and run node 1 at 60%. That should increase the amount of physical memory utilized while still leaving wiggle room for other processes that need to run, unless there are other factors I'm unaware of. Let me know what you think. Thank you.
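
If node 2 really doesn't need to process messages, disabling the service there might look like this on a SysV-init system (a sketch: the service name is assumed, so verify it against your install before disabling anything):

Code: Select all

```shell
# Hypothetical: stop the Logstash service on node 2 and keep it from
# starting at boot. Service name assumed; verify for your install first.
service logstash stop
chkconfig logstash off
```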

Re: Logstash - "None of the configured nodes are available"

Posted: Thu Apr 27, 2017 4:31 pm
by mcapra
Jklre wrote:What would be the effect of changing the memory cap to 60 or 70% before adding additional physical memory?
That might help alleviate some of the aggressive garbage collection that's happening. It's hard to say exactly why the garbage collection is happening so aggressively at that specific time without enabling some seriously verbose Elasticsearch logging, though.
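
If it does come to that, one conventional way to capture GC detail is with the standard HotSpot GC-logging flags. A sketch, assuming the flags are appended to `ES_JAVA_OPTS` in `/etc/sysconfig/elasticsearch` (the log path is arbitrary, and whether that variable is honored depends on your init scripts):

Code: Select all

```shell
# Hypothetical: standard HotSpot GC-logging flags appended to the
# Elasticsearch JVM options. Log path is arbitrary; restart ES afterward.
ES_JAVA_OPTS="$ES_JAVA_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/elasticsearch/gc.log"
```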