cdienger wrote:16Gigs may not be enough depending on how many indices ES has open at any given time and how much data is coming in. Out of the 16Gigs total, ES is given half of it to load all open indices, run queries, maintenance, etc. Run the following to get a list of open indices and their size:

Code: Select all
curl -XGET http://localhost:9200/_cat/shards | grep STARTED

ES should have at least enough memory allocated to it to load all indices listed.

I have attached the output to this reply. The Elasticsearch log shows:
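To sanity-check the heap against that shard listing, the per-shard store sizes can be summed. A minimal sketch, assuming a _cat/shards call with `bytes=b` so sizes come back as raw byte counts; the sample listing written to `/tmp/shards.txt` below is hypothetical:

```shell
# Hypothetical sample output of:
#   curl -s 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,store&bytes=b'
# Columns: index, shard, primary/replica, state, store size in bytes.
cat <<'EOF' > /tmp/shards.txt
logstash-2017.06.27 0 p STARTED 1073741824
logstash-2017.06.27 0 r STARTED 1073741824
logstash-2017.06.28 4 p STARTED 2147483648
EOF

# Sum the store size of all STARTED shards and report it in GB.
awk '$4 == "STARTED" { total += $5 } END { printf "%.1f GB\n", total / (1024^3) }' /tmp/shards.txt
```

If the total is anywhere near the heap given to ES (half of the 16GB here), that by itself points at memory pressure.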
Code: Select all
[2017-06-28 03:07:06,592][WARN ][index.shard ] [791cc6c8-f646-495e-9e58-1ec21a24b61c] [logstash-2017.06.28][4] Failed to perform scheduled engine optimize/merge
org.elasticsearch.index.engine.OptimizeFailedEngineException: [logstash-2017.06.28][4] force merge failed
...
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
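For what it's worth, `OutOfMemoryError: unable to create new native thread` usually means the JVM hit the OS per-user process/thread limit (or ran out of native memory for thread stacks) rather than exhausting the Java heap, so raising the heap alone may not fix it. A quick way to see the limits in effect (a sketch; the `pgrep` pattern is an assumption about how the ES process is named on your system):

```shell
# Per-user process/thread limit for the current shell's user.
ulimit -u

# Limits actually in effect for the running Elasticsearch JVM, if one is found.
# (The pgrep pattern is an assumption; adjust it to match your service.)
ES_PID=$(pgrep -f org.elasticsearch | head -n 1)
if [ -n "$ES_PID" ]; then
    grep -i 'max processes' "/proc/$ES_PID/limits"
fi
```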
Code: Select all
{:timestamp=>"2017-06-28T03:07:29.298000+0200", :message=>"Got error to send bulk of actions: Failed to deserialize exception response from stream", :level=>:error}
{:timestamp=>"2017-06-28T03:07:29.298000+0200", :message=>"Failed to flush outgoing items", :outgoing_count=>2, :exception=>org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream, [backtrace here], :level=>:warn}