Insufficient memory for Elasticsearch process on v1.4.0

Posted: Fri Jan 15, 2016 10:06 am
by milan
Hello everyone

After the upgrade to v1.4.0, the Elasticsearch service is crashing with the following error message:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000784660000, 1898577920, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1898577920 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-33003/hs_error.log
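
For reference, errno=12 (ENOMEM) means the OS refused the JVM's request to commit ~1.9 GB of reserved heap, so the Java heap was sized larger than the memory actually free on the node. On Elasticsearch 1.x the heap is normally capped via the `ES_HEAP_SIZE` environment variable; a sketch of checking and lowering it (file paths vary by distro, e.g. `/etc/sysconfig/elasticsearch` on RHEL/CentOS, `/etc/default/elasticsearch` on Debian/Ubuntu):

```shell
# See what heap size, if any, is currently configured
sudo grep -n 'ES_HEAP_SIZE' /etc/sysconfig/elasticsearch

# Cap the heap at roughly half of physical RAM, e.g. 2g on a 4 GB node,
# so the JVM never tries to commit more memory than the OS can give it
echo 'ES_HEAP_SIZE=2g' | sudo tee -a /etc/sysconfig/elasticsearch

# Restart Elasticsearch so the new heap setting takes effect
sudo service elasticsearch restart
```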


Before the upgrade, the Elasticsearch service worked well with ~4 GB RAM per node.

Can somebody help with this problem?

Many thx and best regards
Milan

Re: Insufficient memory for Elasticsearch process on v1.4.0

Posted: Fri Jan 15, 2016 10:34 am
by hsmith
How much log data are you receiving?

Re: Insufficient memory for Elasticsearch process on v1.4.0

Posted: Tue Jan 26, 2016 11:12 am
by milan
Hello
Sorry for the late answer.

We are receiving approx. 1 GB of log data per day, which should be OK for a 4-node cluster.

Could sudden traffic spikes be the reason for these crashes?

We have increased the amount of memory on all nodes from 4 GB to 8 GB, and since then we haven't experienced any problems.

Best Regards
Milan

Re: Insufficient memory for Elasticsearch process on v1.4.0

Posted: Tue Jan 26, 2016 11:18 am
by hsmith
I'm glad to hear that this is resolved. NLS takes a lot of RAM. Are you a current customer? If so, we should get you access to the customer forums.

Re: Insufficient memory for Elasticsearch process on v1.4.0

Posted: Tue Jan 26, 2016 11:21 am
by milan
Hello

Yes, we are a current customer.

Should we make the request through sales, or directly through an admin?

Many thanks and best regards
Milan

Re: Insufficient memory for Elasticsearch process on v1.4.0

Posted: Tue Jan 26, 2016 11:24 am
by hsmith
You should send a request to [email protected]. I figured with that much data, you're a current customer. Glad we could help you with this as well. :) I'm going to close this thread as the issue is resolved.