Logstash error on Amazon EC2 instance

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
nick.wiechers
Posts: 5
Joined: Thu Sep 29, 2016 9:41 am

Logstash error on Amazon EC2 instance

Post by nick.wiechers »

I've been trying to evaluate Nagios Log Server using the pre-built Amazon instance. Although it all appears to install correctly, Logstash crashes after about a minute with the following error. If I restart it, it crashes again about a minute later. It looks to be a problem with Elasticsearch and OpenSSL. Any ideas?

Code:

Starting Logstash Daemon: [ OK ]
[ec2-user@ip-10-35-158-100 elasticsearch]$ LoadError: load error: openssl/pkcs12 -- java.lang.InternalError: null
require at org/jruby/RubyKernel.java:1085
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
(root) at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/jopenssl/load.rb:25
require at org/jruby/RubyKernel.java:1085
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
(root) at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/openssl.rb:1
require at org/jruby/RubyKernel.java:1085
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
require at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:53
require at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/polyglot-0.3.4/lib/polyglot.rb:65
(root) at file:/usr/local/nagioslogserver/logstash/vendor/jar/jruby-complete-1.7.11.jar!/META-INF/jruby.home/lib/ruby/shared/openssl.rb:1
(root) at /usr/local/nagioslogserver/logstash/lib/logstash/inputs/tcp.rb:1
each at org/jruby/RubyArray.java:1613
register at /usr/local/nagioslogserver/logstash/lib/logstash/inputs/tcp.rb:65
start_inputs at /usr/local/nagioslogserver/logstash/lib/logstash/pipeline.rb:135
start_inputs at /usr/local/nagioslogserver/logstash/lib/logstash/pipeline.rb:134
run at /usr/local/nagioslogserver/logstash/lib/logstash/runner.rb:168
call at org/jruby/RubyProc.java:271
run at /usr/local/nagioslogserver/logstash/lib/logstash/pipeline.rb:72
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Logstash error on Amazon EC2 instance

Post by rkennedy »

I'll need a few pieces of information from you:
1. The output of ls -al /usr/local/nagioslogserver/logstash
2. A copy of your /etc/init.d/logstash script
3. How much RAM did you deploy to this VM?
Former Nagios Employee
nick.wiechers
Posts: 5
Joined: Thu Sep 29, 2016 9:41 am

Re: Logstash error on Amazon EC2 instance

Post by nick.wiechers »

Hi

The instance is an m1.small, i.e. 1.7 GB of RAM.

See below for other requested information.

Nick

Code:

ls -al /usr/local/nagioslogserver/logstash
drwxr-xr-x.  9 nagios nagios 4096 Sep 29 09:49 .
drwxrwxr-x. 10 nagios nagios 4096 Sep 29 09:52 ..
drwxr-xr-x.  2 nagios nagios 4096 Sep 29 09:49 bin
drwxrwxr-x.  3 nagios nagios 4096 Sep 29 09:49 etc
drwxr-xr-x.  3 nagios nagios 4096 Sep 29 09:49 lib
-rw-r--r--.  1 nagios nagios  591 Sep 29 09:49 LICENSE
drwxr-xr-x.  2 nagios nagios 4096 Sep 29 09:49 locales
drwxr-xr-x.  2 nagios nagios 4096 Sep 29 09:49 patterns
-rw-r--r--.  1 nagios nagios 3450 Sep 29 09:49 README.md
drwxr-xr-x. 10 nagios nagios 4096 Sep 29 09:49 spec
drwxr-xr-x.  8 nagios nagios 4096 Sep 29 09:49 vendor

Code:

/etc/init.d/logstash

Code:

[ec2-user@ip-10-77-12-71 init.d]$ more logstash
#! /bin/sh
#
#       /etc/rc.d/init.d/logstash
#
#       Starts Logstash as a daemon
#
# chkconfig: 2345 90 10
# description: Starts Logstash as a daemon.

### BEGIN INIT INFO
# Provides: logstash
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: S 0 1 6
# Short-Description: Logstash
# Description: Starts Logstash as a daemon.
### END INIT INFO

. /etc/rc.d/init.d/functions
NAME=logstash
DESC="Logstash Daemon"
DEFAULT=/etc/sysconfig/$NAME

if [ `id -u` -ne 0 ]; then
   echo "You need root privileges to run this script"
   exit 1
fi

# The following variables can be overwritten in $DEFAULT
PATH=/bin:/usr/bin:/sbin:/usr/sbin

# See contents of file named in $DEFAULT for comments
LS_USER=logstash
LS_GROUP=logstash
LS_HOME=/usr/local/nagioslogserver
LS_HEAP_SIZE="500m"
LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME}/tmp"
LS_LOG_FILE=/var/log/logstash/$NAME.log
LS_CONF_DIR=/etc/logstash/conf.d
LS_OPEN_FILES=16384
LS_NICE=19
LS_OPTS=""
LS_PIDFILE=/var/run/$NAME/$NAME.pid

# End of variables that can be overwritten in $DEFAULT

if [ -f "$DEFAULT" ]; then
  . "$DEFAULT"
fi

# Define other required variables
PID_FILE=${LS_PIDFILE}

DAEMON="$LS_HOME/bin/logstash"
DAEMON_OPTS="agent -f ${LS_CONF_DIR} -l ${LS_LOG_FILE} ${LS_OPTS}"

#
# Function that starts the daemon/service
#
do_start()
{

  if [ -z "$DAEMON" ]; then
    echo "not found - $DAEMON"
    exit 1
  fi

  if pidofproc -p "$PID_FILE" >/dev/null; then
    failure
    exit 99
  fi

  # Prepare environment
  HOME="${HOME:-$LS_HOME}"
  JAVA_OPTS="${LS_JAVA_OPTS}"
  ulimit -n ${LS_OPEN_FILES}
  cd "${LS_HOME}"
  export PATH HOME JAVA_OPTS LS_HEAP_SIZE LS_JAVA_OPTS LS_USE_GC_LOGGING
  test -n "${JAVACMD}" && export JAVACMD

  nice -n ${LS_NICE} runuser -s /bin/sh -c "exec $DAEMON $DAEMON_OPTS" ${LS_USER} > /dev/null 1>&1 < /dev/null &

  RETVAL=$?
  local PID=$!
  # runuser forks rather than execing our process.
  usleep 500000
  JAVA_PID=$(ps axo ppid,pid | awk -v "ppid=$PID" '$1==ppid {print $2}')
  PID=${JAVA_PID:-$PID}
  echo $PID > $PID_FILE
  [ $PID = $JAVA_PID ] && success
}

#
# Function that stops the daemon/service
#
do_stop()
{
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f ${PID_FILE}
}

case "$1" in
  start)
    echo -n "Starting $DESC: "
    do_start
    touch /var/run/logstash/$NAME
    ;;
  stop)
    echo -n "Stopping $DESC: "
    do_stop
    rm /var/run/logstash/$NAME
    ;;
  restart|reload)
    echo -n "Restarting $DESC: "
    do_stop
    do_start
    ;;
  status)
    echo -n "$DESC"
    status -p $PID_FILE
    exit $?
    ;;
  *)
    echo "Usage: $SCRIPTNAME {start|stop|status|restart}" >&2
    exit 3
    ;;
esac

echo
exit 0
[ec2-user@ip-10-77-12-71 init.d]$
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Logstash error on Amazon EC2 instance

Post by rkennedy »

This may not be enough RAM; NLS is fairly RAM-intensive, and I generally recommend people start off with at least 4-8 GB.
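As a quick sanity check on sizing, total memory and swap can be read straight from /proc/meminfo on any Linux box (a generic check, nothing NLS-specific; values are in kB):

```shell
# Print total RAM and swap in kB; an m1.small reports roughly 1.7 GB total,
# well below the 4-8 GB suggested above for two JVMs.
awk '/^MemTotal|^SwapTotal/ {print $1, $2, $3}' /proc/meminfo
```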

A couple more things I'll need from you:
- Could you post your logstash log for me to look at? I'm hoping it will give a bit more of an indication: /var/log/logstash/logstash.log
- Could you also post your install.log? I'd like to see whether there were any issues when it ran through the install. It seems you may be missing a dependency at this point.
Former Nagios Employee
nick.wiechers
Posts: 5
Joined: Thu Sep 29, 2016 9:41 am

Re: Logstash error on Amazon EC2 instance

Post by nick.wiechers »

Hi

The Logstash log is empty and there is no install.log that I can find.
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Logstash error on Amazon EC2 instance

Post by rkennedy »

Hmm, is there an elasticsearch log you could post? It would be located at /var/log/elasticsearch/elasticsearch/*.log (where * is your cluster ID)

It might be faster to start from scratch and deploy with at least 4 GB of memory. The failure could simply come down to that: with two Java applications running, RAM is a necessity.

Just to get some clarification (I believe you did #1) - did you follow our guide @ https://assets.nagios.com/downloads/nag ... -Cloud.pdf or did you manually install it with a ./fullinstall?
Former Nagios Employee
nick.wiechers
Posts: 5
Joined: Thu Sep 29, 2016 9:41 am

Re: Logstash error on Amazon EC2 instance

Post by nick.wiechers »

Hi

I've rebuilt the instance as an m2.large with 17 GB of memory, but I still have the same problem. The instance was built using the procedure at https://assets.nagios.com/downloads/nag ... 1475057214

The Elasticsearch log is below. There are some JVM errors.

Regards

Nick

Code:

[2016-10-06 05:54:09,742][WARN ][common.jna               ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
[2016-10-06 05:54:09,879][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] version[1.3.2], pid[1700], build[dee175d/2014-08-13T14:29:30Z]
[2016-10-06 05:54:09,879][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initializing ...
[2016-10-06 05:54:09,899][INFO ][plugins                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] loaded [knapsack-1.3.2.0-d5501ef], sites []
[2016-10-06 05:54:13,343][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initialized
[2016-10-06 05:54:13,350][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] starting ...
[2016-10-06 05:54:13,602][INFO ][transport                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.104.218.78:9300]}
[2016-10-06 05:54:13,608][INFO ][discovery                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] 89101335-82c5-4b7e-b9ce-859e1cb637e4/EzBIqop8Ts6VdiruYum8WA
[2016-10-06 05:54:16,630][INFO ][cluster.service          ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] new_master [e896e96f-6ca8-4dac-9544-8321c0bf7301][EzBIqop8Ts6VdiruYum8WA][ip-10-104-218-78][inet[/10.104.218.78:9300]]{max_local_storage_nodes=1}, reason: zen-disco-join (elected_as_master)
[2016-10-06 05:54:16,672][INFO ][http                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2016-10-06 05:54:16,673][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] started
[2016-10-06 05:54:16,690][INFO ][gateway                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] recovered [0] indices into cluster_state
[2016-10-06 05:54:18,324][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopping ...
[2016-10-06 05:54:18,348][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopped
[2016-10-06 05:54:18,348][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closing ...
[2016-10-06 05:54:18,355][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closed
[2016-10-06 05:55:31,633][WARN ][common.jna               ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
[2016-10-06 05:55:31,811][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] version[1.3.2], pid[888], build[dee175d/2014-08-13T14:29:30Z]
[2016-10-06 05:55:31,811][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initializing ...
[2016-10-06 05:55:31,853][INFO ][plugins                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] loaded [knapsack-1.3.2.0-d5501ef], sites []
[2016-10-06 05:55:35,846][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initialized
[2016-10-06 05:55:35,846][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] starting ...
[2016-10-06 05:55:36,076][INFO ][transport                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.104.218.78:9300]}
[2016-10-06 05:55:36,083][INFO ][discovery                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] 89101335-82c5-4b7e-b9ce-859e1cb637e4/4K6pRe6pQhmri2LAjt7xBw
[2016-10-06 05:55:39,116][INFO ][cluster.service          ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] new_master [e896e96f-6ca8-4dac-9544-8321c0bf7301][4K6pRe6pQhmri2LAjt7xBw][ip-10-104-218-78.eu-west-1.compute.internal][inet[/10.104.218.78:9300]]{max_local_storage_nodes=1}, reason: zen-disco-join (elected_as_master)
[2016-10-06 05:55:39,146][INFO ][http                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2016-10-06 05:55:39,147][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] started
[2016-10-06 05:55:39,160][INFO ][gateway                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] recovered [0] indices into cluster_state
[2016-10-06 05:56:44,005][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver] creating index, cause [auto(index api)], shards [1]/[1], mappings [cf_option, node, reactor_server, snapshot, alert, _default_, query, commands, snmp_reactor, nrdp_server, user]
[2016-10-06 05:56:44,635][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver_log] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2016-10-06 05:56:44,835][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver_log] update_mapping [SECURITY] (dynamic)
[2016-10-06 05:56:44,966][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver] update_mapping [node] (dynamic)
[2016-10-06 05:56:56,833][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [kibana-int] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2016-10-06 05:56:57,064][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [kibana-int] update_mapping [dashboard] (dynamic)
[2016-10-06 05:57:01,325][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver_log] update_mapping [POLLER] (dynamic)
[2016-10-06 05:57:01,679][INFO ][cluster.metadata         ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] [nagioslogserver_log] update_mapping [JOBS] (dynamic)
[2016-10-06 06:02:18,648][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopping ...
[2016-10-06 06:02:18,724][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopped
[2016-10-06 06:02:18,724][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closing ...
[2016-10-06 06:02:18,739][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closed
[2016-10-06 06:02:31,435][WARN ][common.jna               ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
[2016-10-06 06:02:31,556][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] version[1.3.2], pid[1714], build[dee175d/2014-08-13T14:29:30Z]
[2016-10-06 06:02:31,556][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initializing ...
[2016-10-06 06:02:31,576][INFO ][plugins                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] loaded [knapsack-1.3.2.0-d5501ef], sites []
[2016-10-06 06:02:35,061][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initialized
[2016-10-06 06:02:35,061][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] starting ...
[2016-10-06 06:02:35,265][INFO ][transport                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.104.218.78:9300]}
[2016-10-06 06:02:35,270][INFO ][discovery                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] 89101335-82c5-4b7e-b9ce-859e1cb637e4/_dpLJESNSfGSmfxUexxvTQ
[2016-10-06 06:02:38,325][INFO ][cluster.service          ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] new_master [e896e96f-6ca8-4dac-9544-8321c0bf7301][_dpLJESNSfGSmfxUexxvTQ][ip-10-104-218-78.eu-west-1.compute.internal][inet[/10.104.218.78:9300]]{max_local_storage_nodes=1}, reason: zen-disco-join (elected_as_master)
[2016-10-06 06:02:38,358][INFO ][http                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2016-10-06 06:02:38,359][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] started
[2016-10-06 06:02:39,157][INFO ][gateway                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] recovered [3] indices into cluster_state
[2016-10-06 06:09:52,451][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopping ...
[2016-10-06 06:09:52,538][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] stopped
[2016-10-06 06:09:52,539][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closing ...
[2016-10-06 06:09:52,546][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] closed
[2016-10-06 06:10:53,274][WARN ][common.jna               ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
[2016-10-06 06:10:53,436][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] version[1.3.2], pid[877], build[dee175d/2014-08-13T14:29:30Z]
[2016-10-06 06:10:53,437][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initializing ...
[2016-10-06 06:10:53,474][INFO ][plugins                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] loaded [knapsack-1.3.2.0-d5501ef], sites []
[2016-10-06 06:10:57,446][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] initialized
[2016-10-06 06:10:57,446][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] starting ...
[2016-10-06 06:10:57,682][INFO ][transport                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.104.218.78:9300]}
[2016-10-06 06:10:57,688][INFO ][discovery                ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] 89101335-82c5-4b7e-b9ce-859e1cb637e4/wcwygrONSRmHX2x-XYrq2A
[2016-10-06 06:11:00,747][INFO ][cluster.service          ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] new_master [e896e96f-6ca8-4dac-9544-8321c0bf7301][wcwygrONSRmHX2x-XYrq2A][ip-10-104-218-78.eu-west-1.compute.internal][inet[/10.104.218.78:9300]]{max_local_storage_nodes=1}, reason: zen-disco-join (elected_as_master)
[2016-10-06 06:11:00,797][INFO ][http                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] bound_address {inet[/127.0.0.1:9200]}, publish_address {inet[localhost/127.0.0.1:9200]}
[2016-10-06 06:11:00,798][INFO ][node                     ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] started
[2016-10-06 06:11:01,332][DEBUG][action.search.type       ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] All shards failed for phase: [query_fetch]
[2016-10-06 06:11:01,344][DEBUG][action.search.type       ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] All shards failed for phase: [query_fetch]
[2016-10-06 06:11:01,422][DEBUG][action.search.type       ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] All shards failed for phase: [query_fetch]
[2016-10-06 06:11:01,425][DEBUG][action.search.type       ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] All shards failed for phase: [query_fetch]
[2016-10-06 06:11:01,991][INFO ][gateway                  ] [e896e96f-6ca8-4dac-9544-8321c0bf7301] recovered [3] indices into cluster_state
[ec2-user@ip-10-104-218-78 elasticsearch]$
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Logstash error on Amazon EC2 instance

Post by rkennedy »

Taking a look at this error -

Code:

[2016-10-06 05:54:09,742][WARN ][common.jna               ] Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out. Increase RLIMIT_MEMLOCK (ulimit).
I was able to find a similar issue here - https://github.com/elastic/elasticsearch/issues/9357 - which recommends running ulimit -l unlimited. Could you attempt this, and then try to start logstash / elasticsearch once again?
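A minimal sketch of what that looks like in practice (the service restart commands are assumptions based on the init scripts in this thread; run as root):

```shell
# Show the current locked-memory limit for this shell.
# A value like "64" (kB) is a common default and is far too small
# for Elasticsearch's memory locking; "unlimited" is the goal.
ulimit -l

# Then, as root, raise it for the session and restart the services:
# ulimit -l unlimited
# service elasticsearch restart
# service logstash restart
```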
Former Nagios Employee
nick.wiechers
Posts: 5
Joined: Thu Sep 29, 2016 9:41 am

Re: Logstash error on Amazon EC2 instance

Post by nick.wiechers »

Hi

Thanks for your reply. Running ulimit -l unlimited didn't help. I did a bit of reading around this error, and it seems rather more is involved in changing the memory limits.
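For reference, a shell-level ulimit only affects that session; to make a larger memlock limit stick on a CentOS-style system, the usual places are /etc/security/limits.conf (for login sessions) or the service's own init/sysconfig script. The account names below are assumptions; adjust them to whatever users actually run Elasticsearch and Logstash:

```
# /etc/security/limits.conf (user names assumed)
nagios    soft    memlock    unlimited
nagios    hard    memlock    unlimited

# For an init-started daemon, a ulimit call in the init script itself also
# works, e.g. next to the existing "ulimit -n ${LS_OPEN_FILES}" line:
# ulimit -l unlimited
```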

I used the EC2 image to save some time and avoid this kind of issue as I assumed it would just switch on and work. I presume you can start the image successfully?
mcapra
Posts: 3739
Joined: Thu May 05, 2016 3:54 pm

Re: Logstash error on Amazon EC2 instance

Post by mcapra »

The Java exceptions being thrown lead me to believe something within the JVM is seriously wrong.

Can you share the output of:

Code:

java -version
We don't currently have a way to test this, unfortunately, but you might try upgrading to Java 1.8 on this machine if possible.
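A sketch of checking the active JRE and, if needed, upgrading it; the yum package name is an assumption for a CentOS-based image, so verify it against your repositories:

```shell
# Report the active JRE, if any; the jruby-complete-1.7.11 stack trace
# earlier in the thread is consistent with an old or broken JRE.
if command -v java >/dev/null 2>&1; then
    java -version 2>&1
else
    echo "java not on PATH"
fi

# On a CentOS-based image, OpenJDK 1.8 is typically packaged as
# java-1.8.0-openjdk (assumed name):
# sudo yum install -y java-1.8.0-openjdk
# sudo alternatives --config java
```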
Former Nagios employee
https://www.mcapra.com/