
logstash dies with IOError on messages

Posted: Tue Jun 11, 2019 8:53 am
by nagioscarnovale
I have a two-instance setup.

On each node the Nagios Log Server logstash process dies, and I only see the following log in /var/log/messages:

logstash: IOError: Connessione interrotta dal corrispondente (Connection reset by peer)
logstash: each at org/jruby/RubyIO.java:3565
logstash: tcp_receiver at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:173
logstash: tcp_listener at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:159

Thanks in advance
Nick

Re: logstash dies with IOError on messages

Posted: Tue Jun 11, 2019 4:06 pm
by cdienger
/var/log/logstash/logstash.log may have more details. How much memory is on the machine? It may be necessary to increase the memory allocated to the logstash process; for steps, see: https://support.nagios.com/kb/article/n ... g-576.html.
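As a rough sketch of what that heap tuning looks like (the variable name follows the init script output shown later in this thread; the 2g value and the demo file path are example assumptions, not recommendations; on a real node the target file would be /etc/init.d/logstash):

```shell
# Demo on a throwaway file that simulates the relevant lines of the init script
printf 'LS_HEAP_SIZE="1000m"\nLS_OPEN_FILES=32768\n' > /tmp/logstash.env

# Bump the heap, e.g. to 2g (edit in place; back up the real file first)
sed -i 's/^LS_HEAP_SIZE=.*/LS_HEAP_SIZE="2g"/' /tmp/logstash.env

# Confirm the change took effect
grep LS_HEAP_SIZE /tmp/logstash.env
```

After editing the real init script, a `systemctl restart logstash` would be needed for the new heap size to apply.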

Re: logstash dies with IOError on messages

Posted: Wed Jun 12, 2019 3:49 am
by nagioscarnovale
Thanks for the reply, cdienger.

Both nodes have 32 GB of RAM. The LS_HEAP_SIZE and LS_OPEN_FILES settings are shown below:

node01
more /etc/init.d/logstash | grep LS_HEAP_SIZE
LS_HEAP_SIZE="1000m"
more /etc/init.d/logstash | grep LS_OPEN_FILES
LS_OPEN_FILES=32768

free -m
              total        used        free      shared  buff/cache   available
Mem:          32013       17661        4007          47       10344       13757
Swap:          2047           0        2047


node02
more /etc/init.d/logstash | grep LS_HEAP_SIZE
LS_HEAP_SIZE="1000m"
more /etc/init.d/logstash | grep LS_OPEN_FILES
LS_OPEN_FILES=32768

free -m
              total        used        free      shared  buff/cache   available
Mem:          32013       17635        9046          47        5330       13783
Swap:          2047           0        2047

##############################
These are two crash events in /var/log/messages node1

Jun 11 15:51:15 nagioslogserver-01 logstash: IOError: Connessione interrotta dal corrispondente
Jun 11 15:51:15 nagioslogserver-01 logstash: each at org/jruby/RubyIO.java:3565
Jun 11 15:51:15 nagioslogserver-01 logstash: tcp_receiver at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:173
Jun 11 15:51:15 nagioslogserver-01 logstash: tcp_listener at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:159
..........
Jun 11 16:39:36 nagioslogserver-01 logstash: IOError: Connessione interrotta dal corrispondente
Jun 11 16:39:36 nagioslogserver-01 logstash: each at org/jruby/RubyIO.java:3565
Jun 11 16:39:36 nagioslogserver-01 logstash: tcp_receiver at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:173
Jun 11 16:39:36 nagioslogserver-01 logstash: tcp_listener at /usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-2.0.5/lib/logstash/inputs/syslog.rb:159

....
In the same time interval, /var/log/logstash contains the following log. The "Pipeline main started" entries correspond to my restart ("systemctl restart logstash").


{:timestamp=>"2019-06-11T15:28:44.524000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connessione rifiutata (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T15:28:44.724000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connessione rifiutata (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T15:28:45.621000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connessione rifiutata (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T15:58:35.373000+0200", :message=>"Pipeline main started"}
{:timestamp=>"2019-06-11T16:53:45.655000+0200", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T16:53:45.700000+0200", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T16:53:45.713000+0200", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T16:53:45.723000+0200", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T16:53:45.820000+0200", :message=>"Pipeline main started"}
{:timestamp=>"2019-06-11T16:53:46.523000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
{:timestamp=>"2019-06-11T16:53:48.534000+0200", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200\

......
Is it necessary to open traffic between the server and the clients on the following ports?

tcp6: 3515, 2056, 5544, 2057
udp6: 5544


or only

tcp6: 5544
udp6: 5544
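In case it helps anyone later: if firewalld is in use on these nodes (an assumption; adjust for whatever firewall is actually deployed), opening the full set of ports from the first list might look like the sketch below, with the port numbers taken straight from the question above.

```shell
# Sketch assuming firewalld on CentOS/RHEL; repeatable --add-port options
firewall-cmd --permanent --add-port=3515/tcp --add-port=2056/tcp \
             --add-port=2057/tcp --add-port=5544/tcp --add-port=5544/udp

# Make the permanent rules active
firewall-cmd --reload
```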

Thank you so much.

Re: logstash dies with IOError on messages

Posted: Thu Jun 13, 2019 3:22 pm
by scottwilkerson
The error you are getting makes it seem that logstash cannot communicate with elasticsearch; it needs port 9300 open to do so.
nagioscarnovale wrote:Is it necessary to open the traffic between server and client on the following ports ?

tcp6: 3515, 2056, 5544, 2057
udp6: 5544
Yes, that is also correct.
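A quick way to check both the elasticsearch HTTP port (9200, which the bulk-request errors above point at) and the transport port (9300) from the Log Server itself might be the following sketch, assuming curl and bash are available:

```shell
# Is elasticsearch answering on its HTTP port?
curl -s http://localhost:9200

# Is the transport port reachable? (bash /dev/tcp probe, no nc needed)
timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/9300' 2>/dev/null \
  && echo "9300 open" || echo "9300 closed"
```

If the first command returns nothing at all, elasticsearch is likely down on that node rather than blocked by the firewall.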

Re: logstash dies with IOError on messages

Posted: Fri Jun 14, 2019 3:12 am
by nagioscarnovale
Thanks for the reply

Re: logstash dies with IOError on messages

Posted: Fri Jun 14, 2019 6:27 am
by scottwilkerson
nagioscarnovale wrote:Thanks for the reply
No problem.