Failed to flush outgoing items

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Failed to flush outgoing items

Post by stecino »

Hello all,

I am getting the following errors in Logstash, which break the transport pipe:

{:timestamp=>"2014-12-30T20:15:35.274000-0500", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>#<Encoding::UndefinedConversionError: "\x80" from ASCII-8BIT to UTF-8>,
:backtrace=>["org/jruby/RubyString.java:7575:in `encode'", "json/ext/GeneratorMethods.java:71:in `to_json'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:100:in
`bulk_ftw'", "org/jruby/RubyArray.java:2404:in `collect'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:97:in `bulk_ftw'",
"/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:315:in `flush'",
"/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1339:in `each'",
"/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in
`buffer_flush'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:311:in
`receive'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:86:in `handle'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:78:in `worker_setup'"], :level=>:warn}

From what I have read, the issue is most likely a compatibility problem between Logstash and Elasticsearch?

Someone suggested an alternative: adding a Ruby filter.

ruby {
code => "begin; if !event['message'].nil?; event['message'] = event['message'].force_encoding('ASCII-8BIT').encode('UTF-8', :invalid => :replace, :undef => :replace, :replace => '?'); end; rescue; end;"
}
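For reference, here is a minimal sketch of what that filter's `encode` call does to the offending byte, run as plain JRuby/Ruby outside of Logstash (the sample message is an assumption for illustration):

```ruby
# Simulate the filter's conversion on a message containing the invalid byte \x80.
# ASCII-8BIT (binary) bytes above \x7F have no defined mapping to UTF-8, so
# :undef => :replace substitutes the replacement string instead of raising
# the Encoding::UndefinedConversionError seen in the log above.
raw = "price: \x80 100".force_encoding('ASCII-8BIT')

begin
  raw.encode('UTF-8')  # what Logstash's to_json effectively attempts
rescue Encoding::UndefinedConversionError => e
  puts "without :replace => #{e.class}"
end

clean = raw.encode('UTF-8', :invalid => :replace, :undef => :replace, :replace => '?')
puts clean  # => "price: ? 100" -- the original byte is lost
```

Note that the bad byte is replaced rather than recovered, which is the trade-off discussed below.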
sreinhardt
-fno-stack-protector
Posts: 4366
Joined: Mon Nov 19, 2012 12:10 pm

Re: Failed to flush outgoing items

Post by sreinhardt »

Well, it looks like you are attempting to convert from what Logstash thinks is ASCII-8BIT encoding to UTF-8. That would normally be fine, but your log is sending the byte \x80, which is not a valid ASCII character, as the ASCII table ends at \x7F. The correct way to resolve this is to determine the actual character set being sent and alter the Logstash input accordingly. You could apply that filter, but it doesn't seem like the correct fix to me: you might be losing otherwise perfectly valid characters by replacing them with question marks, when with the proper originating encoding they would translate just fine.
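As a sketch of that approach: if the senders turn out to use a Windows code page, you would declare it on the input codec rather than patching bytes in a filter. The port, type, and CP1252 value here are assumptions; substitute whatever your inputs actually use:

```
input {
  tcp {
    port  => 5544                           # assumption: your existing TCP input port
    type  => "syslog"
    codec => plain { charset => "CP1252" }  # tell Logstash the real source encoding
  }
}
```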
Nagios-Plugins maintainer exclusively, unless you have other C language bugs with open-source nagios projects, then I am happy to help! Please pm or use other communication to alert me to issues as I no longer track the forum.
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Re: Failed to flush outgoing items

Post by stecino »

sreinhardt wrote:Well, it looks like you are attempting to convert from what Logstash thinks is ASCII-8BIT encoding to UTF-8. That would normally be fine, but your log is sending the byte \x80, which is not a valid ASCII character, as the ASCII table ends at \x7F. The correct way to resolve this is to determine the actual character set being sent and alter the Logstash input accordingly. You could apply that filter, but it doesn't seem like the correct fix to me: you might be losing otherwise perfectly valid characters by replacing them with question marks, when with the proper originating encoding they would translate just fine.
How would I go about doing this?
tmcdonald
Posts: 9117
Joined: Mon Sep 23, 2013 8:40 am

Re: Failed to flush outgoing items

Post by tmcdonald »

What sort of device is sending those logs? That is where you would be able to find out the encoding. There might be an option to change it, or if you have documentation it might specify what the encoding is.
Former Nagios employee
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Re: Failed to flush outgoing items

Post by stecino »

tmcdonald wrote:What sort of device is sending those logs? That is where you would be able to find out the encoding. There might be an option to change it, or if you have documentation it might specify what the encoding is.
These are all Linux servers so far. I don't have anything else added.
sreinhardt
-fno-stack-protector
Posts: 4366
Joined: Mon Nov 19, 2012 12:10 pm

Re: Failed to flush outgoing items

Post by sreinhardt »

Did you happen to change any language or region settings on the sending servers that differ from the log server?

I did find that \x80 may be a euro character (a currency symbol, just like the dollar sign), but it is not technically valid in the ASCII table. I believe you need to switch the base input to UTF-8 on the Logstash side, but I'm not 100% sure. I'll see if I can dig up the config change you need to set UTF-8 as your default input for that filter.
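For what it's worth, \x80 is the euro sign in the Windows-1252 code page, which you can verify with a quick JRuby one-off (a sketch, not part of Logstash itself):

```ruby
# Interpreted as Windows-1252, byte \x80 converts cleanly to UTF-8 as the euro
# sign, which is why declaring the right source charset beats replacing bytes
# with '?'.
byte = "\x80"

puts byte.force_encoding('Windows-1252').encode('UTF-8')  # => "€"

# The same byte, treated as ASCII-8BIT, cannot be converted at all:
begin
  byte.force_encoding('ASCII-8BIT').encode('UTF-8')
rescue Encoding::UndefinedConversionError
  puts 'undefined in ASCII-8BIT'
end
```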
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Re: Failed to flush outgoing items

Post by stecino »

I am getting another error, but this time it is this:

[root@pden2nls1 logstash]# cat logstash.log | grep 2015-02-26 | more
{:timestamp=>"2015-02-26T15:49:18.529000-0800", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>#<Errno::EBADF: Bad file descriptor - Bad file descriptor>,
:backtrace=>["org/jruby/RubyIO.java:2097:in `close'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:173:in `connect'",
"org/jruby/RubyArray.java:1613:in `each'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/connection.rb:139:in `connect'",
"/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/request.rb:86:in `execute'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/request.rb:78:in `execute'",
"/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:325:in `execute'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/ftw-0.0.39/lib/ftw/agent.rb:217:in `post!'",
"/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:106:in `bulk_ftw'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'",
"/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:315:in `flush'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'",
"org/jruby/RubyHash.java:1339:in `each'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'",
"/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:159:in `buffer_receive'",
"/usr/local/nagioslogserver/logstash/lib/logstash/outputs/elasticsearch.rb:311:in `receive'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:86:in `handle'", "/usr/local/nagioslogserver/logstash/lib/logstash/outputs/base.rb:78:in `worker_setup'"], :level=>:warn}

When I start seeing these errors in the Logstash log, I notice that the index files are much smaller than in the normal operational state. This results in failures in both Elasticsearch and Logstash. The containers are running, but the status on the NLS admin tab is red. If I restart, through the GUI or from within the VM itself, it will come up, but the status won't clear from red to green. However, when I delete those small index files and then restart, it works, and new index files are created.

My question: what could cause this problem? And how can I monitor the status of Elasticsearch and Logstash per cluster node, as shown on the GUI?
scottwilkerson
DevOps Engineer
Posts: 19396
Joined: Tue Nov 15, 2011 3:11 pm
Location: Nagios Enterprises
Contact:

Re: Failed to flush outgoing items

Post by scottwilkerson »

This can be caused by a bug which was fixed in 1.3. Once you upgrade to 1.3, you should re-apply the configuration (even if you have no changes) to update to the new config format.
Former Nagios employee
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Re: Failed to flush outgoing items

Post by stecino »

scottwilkerson wrote:This can be caused by a bug which was fixed in 1.3. Once you upgrade to 1.3, you should re-apply the configuration (even if you have no changes) to update to the new config format.
Cool, thanks. I will upgrade to 1.3; I am currently on 1.1.
Also, is there a way to monitor the cluster status? Do you have any API I could use?
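One option, sketched here as a suggestion rather than an official NLS feature: Elasticsearch itself exposes a standard cluster health endpoint (`GET /_cluster/health`) you could poll, e.g. `curl http://localhost:9200/_cluster/health`. The sample response string and the green/yellow/red mapping below are illustrative assumptions about how you might turn it into a Nagios-style check:

```ruby
require 'json'

# Trimmed sample of the fields _cluster/health returns (illustrative values).
sample = '{"cluster_name":"nagioslogserver","status":"yellow",' \
         '"number_of_nodes":2,"unassigned_shards":5}'

health = JSON.parse(sample)

# Map the green/yellow/red status to Nagios-style exit codes
# (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN).
exit_code = { 'green' => 0, 'yellow' => 1, 'red' => 2 }.fetch(health['status'], 3)

puts "#{health['cluster_name']} is #{health['status']} " \
     "(#{health['number_of_nodes']} nodes, " \
     "#{health['unassigned_shards']} unassigned shards)"
puts "exit code: #{exit_code}"
```

In a real check you would fetch the JSON from the node over HTTP instead of using a hardcoded sample.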
stecino
Posts: 248
Joined: Thu Mar 14, 2013 4:42 pm

Re: Failed to flush outgoing items

Post by stecino »

I did the upgrade to 1.3 by downloading the source tarball, but it is still showing 1.1.
Also, on one of the cluster nodes, although the upgrade restarts Elasticsearch, it shows as down on the GUI. I had to kill all the instances and manually restart Logstash and Elasticsearch.
Locked