
No Logstash Config?

Posted: Wed Aug 31, 2016 11:54 am
by StormTheGates
Hello, I attempted to do a full install via the source on CentOS 6.8

Everything seems to be OK except that no data is being collected or stored. When attempting to start the Logstash service via the GUI I get the following error:

{:timestamp=>"2016-08-31T11:27:50.927000-0400", :message=>"Error: No config files found: /usr/local/nagioslogserver/logstash/etc/conf.d/*\nCan you make sure this path is a logstash config file?"}


This seems fairly straightforward. However, when I go to look, there are NO files in /usr/local/nagioslogserver/logstash/etc/conf.d/

Were there supposed to be files created during the install? Am I missing something / do I need to set up my own config files?

Thank you

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 12:21 pm
by rkennedy
There should have been configuration files created in this directory.

Code: Select all

[root@localhost conf.d]# ll
total 12
-rw-rw-r--. 1 apache apache 636 Aug 31 12:55 000_inputs.conf
-rw-rw-r--. 1 apache apache 987 Aug 31 12:55 500_filters.conf
-rw-rw-r--. 1 apache apache 501 Aug 31 12:55 999_outputs.conf
[root@localhost conf.d]# pwd
/usr/local/nagioslogserver/logstash/etc/conf.d
[root@localhost conf.d]#
Could you upload your install.log for us to take a look at? Was anything previously installed on this machine, or is it a minimal install?

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 12:33 pm
by StormTheGates
Thank you for the fast reply. install.log has been pasted to:

https://ybin.me/p/4fcc87ce6c9379ff#toXr ... HzMwjGaBY=

There was an issue early in the install process because my host has blocked NTP traffic, which I had to work around. After that it worked.

This server is very minimal, except that it has been hardened to DISA STIG standards. That may be why the installer lacked the permissions to write these files.

Would it be possible to download these files somewhere or find them in the source zip?

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 12:47 pm
by rkennedy
Nothing stands out in the install.log. The hardening is more than likely what prevented them from being created. I'll paste the three files below, but keep in mind there may be other strange issues on the machine we're unaware of because of the hardening.

Code: Select all

[root@localhost conf.d]# cat 000_inputs.conf
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 31 Aug 2016 16:55:55 +0000
#

#
# Global inputs
#

input {
    syslog {
        type => 'syslog'
        port => 5544
    }
    tcp {
        type => 'eventlog'
        port => 3515
        codec => json {
            charset => 'CP1252'
        }
    }
    tcp {
        type => 'import_raw'
        tags => 'import_raw'
        port => 2056
    }
    tcp {
        type => 'import_json'
        tags => 'import_json'
        port => 2057
        codec => json
    }
}

#
# Local inputs
#

Code: Select all

[root@localhost conf.d]# cat 500_filters.conf
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 31 Aug 2016 16:55:55 +0000
#

#
# Global filters
#

filter {
    if [program] == 'apache_access' {
        grok {
            match => [ 'message', '%{COMBINEDAPACHELOG}']
        }
        date {
            match => [ 'timestamp', 'dd/MMM/yyyy:HH:mm:ss Z', 'MMM dd HH:mm:ss', 'ISO8601' ]
        }
        mutate {
            replace => [ 'type', 'apache_access' ]
             convert => [ 'bytes', 'integer' ]
             convert => [ 'response', 'integer' ]
        }
    }

    if [program] == 'apache_error' {
        grok {
            match => [ 'message', '\[(?<timestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[%{WORD:class}\] \[%{WORD:originator} %{IP:clientip}\] %{GREEDYDATA:errmsg}']
        }
        mutate {
            replace => [ 'type', 'apache_error' ]
        }
    }
}

#
# Local filters
#

Code: Select all

[root@localhost conf.d]# cat 999_outputs.conf
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Wed, 31 Aug 2016 16:55:55 +0000
#

#
# Required output for Nagios Log Server
#

output {
    elasticsearch {
        cluster => 'a686a258-4a2f-4744-9356-0f96f3323ed7'
        host => 'localhost'
        document_type => '%{type}'
        node_name => ''
        protocol => 'transport'
        workers => 4
    }
}

#
# Global outputs
#



#
# Local outputs
#


Re: No Logstash Config?

Posted: Wed Aug 31, 2016 1:14 pm
by StormTheGates
Thank you, that was very helpful! Making progress for sure :)

New small problem:

When starting the Logstash Collector I now get this message

Code: Select all

{:timestamp=>"2016-08-31T14:12:45.843000-0400", :message=>"syslog listener died", :protocol=>:tcp, :address=>"0.0.0.0:5544", :exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use>, :backtrace=>["org/jruby/ext/socket/RubyTCPServer.java:118:in `initialize'", "org/jruby/RubyIO.java:853:in `new'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.6/lib/logstash/inputs/syslog.rb:152:in `tcp_listener'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.6/lib/logstash/inputs/syslog.rb:117:in `server'", "/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-syslog-0.1.6/lib/logstash/inputs/syslog.rb:101:in `run'"], :level=>:warn}
The key part being Address already in use - bind.

Pretty specific, and it's apparent what's up. My question is this: is rsync meant to be running on 5544? In my rsync.conf I have:

Code: Select all

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 5544

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 5544
Should I adjust these to be a different port?

Thank you for your help

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 1:26 pm
by rkennedy
When you navigate to Home -> Administration -> Global Configuration, do you see the Inputs present there? Yes, you can change 5544 to any port you'd like, which would eliminate the issue.

We do not listen on 514 by default, since it requires root privileges. If you want to change it to 514, or to anything else below 1024, feel free to by following this document - https://assets.nagios.com/downloads/nag ... Server.pdf
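If you do change the port, the syslog input in the generated 000_inputs.conf would end up looking something like this. This is a sketch only: 5545 is an arbitrary example port, and the change should be made through the UI, since Nagios Log Server regenerates the conf.d files.

```
input {
    syslog {
        type => 'syslog'
        port => 5545    # example replacement for 5544; anything <1024 requires root
    }
}
```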

As for rsync running on 5544, I haven't heard of this before. My googling didn't get very far either. It may be custom to your environment. Are you sure it wasn't rsyslog?
We are now ready to configure rsyslog. Open the configuration file for rsyslog. It is located here:

/etc/rsyslog.conf

Usually, this is a basic configuration that has been shipped with the operating system. In the end, our configuration should look something like this (the minimum for our scenario):

$ModLoad imudp.so
$ModLoad ommysql
$ModLoad sm_cust_bindcdr

$UDPServerRun 514
(see http://www.rsyslog.com/tag/udp/ / http://www.rsyslog.com/doc/master/confi ... imudp.html)
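To confirm what is actually bound to 5544 before restarting Logstash, something like this can be run (ss is from iproute2; older CentOS 6 systems may only have netstat, hence the fallback):

```shell
# List TCP listeners on port 5544 and show the owning process.
# The trailing `|| true` keeps the exit status clean when nothing matches.
ss -tlnp 2>/dev/null | grep ':5544' || netstat -tlnp 2>/dev/null | grep ':5544' || true
```

If this shows rsyslogd holding the port, that explains the EADDRINUSE error in the listener log above.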

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 1:57 pm
by StormTheGates
I guess I just got a bit confused. For the server where Nagios Log Server is installed, should I or should I NOT have the rsyslog service running? (I manually configured rsyslog to run on 5544 thinking that was what was needed.)

Is Logstash filling the role of rsyslog? So when I have my node doing:

Code: Select all

$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList   # run asynchronously
$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.*   @@10.10.10.117:5544  
The 5544 should be the Logstash service, not an rsyslog service?

Without it the following occurs:

Code: Select all

{:timestamp=>"2016-08-31T15:05:35.966000-0400", :message=>"Got error to send bulk of actions: None of the configured nodes are available: []", :level=>:error}
{:timestamp=>"2016-08-31T15:05:35.966000-0400", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [], :backtrace=>["org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(org/elasticsearch/client/transport/TransportClientNodesService.java:279)", "org.elasticsearch.client.transport.TransportClientNodesService.execute(org/elasticsearch/client/transport/TransportClientNodesService.java:198)", "org.elasticsearch.client.transport.support.InternalTransportClient.execute(org/elasticsearch/client/transport/support/InternalTransportClient.java:106)", "org.elasticsearch.client.support.AbstractClient.bulk(org/elasticsearch/client/support/AbstractClient.java:163)", "org.elasticsearch.client.transport.TransportClient.bulk(org/elasticsearch/client/transport/TransportClient.java:356)", "org.elasticsearch.action.bulk.BulkRequestBuilder.doExecute(org/elasticsearch/action/bulk/BulkRequestBuilder.java:164)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:91)", "org.elasticsearch.action.ActionRequestBuilder.execute(org/elasticsearch/action/ActionRequestBuilder.java:65)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::Elasticsearch::Protocols::NodeClient.bulk(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch/protocol.rb:224)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", 
"LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:466)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.submit(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:465)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:490)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "LogStash::Outputs::ElasticSearch.flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-0.2.8-java/lib/logstash/outputs/elasticsearch.rb:489)", "RUBY.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:219)", "org.jruby.RubyHash.each(org/jruby/RubyHash.java:1341)", "RUBY.buffer_flush(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:216)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:112)", "org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1511)", "RUBY.buffer_initialize(/usr/local/nagioslogserver/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.19/lib/stud/buffer.rb:110)"], :level=>:warn}
The important part being "Got error to send bulk of actions: None of the configured nodes are available: []"

If you can point me in the right direction I'd be very appreciative :)
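One quick sanity check of the forwarding path described above is to hand-craft a syslog line and push it straight at the Log Server's syslog input. This assumes nc (netcat) is installed, and 10.10.10.117:5544 is the address taken from the forward rule in the post above:

```shell
# Build an RFC3164-style syslog line (<13> = facility user, severity notice)
# and send it to the Logstash syslog input over TCP. The fallback echo keeps
# the command from aborting a script when the server is unreachable.
MSG="<13>$(date '+%b %d %H:%M:%S') $(hostname) nls-test: connectivity check"
echo "$MSG" | nc -w 2 10.10.10.117 5544 || echo "could not reach 10.10.10.117:5544"
```

If the test message then shows up on the dashboard, the input side is healthy and any remaining failure is on the Elasticsearch output side.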

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 2:11 pm
by StormTheGates
Success! I got it by changing the hash in 999_outputs.conf to match my cluster's hash from the system dashboard.

Things seem to be appearing now! Thank you for all of your help!
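For anyone who hits the same NoNodeAvailableException: the fix was making the cluster value in 999_outputs.conf match the cluster ID shown on the system dashboard. A sketch of the relevant block (the placeholder below is not a real ID):

```
output {
    elasticsearch {
        cluster => '<cluster-id-from-dashboard>'   # must match the dashboard value exactly
        host => 'localhost'
        document_type => '%{type}'
        protocol => 'transport'
        workers => 4
    }
}
```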

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 2:31 pm
by rkennedy
Awesome, and no problem! Nice catch there; I overlooked the cluster ID being in 999_outputs.conf.

Are we good to mark this thread as resolved?

Re: No Logstash Config?

Posted: Wed Aug 31, 2016 2:34 pm
by StormTheGates
Yes indeed, we are good to mark resolved. Thanks!