While you can set up Nagios Log Server instances that are geographically distant from one another, it is not recommended. In any case, this would not fit your requirement - there is no way to stand up a Nagios Log Server box and have it *not* collect data.
In theory, you *could* make this work by using the following procedure:
1. Stand up a Nagios Log Server box at the remote site. DO NOT connect it to your existing cluster.
2. Install your license and complete the default configuration.
3. Configure Logstash via the Web GUI as appropriate.
4. Shut down Elasticsearch and prevent it from starting on boot. (Logstash will die at this point.)
5. Manually edit the /usr/local/nagioslogserver/logstash/etc/conf.d/999_outputs.conf file to look something like the example below.
6. On your actual cluster, receive all of the inbound data on a TCP port via Logstash.
You could create multiple input/output definitions as required (though you would do this by editing the files from the command line instead of through the GUI).
For example:
/usr/local/nagioslogserver/logstash/etc/conf.d/000_inputs.conf:

```
input {
    tcp {
        type => 'import_json'
        tags => 'import_json'
        port => 2057
        codec => json
    }
    tcp {
        type => 'import_raw'
        tags => 'import_raw'
        port => 2056
    }
}
```
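The import_json input expects newline-delimited JSON on port 2057. A minimal Python sketch of how a client could frame an event for that input - the field names here are hypothetical examples, not anything Nagios Log Server requires:

```python
import json
import socket

# Hypothetical test event; pick whatever fields suit your logs.
event = {"message": "test log line", "host": "remote-site-01"}

# Logstash's tcp input with codec => json reads one JSON document
# per line, so terminate the payload with a newline.
payload = (json.dumps(event) + "\n").encode("utf-8")

# To actually send it to the relay, uncomment on a live system:
# with socket.create_connection(("127.0.0.1", 2057)) as s:
#     s.sendall(payload)

print(payload)
```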
/usr/local/nagioslogserver/logstash/etc/conf.d/999_outputs.conf:

```
output {
    if [type] == 'import_json' {
        tcp {
            host => '192.168.1.1'
            port => 6666
        }
    }
    if [type] == 'import_raw' {
        tcp {
            host => '192.168.1.1'
            port => 7777
        }
    }
}
```

Note that the host value must be quoted, or Logstash will fail to parse the configuration.
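The matching input definitions on the cluster side are not shown above. A minimal sketch, assuming the ports from the output example (6666 for JSON, 7777 for raw) and that the JSON stream should be decoded on arrival:

```
input {
    tcp {
        port => 6666
        codec => json
    }
    tcp {
        port => 7777
    }
}
```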
In theory, the above procedure would work: Logstash would act as a relay, taking data in at the remote site and forwarding it via TCP to your actual cluster. I have not attempted to set this up, but it is a feasible option.