How I use Internal IP and External IP for geoip on Maps
Posted: Wed Jul 17, 2019 4:15 pm
This is my story... Not for the technically challenged.
How I got Internal IP to work for geoip on NLS Maps
First I had to find a 'trusted source of information' for mapping IP subnets to physical locations. I decided to use our MS Active Directory as the source. The data there is kept current at my company by a very professional and dedicated employee. All building locations are defined in Sites, and all the subnet definitions tie to those sites through the Subnets container. I use Perl to pull data from everything.
Basically: identify the data, pull it locally, reshape it to look the way you need, and save it in a database. I do this for all things. The AD Sites and Subnets are refreshed in my DB once a day. New subnets don't show up all that often, so once a day works.
The perl code basics are:
Code: Select all
# get AD information and populate the results array
print "get the AD information\n";
&get_ad();
# get Building Geo-Coordinates - Manually created file one line per unique site (about 200 lines)
print "get the Building geoip information\n";
&get_bldg("geoip.dat");
# Print out the hash for Debugging
#print Dumper(\%geodata) if $DEBUG;
# open the database for writing and merge the 2 data sources into a useful table.
print "open the database for write\n";
&db_connect();
# use the results array to populate the MySQL database
print "write the data\n";
&db_write();
# close the database after writing
print "close the database\n";
&db_disconnect();
Lots of magic happens in those subroutines; everything above runs once a day. So, the DB now has AD data with my longitude and latitude merged in. The fields below are the DB data. More specific code is available if asked for.
The AD Subnet fields are
CN Description DistinguishedName Name Location
My additional fields - pulled from typical geoip information
IPStart IPEnd Building City Country Longitude Latitude CountryCode Continent LngLatLocation Time_Zone Country_Name IPasNumStart IPasNumEnd Area_Code Postal_Code Real_Region_Name Region_Name DMA_Code IP
The Database has 10,151 entries today.
Okay. So I have internal IP data and locations. Now I need to get it automated and used in NLS.
I found a Logstash plugin that can read MySQL and other DB data and make it available to enhance the Elasticsearch data.
The plugin will run a query against the MySQL DB, and cache the data in RAM for whatever interval you set. Then, when Logstash processes the input data, the plugin filter will use the data stored in RAM and add to what is stored by Elasticsearch.
There are data type issues when pulling from a static DB with the jdbc plugin. For example, a MySQL int column will hold the decimal form of an IP address, but when it gets fed to the jdbc plugin it fails because the value is outside the allowed range of numbers. I had to change the field definition in MySQL from int to bigint for that data; the mapping then went from MySQL int to the jdbc long data type.
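As a quick sanity check of that range problem, here is a minimal Ruby sketch using the same IPAddr conversion the later filter configs use. The test address is the made-up one from later in this post; the point is that its decimal form is well above the signed 32-bit maximum:

```ruby
require 'ipaddr'
require 'socket'

# Convert a dotted-quad IP to its decimal form, as the filter does later.
ip_dec = IPAddr.new('199.198.197.196', Socket::AF_INET).to_i

int_max = 2**31 - 1   # largest value a signed 32-bit int can hold

puts ip_dec            # 3351692740
puts ip_dec > int_max  # true: this is why the column needs to be bigint
```

Any address at or above 128.0.0.0 converts to a decimal value that overflows a signed 32-bit int, so bigint (jdbc long) is the safe column type.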
So, to install the required plugin, as the nagios user. If you do not cd to the directory first you may not succeed; I did not at first, and that was a hard lesson to learn.
Code: Select all
cd /usr/local/nagioslogserver/logstash
bin/logstash-plugin install logstash-filter-jdbc_static
The jdbc_static plugin install did not work at first as it was looking for a piece of code that NLS did not have in place. I needed to add it manually.
Code: Select all
cd /usr/local/nagioslogserver/logstash
cd vendor/bundle/jruby/1.9/gems/logstash-core-2.4.1-java/lib/logstash/util
vi loggable.rb
# encoding: utf-8
require "logstash/logging"
require "logstash/namespace"

module LogStash module Util
  module Loggable
    def self.included(klass)
      def klass.logger
        @logger ||= Cabin::Channel.get(LogStash)
      end

      def logger
        self.class.logger
      end
    end
  end
end; end
Once this was in place, the plugin worked, and I could start the actual debugging.
Note: Special thanks to Scott Wilkerson for that solution.
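For anyone curious why that shim works: self.included is a Ruby hook that runs when a module is mixed into a class, and here it defines a memoized logger on the class plus a delegating logger method for instances. A standalone sketch of the same pattern, with a plain string standing in for the Cabin channel:

```ruby
module Loggable
  # Hook fired when this module is mixed into a class.
  def self.included(klass)
    # Define a memoized class-level logger...
    def klass.logger
      @logger ||= "logger-for-#{name}"   # stand-in for Cabin::Channel.get(LogStash)
    end

    # ...and have instances delegate to the class-level one.
    def logger
      self.class.logger
    end
  end
end

class Worker
  include Loggable
end

puts Worker.logger       # logger-for-Worker
puts Worker.new.logger   # same object, via the instance delegate
```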
Then set the proper file permissions
Code: Select all
chmod 775 loggable.rb
Next step was to make the configuration work from the command line. That is the only way I could see to do debugging.
Taking baby steps - can I see anything at all - does NLS see me?
Make the test*.conf files. From the command line, run the program:
/usr/local/nagioslogserver/logstash/bin/logstash -f [testing config files]
Paste in the following line while Logstash is waiting on standard input. I just copy and paste the log data below into the command window.
02:36.01 /var/log/Service1/myapp.log Testing one two three
Once you see output, use Ctrl-D to exit, or keep pasting in more lines to test.
Code: Select all
cat test01.conf
input {
stdin { }
}
filter {
# test things here
grok {
match => { "message" => "%{DATA:justtime} %{DATA:logsource} %{GREEDYDATA:msg}" }
} #
}
output {
stdout {
codec => rubydebug
}
}
So, this next configuration file was where I ended up. I added to it slowly until I could get data; I went through six config files to get here. From this I knew I could get Ruby to do things. I added slowly to see each item work, played with syntax, and had to make things work in the older version of Logstash that NLS uses.
Code: Select all
cat test02.conf
input {
stdin { }
}
filter {
# test things here
grok {
match => { "message" => "%{DATA:justtime} %{DATA:logsource} %{GREEDYDATA:msg}" }
} #
# incorrectly autopopulates to first day of year
date {
match => [ "justtime", "HH:mm.ss" ]
target => "incorrectfulldatetime"
timezone => "America/Los_Angeles"
} # date
# use ruby to augment with current day
ruby {
code => "
event['fulldatetime'] = Time.now.strftime('%Y-%m-%d') + ' ' + event['justtime']
"
}
date {
match => [ "fulldatetime", "YYYY-MM-dd HH:mm.ss" ]
target => "correctfulldatetime"
timezone => "America/Los_Angeles"
} # date
# split apart log source to extract service name
ruby {
code => "
fpath = event['logsource'].split('/')
event['serviceName'] = fpath[fpath.length-2].downcase
"
}
# append msg field to disk
ruby {
code => "
File.open('/tmp/mydebug.log','a') { |f| f.puts event['msg'] }
"
}
}
output {
stdout {
codec => rubydebug
}
}
Once I had that working, I started to try IP stuff.
I added an IP to the data we paste in to simulate the log line.
02:36.01 /var/log/Service1/myapp.log 192.168.1.245 Ruby is great
With this config file, the IP should show as a separate item in the output.
Code: Select all
input {
stdin { }
}
# The following data is the input - pasted in
# 02:36.01 /var/log/Service1/myapp.log 192.168.1.245 Ruby is great
filter {
# test things here
grok {
match => { "message" => "%{DATA:justtime} %{DATA:logsource} %{IPORHOST:c-ip} %{GREEDYDATA:msg}" }
} #
# incorrectly autopopulates to first day of year
date {
match => [ "justtime", "HH:mm.ss" ]
target => "incorrectfulldatetime"
timezone => "America/Los_Angeles"
} # date
# use ruby to augment with current day
ruby {
code => "
event['fulldatetime'] = Time.now.strftime('%Y-%m-%d') + ' ' + event['justtime']
"
}
date {
match => [ "fulldatetime", "YYYY-MM-dd HH:mm.ss" ]
target => "correctfulldatetime"
timezone => "America/Los_Angeles"
} # date
# split apart log source to extract service name
ruby {
code => "
fpath = event['logsource'].split('/')
event['serviceName'] = fpath[fpath.length-2].downcase
"
}
ruby {
code => "
require 'ipaddr'
decimalip = event['c-ip']
event['clientip'] = IPAddr.new(decimalip,Socket::AF_INET).to_i
"
}
}
output {
stdout {
codec => rubydebug {metadata => true}
}
}
From this, once we have the IP converted to a decimal value, we can use it in a SQL query and find whether it falls between the decimal start and end IP of a subnet definition. Try this with an internal IP and then with an external IP. I made up 199.198.197.196 and found out that it is real and in use in Canada.
Code: Select all
input {
stdin { }
}
# The following data is the input - pasted in - one line at a time
# - testing different IP addresses - internal and external
# 02:36.01 /var/log/Service1/myapp.log 172.22.86.206 Ruby is great
# 02:36.01 /var/log/Service1/myapp.log 10.249.91.41 Ruby is what
# 02:36.01 /var/log/Service1/myapp.log 199.198.197.196 Ruby is Ruby
#
filter {
# test things here
grok {
match => { "message" => "%{DATA:justtime} %{DATA:logsource} %{IPORHOST:c-ip} %{GREEDYDATA:msg}" }
} #
# incorrectly autopopulates to first day of year
date {
match => [ "justtime", "HH:mm.ss" ]
target => "incorrectfulldatetime"
timezone => "America/Los_Angeles"
} # date
# use ruby to augment with current day
ruby {
code => "
event['fulldatetime'] = Time.now.strftime('%Y-%m-%d') + ' ' + event['justtime']
"
}
date {
match => [ "fulldatetime", "YYYY-MM-dd HH:mm.ss" ]
target => "correctfulldatetime"
timezone => "America/Los_Angeles"
} # date
# split apart log source to extract service name
ruby {
code => "
fpath = event['logsource'].split('/')
event['serviceName'] = fpath[fpath.length-2].downcase
"
}
ruby {
code => "
require 'ipaddr'
decimalip = event['c-ip']
event['clientipdec'] = IPAddr.new(decimalip,Socket::AF_INET).to_i
event['clientip'] = event['c-ip']
event['clientipnew'] = decimalip
"
}
# where is a loopback address anyway? Don't bother looking it up.
if [clientipnew] !~ /127\.0\.0\.1/ {
jdbc_static {
loaders => [
{
id => "remote_geoips"
query => "SELECT
startrange, endrange, building,
geoiplongitude, geoiplatitude, geoiplocation,
geoipcity, geoiptime_zone, geoipcontinent_code,
geoipcountry_code3, geoipcountry_code2, geoipcountry_name
FROM CMDB.ADSubnets ORDER BY startrange"
local_table => "local_geoips"
}
]
local_db_objects => [
{
name => "local_geoips"
index_columns => ["startrange"]
columns => [
["startrange", "bigint"],
["endrange", "bigint"],
["building", "varchar(8)"],
["geoiplongitude", "decimal(11,8)"],
["geoiplatitude", "decimal(10,8)"],
["geoiplocation", "varchar(64)"],
["geoipcity", "varchar(64)"],
["geoiptime_zone", "varchar(64)"],
["geoipcontinent_code", "varchar(2)"],
["geoipcountry_code3", "varchar(3)"],
["geoipcountry_code2", "varchar(2)"],
["geoipcountry_name", "varchar(64)"]
]
}
]
local_lookups => [
{
query => "SELECT
geoiplongitude longitude, geoiplatitude latitude, geoiplocation location,
geoipcity city, geoiptime_zone timezone, geoipcontinent_code continent_code,
geoipcountry_code3 country_code3, geoipcountry_code2 country_code2,
geoipcountry_name country_name, building FROM local_geoips
WHERE :clientipnumber BETWEEN startrange AND endrange"
parameters => {clientipnumber => "[clientipdec]"}
target => "pseudo_geoip"
}
]
# using add_field here to add & rename values to the event root
add_field => { "[geoip][longitude]" => "%{[pseudo_geoip][0][longitude]}" }
add_field => { "[geoip][latitude]" => "%{[pseudo_geoip][0][latitude]}" }
add_field => { "[geoip][location]" => [ "%{[pseudo_geoip][0][longitude]}", "%{[pseudo_geoip][0][latitude]}" ] }
add_field => { "[geoip][city]" => "%{[pseudo_geoip][0][city]}" }
add_field => { "[geoip][timezone]" => "%{[pseudo_geoip][0][timezone]}" }
add_field => { "[geoip][continent_code]" => "%{[pseudo_geoip][0][continent_code]}" }
add_field => { "[geoip][country_code3]" => "%{[pseudo_geoip][0][country_code3]}" }
add_field => { "[geoip][country_code2]" => "%{[pseudo_geoip][0][country_code2]}" }
add_field => { "[geoip][country_name]" => "%{[pseudo_geoip][0][country_name]}" }
add_field => { "[geoip][building]" => "%{[pseudo_geoip][0][building]}" }
#
# Cleanup fields
# remove_field => ["pseudo_geoip"]
staging_directory => "/tmp/logstash/jdbc_static/import_data"
# run loaders every 2 hours (at the top of the hour)
loader_schedule => "0 */2 * * *"
jdbc_user => "adsubnet"
jdbc_password => "adsubnetreader"
jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
jdbc_connection_string => "jdbc:mysql://131.198.86.206:3306/CMDB?useUnicode=true&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC"
}
}
# if the local lookup found nothing, add_field leaves the literal
# %{[pseudo_geoip][0][...]} placeholder text in the field, so match on it
if [geoip][city] =~ /pseudo/ {
mutate {
# Custom fields added for Internal IP spaces
remove_field => ["[geoip][city]"]
remove_field => ["[geoip][building]"]
}
geoip {
database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
source => "clientip"
}
}
mutate {
# Cleanup fields
#remove_field => ["pseudo_geoip"]
#remove_field => ["clientip"]
}
}
output {
stdout {
codec => rubydebug {metadata => true}
}
}
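The BETWEEN lookup that local_lookups performs can be sanity-checked in plain Ruby before involving the plugin. This is only a sketch: the subnet ranges and building names below are made up, while the real data comes from the CMDB.ADSubnets query above:

```ruby
require 'ipaddr'
require 'socket'

# Made-up subnet ranges in decimal form, like rows in the local table.
SUBNETS = [
  { start: IPAddr.new('10.249.91.0').to_i, stop: IPAddr.new('10.249.91.255').to_i,
    building: 'BLDG-A', city: 'Portland' },
  { start: IPAddr.new('172.22.86.0').to_i, stop: IPAddr.new('172.22.87.255').to_i,
    building: 'BLDG-B', city: 'Seattle' }
]

# Equivalent of: SELECT ... WHERE :clientipnumber BETWEEN startrange AND endrange
def lookup(ip)
  dec = IPAddr.new(ip, Socket::AF_INET).to_i
  SUBNETS.find { |s| dec.between?(s[:start], s[:stop]) }
end

puts lookup('10.249.91.41')[:building]   # BLDG-A
puts lookup('199.198.197.196').inspect   # nil - fall through to the GeoIP database
```

A miss (nil) is the external-IP case, which the production filter hands to the regular geoip filter and its GeoLite2 database.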
Once this works for both types of IP, Internal and External, transplant the filter to NLS.
I did learn that NLS will not show the geoip data instantly; I did not see geoip in the NLS data until the daily index was recreated. I turned on Logstash debug logging and watched the log file, and in there I could see that the code was working.
I will send my actual code to anyone who asks for it. I will have to change usernames, passwords, and IPs, of course. There is a lot of detail I did not add here, the NLS filter in production use for example, but I did provide enough to get the filter working from the command line. The database schema is not provided, but maybe the filter configuration will give you enough information to figure it out. Maybe it will inspire you.
Suggestion: "Take Small Bites." If you want to do this, use very small incremental steps and make sure each part works before taking the next step. Add to configuration files in small logical pieces so you can test, and then test, and finally test.
Thanks
Steve B