Import Sources from Files
Posted: Thu Dec 15, 2016 12:34 pm
Hi All,
I have a particular scenario that I wanted to share with you in order to define a best practice for it.
- Nagios Log Server version: 1.4.4
- Cluster: No Cluster, just a Single Instance
My customer has purchased a license for Nagios Log Server in order to monitor a mission-critical application that is core to their business. They have identified some log patterns that, when they appear, require action as soon as possible.
The application is kind of legacy (I can't remember the programming language it is developed in) and it runs on AIX. The problem is that the application doesn't speak the syslog protocol, so we have to import the log files by hand. For this I was thinking of using the 'shipper.py' script that comes with Nagios Log Server.
We have done some tests in a lab environment and everything went fine: I wrote a JSON query for the log pattern, and it detected the error correctly.
Now it's time to deploy Nagios Log Server in the production environment, and I am not sure which would be the better way, or the best practice, to import the logs so that monitoring is close to real time. The customer's sysadmin told me they can send the logs to the Nagios Log Server host via scp or something similar, but what I was wondering is the following:
For example:
1) They send me the logs every 5 minutes:
1.a) They send the entire log file from the app server to Nagios LS.
1.b) I import it every 5 minutes.
The question is: if the re-sent log file is the same as before, plus the latest 5 minutes of entries, will Nagios LS realize it already has that information, discard it, and import only the new entries? Or will I end up with duplicate logs?
Which would be the better way to handle this? I know this is not an ideal scenario, so I have to come up with some script that could resolve it.
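What I have in mind is a wrapper that remembers a byte offset between runs, so that when the full file is re-sent every 5 minutes, only the newly appended lines get piped into shipper.py. A rough sketch (I'm not sure of the exact shipper.py invocation on every install, so in the demo `cat` stands in for it):

```shell
#!/bin/sh
# Sketch of an incremental import wrapper (not official NLS tooling).
# ship_new FILE STATE CMD: pipe only the bytes appended to FILE since
# the last call into CMD, remembering the position in STATE.
ship_new() {
    file=$1; state=$2; cmd=$3
    last=0
    [ -f "$state" ] && last=$(cat "$state")
    size=$(($(wc -c < "$file")))
    # If the file shrank, it was rotated or replaced: re-import it all.
    [ "$size" -lt "$last" ] && last=0
    # Ship only the bytes appended since the previous run.
    tail -c +"$((last + 1))" "$file" | $cmd
    echo "$size" > "$state"
}

# Demo run, with 'cat' standing in for the real shipper call
# (something like: python shipper.py -- adjust to your install):
rm -f /tmp/app.log.offset
printf 'one\ntwo\n'        > /tmp/app.log
ship_new /tmp/app.log /tmp/app.log.offset cat    # ships both lines
printf 'one\ntwo\nthree\n' > /tmp/app.log        # full file re-sent
ship_new /tmp/app.log /tmp/app.log.offset cat    # ships only "three"
```

The idea would be to run it from cron right after each scp transfer lands, so duplicates never reach the indexer in the first place.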
Is there any best practice for scenarios like these?
Any information will be appreciated.
Regards,
Juan