OK, so I have one XI server forwarding a few results to another. On the receiving server it is, of course, set up as a passive check. I don't want alerts on the first (SOFT) critical, as you can see in the image. However, an alert went out on the first critical result. What do I have set wrong?
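For context, the SOFT/HARD behavior on the receiving side is normally driven by max_check_attempts in the service definition. A sketch of what such a passive service might look like (host name, service description, and the check_dummy fallback command are hypothetical, not taken from the poster's config):

```
define service {
    host_name               remote-host                ; hypothetical
    service_description     Forwarded Check            ; hypothetical
    active_checks_enabled   0
    passive_checks_enabled  1
    check_command           check_dummy!3!"No passive result received"
    max_check_attempts      3     ; expect 2 SOFT results before a HARD alert
    check_interval          5
    retry_interval          1
}
```

With max_check_attempts set above 1, the expectation is that the first non-OK result stays SOFT and no notification goes out, which is exactly what the thread is about.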
Alerted when I shouldn't have been
2 of XI5.6.14 Prod/DR/DEV - Nagios LogServer 2 Nodes
See my projects on the Exchange at BanditBBS - Also check out my Nagios stuff on my personal page at Bandit's Home and at github
Re: Alerted when I shouldn't have been
If I'm not mistaken, passive results are always considered hard states.
Former Nagios employee
Re: Alerted when I shouldn't have been
tmcdonald wrote: If I'm not mistaken, passive results are always considered hard states.
That's nuts! I want the retries just like with active... results could be a fluke or something... hmm... now this is making me totally rethink wanting to use NCPA...
Re: Alerted when I shouldn't have been
BanditBBS wrote: now this is making me totally rethink wanting to use NCPA...
Not so fast:
http://nagios.sourceforge.net/docs/3_0/ ... s_are_soft
So I guess I should say "almost" always hard states.
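The linked doc refers to a main-configuration option for hosts. If memory serves, in nagios.cfg (Nagios Core 3.x and later) it looks like this:

```
# nagios.cfg -- off by default, which is why passive HOST results
# come in as HARD states; enabling it makes them honor
# max_check_attempts the way active checks do.
passive_host_checks_are_soft=1
```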
Re: Alerted when I shouldn't have been
tmcdonald wrote (quoting my "now this is making me totally rethink wanting to use NCPA..."): Not so fast:
http://nagios.sourceforge.net/docs/3_0/ ... s_are_soft
So I guess I should say "almost" always hard states.
See, now this can count towards your valid posts count
Re: Alerted when I shouldn't have been
Actually... That might be only for hosts. Gonna talk to Scott on this one. I swear I've seen this done for services.
Re: Alerted when I shouldn't have been
tmcdonald wrote: Actually... That might be only for hosts. Gonna talk to Scott on this one. I swear I've seen this done for services.
You know... believe it or not... I might be an idiot. I think it's actually fine and did as it was supposed to; I apparently don't know how to properly look at the history.
I'm investigating, and you can probably ignore this now... however, if Scott says anything interesting, let me know.
Re: Alerted when I shouldn't have been
Scott was gone, so I asked Nick Scott in his place.
So, passive checks are weird:
- Passive HOST checks, I believe, always come in as HARD, whether UP or DOWN.
- Passive SERVICE OK results are HARD (so, for those of you keeping score, all "good" passive results are HARD).
- Passive SERVICE WARNING/CRITICAL results are SOFT.
This is the default behavior. We couldn't find a passive_service_checks_are_soft option because, unlike host results, service results are already SOFT when they are WARNING/CRITICAL.
Nick and I both suspect it's your notification handler not properly checking the HARD/SOFT status of passive service checks specifically.
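To make the retry logic above concrete, here is a tiny simulation of the state-type rules as described in this post. This is an illustrative sketch, not Nagios source code; it assumes passive WARNING/CRITICAL service results count against max_check_attempts just like active results, while OK results go HARD immediately:

```python
# Illustrative simulation of Nagios-style SOFT/HARD state logic for
# passive SERVICE results, per the rules summarized above.

def state_type(prev_attempt, state, max_check_attempts):
    """Return (new_attempt, 'SOFT' or 'HARD') for one service result."""
    if state == "OK":
        return 1, "HARD"          # recoveries/OK results are HARD at once
    attempt = prev_attempt + 1
    if attempt >= max_check_attempts:
        return attempt, "HARD"    # retries exhausted -> HARD, notify
    return attempt, "SOFT"        # still retrying -> SOFT, no alert

# With max_check_attempts=3, the first CRITICAL should be SOFT:
attempt, kind = state_type(0, "CRITICAL", 3)
print(attempt, kind)              # 1 SOFT
attempt, kind = state_type(attempt, "CRITICAL", 3)
attempt, kind = state_type(attempt, "CRITICAL", 3)
print(attempt, kind)              # 3 HARD
```

A notification handler that fires on any non-OK result, without checking for the HARD state type, would alert on that very first SOFT CRITICAL, which matches the symptom in this thread.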
Re: Alerted when I shouldn't have been
Ok, check this craziness out...
From the source server: [screenshot] From the destination server: [screenshot] Umm, what the heck! This is why I got alerted, but why on earth does it show so many critical results when the source only had the one?
Re: Alerted when I shouldn't have been
This is total speculation, but the timestamps suggest the source is sending the results twice every 10 seconds or so to the destination:
15:27:18 shows up twice for the destination, then you have...
15:27:28 followed by...
15:27:38 and 15:27:39 and...
15:27:48 and 15:27:49 and finally...
15:27:58 and 15:27:59
Pretty much follows an "every 10 seconds, send out two syncs" pattern. Not sure why that would be, really.
So the source is really only checking every check_interval minutes, but seems to be syncing those results much more frequently.
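As a sanity check on that speculation, one can feed the destination timestamps listed above into a small script. The 2-second window for "same sync" is an arbitrary threshold chosen for this sketch, and the timestamps are assumed to fall on the same day:

```python
from datetime import datetime, timedelta

# Destination-side arrival times, copied from the list above.
stamps = ["15:27:18", "15:27:18", "15:27:28", "15:27:38", "15:27:39",
          "15:27:48", "15:27:49", "15:27:58", "15:27:59"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]

# Collapse results arriving within 2 seconds of the previous one --
# those are almost certainly duplicate syncs of the same check result.
groups = []
for t in times:
    if groups and (t - groups[-1][-1]) <= timedelta(seconds=2):
        groups[-1].append(t)
    else:
        groups.append([t])

print(len(times), "results collapse to", len(groups), "distinct syncs")
gaps = [(b[0] - a[0]).seconds for a, b in zip(groups, groups[1:])]
print(gaps)   # -> [10, 10, 10, 10]: a clean ten-second cadence
```

Nine raw results collapse to five distinct syncs spaced exactly ten seconds apart, which is consistent with the "two syncs every 10 seconds" pattern described above.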