check_http issue

Posted: Wed Jan 08, 2014 2:45 pm
by the_Grinch
I am monitoring some public-facing websites that I have no backend access to. I'm running into an issue where I get an HTTP/1.1 400 Bad Request error for one of the sites. I defined a service that adds the -H flag, which corrected the issue for one site but not the other. If I run the following:

./check_http -H <www.address.com> I get the following output:

HTTP OK: HTTP/1.1 200 OK - 46319 bytes in 1.142 second response time |time=1.142137s;;;0.000000 size=46319B;;;0

But when this check runs automatically I get the bad request. Any ideas?

#Checks to see if http is up (Caesar's, Harrah's, and Golden_Nugget needs to us$
define service{
    host_name               <name of sites>
    service_description     HTTP
    max_check_attempts      5
    check_interval          5
    retry_interval          3
    check_period            24x7
    notification_interval   10
    notification_period     24x7
    notification_options    w,c,r,u
    contact_groups          admins
    check_command           check_http!-H
    }

Re: check_http issue

Posted: Wed Jan 08, 2014 2:48 pm
by abrist
In your definition, is "<name of sites>" the same as the actual FQDN/URL for the site?

Re: check_http issue

Posted: Wed Jan 08, 2014 3:32 pm
by the_Grinch
Nah, I placed the hostnames I assigned in the hosts.cfg file there.

Re: check_http issue

Posted: Wed Jan 08, 2014 3:47 pm
by abrist
check_http will use the object's "hostname". You will need to specify the actual FQDN hostname to check in the check itself, or configure the hosts to use the FQDN . . .
We see this when you have a web server serving multiple vhost sites. You may need to create a new command for check_http that changes the $HOSTNAME$/$HOSTADDRESS$ macro to an $ARGn$.
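For example, a minimal sketch of that approach (the command name check_http_vhost and the example FQDN www.address.com are placeholders, not anything from your config) would be a command definition that takes the vhost name as an argument instead of using the host object's address:

define command{
    command_name    check_http_vhost
    command_line    $USER1$/check_http -H $ARG1$
    }

define service{
    host_name               <name of sites>
    service_description     HTTP
    check_command           check_http_vhost!www.address.com
    ...
    }

That way the -H value passed to check_http is the real public FQDN (so the Host: header matches a vhost on the server), while your host objects can keep whatever hostnames/addresses you assigned in hosts.cfg.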