Re: [Nagios-devel] Nagios scalability issues
Posted: Mon Jul 19, 2004 11:36 am
The distributed monitoring setup in Nagios is designed to handle this problem:
instead of pushing the scheduling to the remote hosts, create a
second layer of Nagios hosts. You can create one central host to handle
all the notifications and have the distributed Nagios hosts push
their results up to it via the Obsessive Compulsive Service Processor (OCSP).
-Jason Martin
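[Editor's note: a minimal sketch of the OCSP push described above, assuming NSCA is used as the transport between the distributed servers and the central host. The host name, paths, and the format_result wrapper are illustrative, not from the original post.]

```shell
#!/bin/sh
# On each distributed server, nagios.cfg would enable an obsession command
# that runs after every service check:
#   obsess_over_services=1
#   ocsp_command=submit_check_result
#
# The wrapper formats the result in the tab-delimited form send_nsca
# expects: host<TAB>service<TAB>return_code<TAB>plugin_output.
format_result() {
    printf '%s\t%s\t%s\t%s\n' "$1" "$2" "$3" "$4"
}

# Print a sample result line so the format is visible:
format_result web01 HTTP 0 "HTTP OK - 0.12s response time"

# In a real ocsp_command wrapper the line would be piped to the central
# host (central.example.com and the paths are assumptions):
#   format_result "$@" \
#       | /usr/local/nagios/bin/send_nsca -H central.example.com \
#           -c /usr/local/nagios/etc/send_nsca.cfg
```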
On Mon, Jul 19, 2004 at 10:30:30PM +0300, Cristian M. Streng wrote:
> monitoring engine - it's a lightweight client-server system that moves the
> scheduler part from the main server to the individual machines. This way
> each machine schedules its ~10-20 service checks, and reports back to the
> server the changes in the service status. And it fixes all of Nagios's
> problems - at least all that matter to me: the main server becomes less
> loaded, and the network load is also much reduced. The server part of my
> application just collects the results from client machines and writes them
> to a database, so that adds practically no load to the server machine. I'm
> planning on writing another component that would take these results and
> send them to Nagios - but I have a few questions. What's the best way to
> send check results to Nagios? Will the external command interface work?
> I'm also interested in the scalability of this feature.
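[Editor's note: on the external command question - passive results can indeed be submitted by writing a PROCESS_SERVICE_CHECK_RESULT line to Nagios's external command pipe. A minimal sketch; the pipe path in the comment is the common default and an assumption here.]

```shell
#!/bin/sh
# Build a PROCESS_SERVICE_CHECK_RESULT external command line in the form
# Nagios expects:
#   [timestamp] PROCESS_SERVICE_CHECK_RESULT;host;service;return_code;output
format_cmd() {
    printf '[%s] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%s;%s\n' \
        "$(date +%s)" "$1" "$2" "$3" "$4"
}

# Print the line; in production it would be appended to the command pipe
# instead (path below is the common default, an assumption here):
#   format_cmd db01 MySQL 0 "OK" >> /usr/local/nagios/var/rw/nagios.cmd
format_cmd db01 MySQL 0 "OK - 12 connections"
```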
-- 
I'm so broke, I can't even pay attention.
This message is PGP/MIME signed.
This post was automatically imported from historical nagios-devel mailing list archives
Original poster: [email protected]