Re: [Nagios-devel] Fixes for url_encode()
Posted: Wed Nov 03, 2004 10:46 am
> I'll have to think about this before adding the patch.
Hey, great. Thanks.
> Slightly
> inaccurate timestamps are a known issue for passive checks and
> distributed setups,
OK, good that it's a known issue. In the setup I used, our
timestamps were off by hours and, potentially, days.
This was because I was working with disconnected hosts which would
monitor themselves and their subsystems, but which were polled
by the central Nagios server only every few hours. Under the design,
those hosts could remain disconnected for up to 3 days before
human intervention would force them to be reconnected so their
data could be reaped by passive checks from Nagios. The
problem with the check results getting delivered after such
a long delay led not only to large timestamp inaccuracies,
it also meant that, e.g., a day's worth of service
checks would all be clustered within the few seconds of time
it took Nagios to process the checks. That was the real killer
for us, and what led to the patch. We could live with the times
being off, but we wanted to see all the data spread out over
the correct time intervals, instead of all clustered in a
5-second spike.
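To illustrate the spike problem described above, here is a minimal sketch (not code from the patch itself) of submitting a passive check result using the standard Nagios `PROCESS_SERVICE_CHECK_RESULT` external-command format. The helper name and the host/service names are made up for the example; the point is that the bracketed timestamp can carry the time the check actually ran, rather than the delivery time hours later:

```python
import time

def format_passive_result(host, service, return_code, output, check_time=None):
    """Format a PROCESS_SERVICE_CHECK_RESULT external command line.

    check_time: epoch time the check was actually performed on the
    disconnected host. If the submitter preserves it, results can be
    spread over the correct intervals instead of all landing in a
    single spike at the moment Nagios processes the batch.
    """
    ts = int(check_time if check_time is not None else time.time())
    return "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s" % (
        ts, host, service, return_code, output)

# A result collected offline, replayed later with its original check time:
ran_at = 1099480000  # when the disconnected host actually ran the check
line = format_passive_result("webhost", "disk", 0, "DISK OK", check_time=ran_at)
print(line)  # → [1099480000] PROCESS_SERVICE_CHECK_RESULT;webhost;disk;0;DISK OK
```

Without the `check_time` argument, every result in a polled batch gets stamped with "now", which is exactly the clustering behavior described above.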
> Logging an entry with an out-of-whack timestamp can cause
> problems when running reports.
The reports in 1.x that I checked were all careful to sort their
input by timestamp. For instance, the availability reports were
accurate and worked out just fine with the out-of-order
timestamping resulting from that patch.
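The sort-first behavior is why out-of-order logging is harmless here. A small sketch (not the actual Nagios 1.x report code, just an illustration of the principle) showing that an availability calculation which sorts its input by timestamp gives the same answer regardless of logging order:

```python
def availability(entries, period_start, period_end):
    """entries: (epoch_time, state) tuples, state 'OK' or 'CRITICAL'.
    Returns the fraction of the period spent in the OK state."""
    ordered = sorted(entries)          # sort by timestamp before computing
    ok_seconds = 0
    state, since = "OK", period_start  # assume OK before the first entry
    for ts, new_state in ordered:
        if state == "OK":
            ok_seconds += ts - since
        state, since = new_state, ts
    if state == "OK":
        ok_seconds += period_end - since
    return ok_seconds / float(period_end - period_start)

# Entries logged out of order (late passive delivery) yield the same
# result as chronologically logged entries:
out_of_order = [(300, "OK"), (100, "CRITICAL")]
in_order     = [(100, "CRITICAL"), (300, "OK")]
print(availability(out_of_order, 0, 400) == availability(in_order, 0, 400))  # → True
```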
> I'll keep this in mind, but I'm not
> sure what to do about this.
Well, thanks very much for considering the issue.
> It may make it into 2.0, but probably
> not the 1.x tree.
Understood.
Chris.
This post was automatically imported from historical nagios-devel mailing list archives
Original poster: [email protected]