Strange behaviour in alerts

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
IT_LAS
Posts: 1
Joined: Mon Dec 30, 2019 5:34 am

Strange behaviour in alerts

Post by IT_LAS »

Hello,
We have detected some strange behaviour with the alerts we have configured in our NLS installation.
We currently have two alerts configured: one for kubernetes cluster logs and another for kafka cluster logs.
The queries used to retrieve the log entries for each alert are completely different, and yet the alert for the kafka cluster has entries from the kubernetes logs mixed in.
I attach examples in screenshots.
We believe this behaviour is incorrect: the result of the query launched from the dashboard and the result of the same query configured in the alert should be identical.
Could it be a bug in the application?
We are using the latest version 2.1.15.
We had not noticed this problem until we added a second alert to the system.
We need help to understand what is going on.
If we need more information to debug the problem, please let us know.
Thanks
danderson
Posts: 98
Joined: Wed Aug 09, 2023 10:05 am

Re: Strange behaviour in alerts

Post by danderson »

Thanks for reaching out @IT_LAS,

I'm not fully understanding the screenshots you sent, as I don't see any kubernetes logs in the emails. If you're referring to the fact that there are no alerts shown in the dashboard, try removing the time filter that only shows logs that are at most 40 minutes old. Otherwise, you are right that this behavior is unexpected and it could be a bug.
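One thing worth checking while you debug: Nagios Log Server alert queries run against Elasticsearch, and in an Elasticsearch query string, bare terms are combined with OR by default, so a loosely written query can match more log sources than intended. A hedged sketch (the field name `program` and the terms are illustrative assumptions, not taken from your screenshots):

```
# Bare terms are OR'ed by default, so this can also match kubernetes
# logs that merely contain "error" or "timeout":
program:kafka error timeout

# Quoting the value and using explicit AND keeps the match restricted
# to kafka entries:
program:"kafka" AND (error OR timeout)
```

Since the alert stores its own copy of the query, comparing the exact query string saved in the alert against the one typed into the dashboard is a quick way to rule this out.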