Hello,
I have an issue with reading disk used %. The C and T drives report correctly, but D and L are kicking back errors. I believe the issue is that C and T are logical drives while D and L are mounted LUNs; D and L show as PhysicalDisks in the API, whereas C and T show as disks. I have experimented and gotten some data to show for D and L, but I need % used to show in the Nagios GUI for alerts and tracking. Is there a way to accomplish this? I have attached a few images to help illustrate my situation.
Thanks.
Revise Query
Re: Revise Query
Assuming this is a Windows machine, can you post a screenshot of 'Disk Management' for us to look at? Also, run the following in PowerShell and send over the output:
Code: Select all
gdr -PSProvider 'FileSystem'
Former Nagios Employee
Re: Revise Query
Please see the attached.
Thanks.
Re: Revise Query
Please post the entire output of this command:
Code: Select all
./check_ncpa.py -H X.X.X.X -t apipass -M 'disk/logical/' -l
Re: Revise Query
Below is the response to that command.
[root@ libexec]# ./check_ncpa.py -H -t -M 'disk/logical/' -l
logical/
T:|/
total_size: [21471625216, u'b']
used_percent: [0.80000000000000004, u'%']
used: [168099840, u'b']
free: [21303525376, u'b']
device_name: [[u'T:\\'], u'name']
C:|/
total_size: [107005079552, u'b']
used_percent: [23.0, u'%']
used: [24646361088, u'b']
free: [82358718464, u'b']
device_name: [[u'C:\\'], u'name']
Re: Revise Query
Ok, I think I've figured it out. This is a node in an MS SQL cluster, right? I believe you are looking at the inactive node. Think of those reserved disks as mount points: they exist, but nothing is mounted to them (the disks are mounted to the active cluster node).
You have a couple options:
1. Point the check at the cluster VIP DNS/IP
2. Use the check_cluster plugin (or BPI if you have an Enterprise license). You would need to set up the checks on all of the nodes; they would fail on the non-active nodes, but the check_cluster or BPI wizard lets you say "only go critical if the service checks on all nodes have a problem".
Let me know if you have any questions.
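As a rough illustration of option 2, the aggregation that check_cluster or BPI applies can be sketched like this. This is not the plugin itself, just a minimal sketch of the "only go critical if all nodes fail" logic; the state values follow Nagios conventions, but the function and node states are made up for illustration:

```python
# Nagios plugin return codes.
STATE_OK, STATE_WARNING, STATE_CRITICAL = 0, 1, 2

def cluster_state(node_states, crit_threshold):
    """Hypothetical aggregation: go CRITICAL only when at least
    crit_threshold node checks are failing; otherwise WARN if any fail."""
    failed = sum(1 for s in node_states if s != STATE_OK)
    if failed >= crit_threshold:
        return STATE_CRITICAL
    if failed > 0:
        return STATE_WARNING
    return STATE_OK

# Two-node SQL cluster: the passive node's disk check fails by design,
# so require both nodes to fail before alerting critical.
states = [STATE_OK, STATE_CRITICAL]
print(cluster_state(states, crit_threshold=2))  # prints 1 (WARNING, not CRITICAL)
```

With a two-node cluster you would set the critical threshold to 2, so the expected failure on the passive node alone never pages anyone.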
Re: Revise Query
How can I point the service at the VIP of the cluster when I can't install the NCPA agent there, since the cluster name doesn't exist as a physical machine?
Thanks.
Re: Revise Query
NCPA would be installed on each of the nodes; when the XI server reaches out to the VIP, it connects to the active node's NCPA agent.
Re: Revise Query
Would the command look like the below?
./check_ncpa.py -H CLUSTERIP -t NODEPASSCODE -M 'disk/logical/' -l
Re: Revise Query
Correct, though that command only lists disk information; I was using it for debugging.
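For the actual alerting service, you would point the check at a single metric and add warning/critical thresholds. Something like the following should work once the D and L drives appear in the listing; note that the exact metric path ('D:|/used_percent') and the 80/90 thresholds here are assumptions for illustration, so adjust them to match what the -l listing actually reports:
Code: Select all
./check_ncpa.py -H CLUSTERIP -t NODEPASSCODE -M 'disk/logical/D:|/used_percent' -w 80 -c 90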