check_disk returns 0% available on a file system that is not full.
Posted: Wed Jul 30, 2014 10:34 am
I searched the forums and Google and couldn't find a similar problem. I have a RHEL5 system on which I installed the Nagios Plugins from EPEL (version 1.4.15). On one of my file systems, which is not full, check_disk reports it as 100% full:
# df -h /data
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/rsig-rsig
                       120T  116T  4.7T  97% /data
# /usr/lib64/nagios/plugins/check_disk -V
check_disk v1.4.15 (nagios-plugins 1.4.15)
# /usr/lib64/nagios/plugins/check_disk -w 2 -c 1 -p /data
DISK CRITICAL - free space: /data 0 MB (0% inode=99%);| /data=121070864MB;125439998;125439999;0;125440000
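For clarity on the thresholds: without a units option, -w 2 -c 1 means warn below 2 MB free and go critical below 1 MB free (check_disk defaults to MB), so the plugin genuinely believes /data has no free space at all. The kernel can be asked for the raw statfs fields directly with GNU coreutils stat in file-system mode (%S/%b/%f/%a are the fundamental block size and the total/free/available block counts), which gives a plugin-independent baseline:
# stat -f -c 'bsize=%S blocks=%b bfree=%f bavail=%a' /data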
I went ahead and downloaded the latest nagios-plugins tarball from the web site and compiled it from source, and it shows the same problem:
# /root/nagios-plugins-2.0.3/plugins/check_disk -V
check_disk v2.0.3 (nagios-plugins 2.0.3)
# /root/nagios-plugins-2.0.3/plugins/check_disk -w 2 -c 1 -p /data
DISK CRITICAL - free space: /data 0 MB (0% inode=99%);| /data=121070864MB;125439998;125439999;0;125440000
Both check_disk versions work fine on other file systems:
# df -h /work
Filesystem             Size  Used Avail Use% Mounted on
/dev/workfs            626T  535T   92T  86% /work
# /usr/lib64/nagios/plugins/check_disk -w 2 -c 1 -p /work
DISK OK - free space: /work 95478888 MB (14% inode=71%);| /work=560603184MB;656082070;656082071;0;656082072
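Interestingly, the perfdata adds up on both file systems; only the human-readable summary collapses to 0 MB on /data. On /work, 560603184 MB used + 95478888 MB free = 656082072 MB, which matches the perfdata maximum. On /data, the perfdata itself still implies 125440000 - 121070864 = 4369136 MB (roughly 4 TB) of free space, yet the status line reports "0 MB".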
The file system I'm having trouble with (/data) is formatted as XFS. The underlying device is an LVM2 logical volume built from multiple multipath LUNs:
# pvscan
PV /dev/mpath/360a9800064665861483465564f444c79 VG rsig lvm2 [10.81 TB / 669.61 GB free]
PV /dev/mpath/360a98000646e6c51566f655745313958 VG rsig lvm2 [10.16 TB / 876.00 MB free]
PV /dev/mpath/360a98000646658614834655743595739 VG rsig lvm2 [10.81 TB / 669.61 GB free]
PV /dev/mpath/360a98000646e6c51566f655747363374 VG rsig lvm2 [10.81 TB / 669.61 GB free]
PV /dev/mpath/360a980006466586148346557452d6a77 VG rsig lvm2 [10.16 TB / 620.00 MB free]
PV /dev/mpath/360a98000646e6c51566f655747356939 VG rsig lvm2 [10.81 TB / 669.61 GB free]
PV /dev/mpath/360a980006466586148346557452d5273 VG rsig lvm2 [10.16 TB / 620.00 MB free]
PV /dev/mpath/360a98000646e6c51566f655745326f7a VG rsig lvm2 [10.16 TB / 876.00 MB free]
PV /dev/mpath/360a980006466586148347047354e6339 VG rsig lvm2 [15.91 TB / 5.64 GB free]
PV /dev/mpath/360a98000646658614834704739354966 VG rsig lvm2 [15.91 TB / 536.00 MB free]
PV /dev/mpath/360a98000646658614834704739374735 VG rsig lvm2 [7.04 TB / 6.77 GB free]
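For a more compact view of the same volume group, pvs prints one line per PV (this assumes a standard LVM2 userland; pv_name, vg_name, pv_size, and pv_free are stock output fields):
# pvs --units t -o pv_name,vg_name,pv_size,pv_free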
# lvdisplay /dev/mapper/rsig-rsig
  --- Logical volume ---
  LV Name                /dev/rsig/rsig
  VG Name                rsig
  LV UUID                6I9rxz-8Qcb-cmvJ-x9Py-DvN8-fwAp-yjEnfc
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                120.09 TB
  Current LE             31480364
  Segments               4
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     2048
  Block device           253:15
Are there any known issues with check_disk on XFS, or with LVM2 volumes built from multiple physical volumes?
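In the meantime, two checks I plan to run to narrow it down. I believe check_disk accepts -u/--units to change the reporting unit, and the strace filter assumes the plugin reaches the kernel via statfs(2), which is how glibc implements statvfs on Linux. First, see whether the summary changes when reported in larger units:
# /root/nagios-plugins-2.0.3/plugins/check_disk -w 2 -c 1 -u GB -p /data
Second, watch the statfs() result the plugin actually receives; if f_bavail is sane there but the plugin still prints 0 MB free, the arithmetic inside the plugin is the suspect rather than XFS or the LVM stack:
# strace -e trace=statfs /root/nagios-plugins-2.0.3/plugins/check_disk -w 2 -c 1 -p /data
Thanks.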