Solaris I/O monitoring - check_disk_detail.sh

This support forum board is for support questions relating to Nagios XI, our flagship commercial network monitoring solution.
Locked
rajasegar
Posts: 1018
Joined: Sun Mar 30, 2014 10:49 pm

Solaris I/O monitoring - check_disk_detail.sh

Post by rajasegar »

I am trying to monitor I/O details on a Solaris server, but the plugin I got from Nagios Exchange, check_disk_detail.sh, is not working.

Code:

-bash-3.2$ uname -a
SunOS RTBSITapp02 5.10 Generic_150400-26 sun4u sparc SUNW,SPARC-Enterprise
Link to the plugin:
https://exchange.nagios.org/directory/P ... il/details

Code:

Parameters are:
check_disk_detail.sh (warn %) (critical %) (directory) (busy sample time, default is 10 sec)

Examples:
check_disk_detail.sh 80 90 /opt
check_disk_detail.sh 85 95 /
check_disk_detail.sh 80 90 /nfs_mounted_folder
check_disk_detail.sh 80 90 /db_files 15

Code:

-bash-3.2$ ./check_disk_detail.sh 80 90 /aprisma
syntax error on line 1, teletype
syntax error on line 1, teletype
syntax error on line 1, teletype
syntax error on line 1, teletype
Warning-  Disk Space. /aprisma - total: 39G - used: 34G (88%) - free 4.9G (12%) - Statistics: AvgSvcTime:  WaitTime:  Wait: % Busy: % r/s:  w/s:  kr:  kw:

-bash-3.2$ df -h
Filesystem             size   used  avail capacity  Mounted on
/                       20G    11G   8.6G    56%    /
/aprisma                39G    34G   4.9G    88%    /aprisma
/cdrom                  30G    18G    11G    63%    /cdrom
/dev                    20G    11G   8.6G    56%    /dev
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                    20G    20G   379M    99%    /etc/svc/volatile
fd                       0K     0K     0K     0%    /dev/fd
swap                   1.0G   645M   379M    63%    /tmp
swap                    20G    20G   379M    99%    /var/run

-bash-3.2$



Here is the same run with debugging enabled:

Code:

-bash-3.2$ bash -x ./check_disk_detail.sh 80 90 /aprisma
+ '[' 80 == -h ']'
+ '[' -n 80 ']'
+ WARNING_SIZE=80
+ '[' -n 90 ']'
+ CRITICAL_SIZE=90
+ '[' -n /aprisma ']'
+ DIR_NAME=/aprisma
+ '[' -n '' ']'
+ SAMPLE_TIME=10
++ df -h /aprisma
++ sed -e s/s4//g
++ sed -e s/s6//g
++ sed -e s/s5//g
++ sed -e s/ptcset//g
++ sed -e 's/\/dev//g'
++ sed -e 's/\/md//g'
++ sed -e 's/\/dsk\///g'
++ sed -e s/s0//g
++ tail -1
++ sed -e 's/\/dbset//g'
++ awk '{print $1}'
+ DEVICE_NAME=/aprisma
++ iostat -nxI
++ grep ' /aprisma'
+ RESULT=
++ echo
++ awk '{print $1}'
+ READS=
++ echo 'scale=0 ;  * 1000 / 1'
++ bc
syntax error on line 1, teletype
+ READS=
++ echo
++ awk '{print $2}'
+ WRITES=
++ echo 'scale=0 ;  * 1000 / 1'
++ bc
syntax error on line 1, teletype
+ WRITES=
++ echo
++ awk '{print $3}'
+ BRC=
++ echo 'scale=0 ;  * 1000 / 1'
++ bc
syntax error on line 1, teletype
+ BRC=
++ echo
++ awk '{print $4}'
+ BWC=
++ echo 'scale=0 ;  * 1000 / 1'
++ bc
syntax error on line 1, teletype
+ BWC=
++ iostat -nxM 10 2
++ tail -1
++ grep ' /aprisma'
+ RESULT=
++ echo
++ awk '{print $7}'
+ WSVC_TR=
++ echo
++ awk '{print $8}'
+ ASVC_TR=
++ echo
++ awk '{print $9}'
+ W_PC=
++ echo
++ awk '{print $10}'
+ B_PC=
++ df -hb /aprisma
++ tail -1
+ RESULT='/aprisma                39G    34G   4.9G    88%    /aprisma'
++ echo /aprisma 39G 34G 4.9G 88% /aprisma
++ awk '{print substr($5,1,length($5)-1)}'
+ USEDSPACEPER=88
+ FREESPACEPER=12
++ echo /aprisma 39G 34G 4.9G 88% /aprisma
++ awk '{print $2}'
+ TOTALSPACE=39G
++ echo /aprisma 39G 34G 4.9G 88% /aprisma
++ awk '{print $3}'
+ USEDSPACE=34G
++ echo /aprisma 39G 34G 4.9G 88% /aprisma
++ awk '{print $4}'
+ FREESPACE=4.9G
++ awk '{print $1}'
++ echo /aprisma 39G 34G 4.9G 88% /aprisma
+ MOUNT=/aprisma
+ RESULT='/aprisma - total: 39G - used: 34G (88%) - free 4.9G (12%) - Statistics: AvgSvcTime:  WaitTime:  Wait: % Busy: % r/s:  w/s:  kr:  kw: '
+ '[' -z 39G ']'
+ '[' 88 -ge 90 ']'
+ '[' 88 -ge 80 ']'
++ echo ' Disk Space'
+ WARNING=' Disk Space'
+ '[' -n '' ']'
+ '[' -n ' Disk Space' ']'
+ echo 'Warning-  Disk Space. /aprisma - total: 39G - used: 34G (88%) - free 4.9G (12%) - Statistics: AvgSvcTime:  WaitTime:  Wait: % Busy: % r/s:  w/s:  kr:  kw: '
Warning-  Disk Space. /aprisma - total: 39G - used: 34G (88%) - free 4.9G (12%) - Statistics: AvgSvcTime:  WaitTime:  Wait: % Busy: % r/s:  w/s:  kr:  kw:
+ exit 1
-bash-3.2$
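A hedged reading of the trace above: the repeated "syntax error on line 1, teletype" messages come from bc, which is handed "scale=0 ;  * 1000 / 1" with a missing operand, because RESULT (and therefore READS, WRITES, kr and kw) is empty when the grep matches nothing. A minimal sketch of a guard one could patch in (num_or_zero is a hypothetical helper, not part of the original plugin):

```shell
#!/bin/sh
# Hypothetical guard, not part of the original plugin: default empty
# fields to 0 before they reach bc, so an unmatched grep no longer
# produces the malformed expression "scale=0 ;  * 1000 / 1".
num_or_zero() {
    # print $1, or 0 when it is empty
    if [ -n "$1" ]; then echo "$1"; else echo 0; fi
}

RESULT=""                                   # what grep returned in the trace
READS=$(echo "$RESULT" | awk '{print $1}')  # empty, as in the trace
READS=$(num_or_zero "$READS")               # "0" instead of ""
echo "scale=0 ; $READS * 1000 / 1"          # now a valid bc expression
```

This only silences the bc errors; the underlying statistics would still be zero until the iostat lookup itself is fixed.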

The problem seems to be the "iostat -nxI | grep ' /aprisma'" step, which obviously won't work, because iostat -nxI does not return any mount point information.

Code:

-bash-3.2$ iostat -nxI
                    extended device statistics
    r/i    w/i   kr/i   kw/i wait actv wsvc_t asvc_t  %w  %b device
 17814.0 678095.0 245195.0 17009419.5  0.0  0.0    0.0   14.8   0   0 md0
 13953545.0 14496160.0 111628360.0 115971056.0  0.0  1.7    0.0  142.9   0   0 md1
   16.0    4.0  203.5    5.5  0.0  0.0    0.0    6.9   0   0 md3
 193012.0 3911772.0 1930300.0 14488582.5  0.0  0.1    0.0   63.1   0   1 md4
 16798.0 12896.0 204092.0 87735.0  0.0  0.0    0.0   93.4   0   0 md5
 98818.0 1000629.0 2723531.5 23609957.0  0.0  0.0    0.0   28.3   0   0 md6
 972187.0 1677235.0 38431081.5 79861393.0  0.0  0.1    0.0   50.0   0   0 md7
 83978.0 851834.0 3473480.0 28873511.0  0.0  0.0    0.0   25.0   0   0 md8
 99875.0 1025409.0 3143538.0 29764472.0  0.0  0.0    0.0   30.9   0   0 md9
 17814.0 678095.0 245195.0 17009419.5  0.0  0.0    0.0   14.2   0   0 md10
 13953545.0 14496160.0 111628360.0 115971056.0  0.0  1.7    0.0  142.9   0   0 md11
   16.0    4.0  203.5    5.5  0.0  0.0    0.0    6.9   0   0 md13
 193012.0 3911772.0 1930300.0 14488582.5  0.0  0.1    0.0   63.1   0   1 md14
 16798.0 12896.0 204092.0 87735.0  0.0  0.0    0.0   93.4   0   0 md15
 98818.0 1000629.0 2723531.5 23609957.0  0.0  0.0    0.0   28.2   0   0 md16
 972187.0 1677235.0 38431081.5 79861393.0  0.0  0.1    0.0   49.9   0   0 md17
 83978.0 851834.0 3473480.0 28873511.0  0.0  0.0    0.0   24.6   0   0 md18
 99875.0 1025409.0 3143538.0 29764472.0  0.0  0.0    0.0   30.8   0   0 md19
 16286.0 424745.0 167306.5 486328.5  0.0  0.0    0.0   70.6   0   0 md41
 22699.0 441371.0 230982.5 569619.0  0.0  0.0    0.0   79.0   0   0 md42
 56460.0 612900.0 582940.5 9558121.0  0.0  0.0    0.0   60.4   0   0 md43
 17321.0 489948.0 175950.5 763137.5  0.0  0.0    0.0   55.7   0   0 md44
 22536.0 478628.0 213604.5 728580.0  0.0  0.0    0.0   54.4   0   0 md45
 20852.0 438459.0 211000.5 536779.5  0.0  0.0    0.0   75.4   0   0 md46
 22832.0 493156.0 202315.5 1017214.5  0.0  0.0    0.0   61.5   0   0 md47
 14010.0 524676.0 146191.5 786745.5  0.0  0.0    0.0   54.0   0   0 md48
 632622.0 3503276.0 13051218.5 66636691.5  0.0  0.2    0.0  132.0   0   1 md100
 632622.0 3503276.0 13051218.5 66636691.5  0.0  0.2    0.0  131.9   0   1 md101
   14.0    4.0  194.5    5.5  0.0  0.0    0.0    6.4   0   0 md200
   14.0    4.0  194.5    5.5  0.0  0.0    0.0    6.4   0   0 md201
  256.0   88.0  141.9  147.0  0.0  0.0    0.0    0.9   0   0 sd0
   12.0   14.0    0.6    0.0  0.0  0.0    0.0    0.0   0   0 sd1
 2101687.0 8732936.0 61027206.7 228833765.0  0.1  0.1   32.1   21.9   0   1 sd2
 14165678.0 19127005.0 113806225.0 147469210.5  0.2  0.1   12.6    7.2   0   1 sd3
   26.0    5.0    2.3    0.0  0.0  0.0    0.0    0.0   0   0 sd4
 24198.0    5.0 3322.3    0.0  0.0  0.0    0.0    0.0   0   0 ssd10
 24201.0    5.0 3322.7    0.0  0.0  0.0    0.0    0.0   0   0 ssd11
 234691.0 93873.0 16783.1    0.0  0.0  0.0    0.0    0.0   0   0 ssd12
 235211.0 94081.0 16820.3    0.0  0.0  0.0    0.0    0.0   0   0 ssd13
 234311.0 93721.0 16756.0    0.0  0.0  0.0    0.0    0.0   0   0 ssd14
 14696313.0 21608466.0 296856906.0 134613101.0  0.0  0.0    0.1    0.8   0   1 ssd15
 233806.0 93519.0 16719.9    0.0  0.0  0.0    0.0    0.0   0   0 ssd16
 159814172.0 5208736.0 35427600251.5 65056597.5  0.0  0.3    0.5    4.2   0   9 ssd17
 18255965.0 16244182.0 5822902202.0 136216933.0  0.0  0.1    0.1    6.2   0   2 ssd18
 126526.0 12115934.0 1365664.0 107588986.0  0.0  0.0    0.0    1.4   0   1 ssd19
 769320.0 307725.0 55000.8    0.0  0.0  0.0    0.0    0.0   0   0 ssd20
 25251652.0 10691224.0 4191068844.0 107340676.5  0.0  0.0    0.0    1.7   0   2 ssd21
-bash-3.2$

Does anybody have a plugin that does work in a Solaris environment?

Thanks
5 x Nagios 5.6.9 Enterprise Edition
RHEL 6 & 7
rrdcached & ramdisk optimisation
Box293
Too Basu
Posts: 5126
Joined: Sun Feb 07, 2010 10:55 pm
Location: Deniliquin, Australia
Contact:

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by Box293 »

What is /aprisma? Is it a local disk or something like a mounted NFS path?
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
rajasegar
Posts: 1018
Joined: Sun Mar 30, 2014 10:49 pm

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by rajasegar »

Box293 wrote: What is /aprisma? Is it a local disk or something like a mounted NFS path?
It is a local disk. I have included the df -h output in the earlier post.
Box293
Too Basu
Posts: 5126
Joined: Sun Feb 07, 2010 10:55 pm
Location: Deniliquin, Australia
Contact:

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by Box293 »

Run this:

Code:

mount -p
This will tell you which device backs /aprisma.

Then use that in the command, something like:

Code:

./check_disk_detail.sh 80 90 /dev/dsk/c0d0s0
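The idea above can be sketched roughly as follows; this is an assumption about how one might adapt the lookup, not the plugin's documented behaviour, and the default mount point is only illustrative:

```shell
#!/bin/sh
# Hedged sketch: resolve the device backing a mount point from df,
# then match iostat output on the device name rather than on the
# mount point. Pass the mount point as $1; defaults to / here.
MOUNT="${1:-/}"        # on the Solaris box this would be /aprisma
DEVICE=$(df -k "$MOUNT" | tail -1 | awk '{print $1}')
echo "device backing $MOUNT: $DEVICE"
# On Solaris one could then try (not run here):
#   iostat -nx 10 2 | grep "${DEVICE##*/}"
```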
rajasegar
Posts: 1018
Joined: Sun Mar 30, 2014 10:49 pm

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by rajasegar »

I don't see any devices.

Code:

-bash-3.2$ /usr/sbin/mount -p
/ - / ufs - no rw,intr,largefiles,logging,xattr,onerror=panic
/aprisma - /aprisma ufs - no rw,intr,largefiles,logging,xattr,onerror=panic
/dev - /dev lofs - no zonedevfs
proc - /proc proc - no nodevices,zone=rtbappmy
ctfs - /system/contract ctfs - no nodevices,zone=rtbappmy
mnttab - /etc/mnttab mntfs - no nodevices,zone=rtbappmy
objfs - /system/object objfs - no nodevices,zone=rtbappmy
swap - /etc/svc/volatile tmpfs - no nodevices,xattr,zone=rtbappmy
fd - /dev/fd fd - no rw,nodevices,zone=rtbappmy
swap - /tmp tmpfs - no nodevices,xattr,zone=rtbappmy
swap - /var/run tmpfs - no nodevices,xattr,zone=rtbappmy
qfsmy1-2 - /aprisma/download samfs - no rw,suid,intr,largefiles,onerror=panic,nologging,noxattr


Box293
Too Basu
Posts: 5126
Joined: Sun Feb 07, 2010 10:55 pm
Location: Deniliquin, Australia
Contact:

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by Box293 »

I don't really know, to be honest; that's as much help as I can give. You could try emailing the plugin's developer.
rajasegar
Posts: 1018
Joined: Sun Mar 30, 2014 10:49 pm

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by rajasegar »

Box293 wrote: I don't really know, to be honest; that's as much help as I can give. You could try emailing the plugin's developer.
It's OK, I will just write a simple script; it's too much of a headache to fix this code.
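For what it's worth, a simple script of the kind mentioned could look like the sketch below. It checks only disk space (not I/O statistics), parses the df -k capacity column, and follows the usual Nagios exit-code convention; check_disk is a hypothetical function name, not part of any existing plugin:

```shell
#!/bin/sh
# Hypothetical minimal disk-space check (no I/O statistics):
# compare the df -k capacity column against warn/crit percentages
# and return the usual Nagios codes (0=OK, 1=WARNING, 2=CRITICAL).
check_disk() {
    warn=$1; crit=$2; mount=$3
    # column 5 of df -k is "capacity" on Solaris ("Use%" on Linux)
    used=$(df -k "$mount" | tail -1 | awk '{gsub(/%/, "", $5); print $5}')
    if [ "$used" -ge "$crit" ]; then
        echo "CRITICAL - $mount at ${used}% used"; return 2
    elif [ "$used" -ge "$warn" ]; then
        echo "WARNING - $mount at ${used}% used"; return 1
    fi
    echo "OK - $mount at ${used}% used"; return 0
}

# usage: check_disk <warn%> <crit%> <mount>; capture the code for Nagios
RC=0
check_disk "${1:-80}" "${2:-90}" "${3:-/}" || RC=$?
```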

Thanks
tmcdonald
Posts: 9117
Joined: Mon Sep 23, 2013 8:40 am

Re: Solaris I/O monitoring - check_disk_detail.sh

Post by tmcdonald »

I'll be closing this thread now, but feel free to open another if you need anything in the future!
Former Nagios employee
Locked