Nagios server memory use
My primary Nagios servers have been up since the 14th; one has passed the 75% memory usage threshold and another is right behind it.
The two environments are ~150 hosts and ~1100 services running on 4-core, 32G RAM VMs. The OS is RHEL 6.6 and Nagios is 2014R2.6, installed manually on minimal systems.
RAM usage (per custom_check_mem/free/top/htop) increases steadily until the systems are restarted, but I don't see anything using much memory. Then again, actually determining what's using memory doesn't seem to be so easy. For comparison, my failover servers, which have checks and notifications disabled, are using 2G of RAM.
I restarted nagios, nagiosxi, ndo2db, mysql, postgresql, gearmand, mod_gearman_worker, pe-puppet, snmptrapd, snmptt, and xinetd, as well as restarting things from the admin tab. No change in memory usage.
Any suggestions on how to track down what's using the memory? At 32G, the RAM is at least double the recommendation...
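For reference, this is roughly how I've been eyeballing the biggest consumers, and nothing it shows comes close to accounting for the totals:
Code: Select all
# Largest processes by resident set size
ps aux --sort=-rss | head -15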
Re: Nagios server memory use
Are you using a ton of script checks? I know scripts can use a lot of resources compared to compiled binaries. What does the output of a top command look like?
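Batch mode makes top easy to capture for pasting, e.g.:
Code: Select all
# One non-interactive snapshot, trimmed to the first 40 lines
top -b -n 1 | head -40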
Former Nagios Employee.
Re: Nagios server memory use
There may be a lot of script checks. I never considered that a script or binary running on another server via NRPE could cause memory usage on the Nagios server itself. Can you explain that?
Code: Select all
# top -n 1
top - 19:52:14 up 9 days, 3:13, 2 users, load average: 0.78, 0.81, 1.57
Tasks: 244 total, 3 running, 241 sleeping, 0 stopped, 0 zombie
Cpu(s): 6.8%us, 1.2%sy, 0.0%ni, 91.8%id, 0.1%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 32880880k total, 29718272k used, 3162608k free, 330392k buffers
Swap: 16777212k total, 0k used, 16777212k free, 5624888k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24713 apache 20 0 447m 37m 5360 R 5.9 0.1 1:07.16 httpd
24715 apache 20 0 447m 37m 5348 R 5.9 0.1 1:04.17 httpd
21458 root 20 0 15168 1252 840 R 2.0 0.0 0:00.01 top
1 root 20 0 19364 1540 1232 S 0.0 0.0 0:00.95 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 2:35.31 migration/0
4 root 20 0 0 0 0 S 0.0 0.0 3:13.83 ksoftirqd/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/0
6 root RT 0 0 0 0 S 0.0 0.0 0:09.32 watchdog/0
7 root RT 0 0 0 0 S 0.0 0.0 0:18.85 migration/1
8 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/1
9 root 20 0 0 0 0 S 0.0 0.0 0:06.50 ksoftirqd/1
10 root RT 0 0 0 0 S 0.0 0.0 0:00.62 watchdog/1
11 root RT 0 0 0 0 S 0.0 0.0 0:38.01 migration/2
12 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/2
13 root 20 0 0 0 0 S 0.0 0.0 0:30.12 ksoftirqd/2
14 root RT 0 0 0 0 S 0.0 0.0 0:03.58 watchdog/2
15 root RT 0 0 0 0 S 0.0 0.0 1:07.40 migration/3
16 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/3
17 root 20 0 0 0 0 S 0.0 0.0 1:19.91 ksoftirqd/3
18 root RT 0 0 0 0 S 0.0 0.0 0:06.37 watchdog/3
19 root 20 0 0 0 0 S 0.0 0.0 2:16.19 events/0
20 root 20 0 0 0 0 S 0.0 0.0 0:30.72 events/1
21 root 20 0 0 0 0 S 0.0 0.0 0:46.48 events/2
22 root 20 0 0 0 0 S 0.0 0.0 1:28.56 events/3
23 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cgroup
24 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper
25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 netns
26 root 20 0 0 0 0 S 0.0 0.0 0:00.00 async/mgr
27 root 20 0 0 0 0 S 0.0 0.0 0:00.00 pm
28 root 20 0 0 0 0 S 0.0 0.0 0:02.46 sync_supers
29 root 20 0 0 0 0 S 0.0 0.0 0:04.61 bdi-default
30 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kintegrityd/0
31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kintegrityd/1
32 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kintegrityd/2
33 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kintegrityd/3
34 root 20 0 0 0 0 S 0.0 0.0 7:27.97 kblockd/0
35 root 20 0 0 0 0 S 0.0 0.0 1:28.16 kblockd/1
36 root 20 0 0 0 0 S 0.0 0.0 4:27.97 kblockd/2
37 root 20 0 0 0 0 S 0.0 0.0 6:30.97 kblockd/3
38 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpid
39 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_notify
40 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kacpi_hotplug
41 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_aux
42 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_sff/0
43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_sff/1
44 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_sff/2
45 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ata_sff/3
46 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksuspend_usbd
47 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khubd
48 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kseriod
Re: Nagios server memory use
It's not the relationship between NRPE and Nagios that takes resources, but rather the actual plugins that fuel the checks. The compiled Nagios plugins that come with XI use far fewer resources than other plugins.
See this slideshow for a bit more information about how the plugins use resources (starts on page #27) -
https://assets.nagios.com/presentations ... ins.pdf#27
Since check_nrpe is compiled, I don't foresee that as the issue. Are you running any custom plugins?
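As a rough illustration of the overhead difference (the plugin names below are just examples; substitute ones you actually run), you can time repeated runs of a compiled plugin against a shell-script plugin:
Code: Select all
# 100 runs of a compiled plugin vs. 100 runs of an interpreted one
time (for i in $(seq 100); do /usr/local/nagios/libexec/check_dummy 0 >/dev/null; done)
time (for i in $(seq 100); do /usr/local/nagios/libexec/check_oracle --help >/dev/null 2>&1; done)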
Former Nagios Employee
Re: Nagios server memory use
Most checks run via check_nrpe/check_nt, but there are various ones that get info via snmpget in scripts, a few SSH checks, and some ESXi checks that are known for using resources.
That said, this has been an issue for a long time, and the active SNMP and SSH checks only started a few months ago.
I looked at the slideshow. I think the next step is to stop active checks via the admin page or start them on a failover server and see what happens...
Re: Nagios server memory use
What's the output of a ps_mem command? If it's not installed, you can install it with yum.
Can we also take a look at sar -r and free -m?
Shift + M in top will also sort it by memory, which could be useful.
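If you want memory history from earlier days as well, sar can read the archived files (the day number below is just an example):
Code: Select all
# saNN corresponds to the day of the month
sar -r -f /var/log/sa/sa23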
Thanks.
Former Nagios Employee.
Re: Nagios server memory use
ps_mem can't be installed.
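As a rough stand-in (unlike ps_mem, this double-counts shared pages), something like the following should total resident memory per command:
Code: Select all
# Approximate per-command memory totals, largest first
ps -eo rss=,comm= | awk '{a[$2]+=$1} END {for (c in a) printf "%10d KiB  %s\n", a[c], c}' | sort -rn | head -20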
Time is UTC.
The output of sar shows a steady increase in memory use.
Top output is after shift-m.
Code: Select all
# sar -r
Linux 2.6.32-504.8.1.el6.x86_64 (txslm2mlnag001) 12/23/2015 _x86_64_ (4 CPU)
12:00:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
12:10:01 AM 3919888 28960992 88.08 330304 5618532 1860876 3.75
12:20:01 AM 3897796 28983084 88.15 330304 5619252 1871132 3.77
12:30:01 AM 3899364 28981516 88.14 330304 5620056 1856272 3.74
12:40:01 AM 3858752 29022128 88.26 330304 5621036 1882064 3.79
12:50:01 AM 3850176 29030704 88.29 330304 5621908 1875316 3.78
01:00:01 AM 3823296 29057584 88.37 330304 5622292 1899076 3.82
01:10:01 AM 3823268 29057612 88.37 330308 5623164 1878744 3.78
01:20:01 AM 3807616 29073264 88.42 330312 5623956 1878548 3.78
01:30:01 AM 3811548 29069332 88.41 330316 5624728 1860400 3.75
01:40:01 AM 3783636 29097244 88.49 330316 5625520 1870832 3.77
01:50:01 AM 3756948 29123932 88.57 330316 5626292 1887308 3.80
02:00:01 AM 3748116 29132764 88.60 330316 5627328 1887720 3.80
02:10:01 AM 3749992 29130888 88.60 330316 5628240 1863192 3.75
02:20:01 AM 3717888 29162992 88.69 330316 5629096 1883776 3.79
02:30:01 AM 3708208 29172672 88.72 330316 5627360 1883840 3.79
02:40:01 AM 3718976 29161904 88.69 330316 5616744 1865396 3.76
02:50:01 AM 3687384 29193496 88.79 330316 5617668 1886184 3.80
03:00:01 AM 3661572 29219308 88.86 330316 5618476 1903472 3.83
03:10:01 AM 3661604 29219276 88.86 330316 5619568 1891016 3.81
03:20:01 AM 3643540 29237340 88.92 330316 5620388 1893028 3.81
03:30:01 AM 3630148 29250732 88.96 330316 5621160 1900588 3.83
03:40:01 AM 3635604 29245276 88.94 330320 5621900 1868828 3.76
03:50:01 AM 3623656 29257224 88.98 330320 5622740 1881048 3.79
04:00:01 AM 3577332 29303548 89.12 330320 5623600 1910408 3.85
04:10:01 AM 3590752 29290128 89.08 330320 5624756 1876952 3.78
04:20:01 AM 3573080 29307800 89.13 330324 5625240 1881488 3.79
04:30:01 AM 3534664 29346216 89.25 330328 5626004 1914464 3.86
04:40:01 AM 3543988 29336892 89.22 330328 5626784 1882932 3.79
04:50:01 AM 3533516 29347364 89.25 330328 5625528 1879492 3.78
05:00:01 AM 3498316 29382564 89.36 330328 5625952 1907452 3.84
05:10:01 AM 3471480 29409400 89.44 330328 5621000 1924840 3.88
05:20:01 AM 3498344 29382536 89.36 330328 5622012 1881012 3.79
05:30:01 AM 3479272 29401608 89.42 330328 5622004 1890444 3.81
05:40:01 AM 3463592 29417288 89.47 330332 5622764 1886300 3.80
05:50:01 AM 3450912 29429968 89.50 330332 5623492 1889596 3.81
06:00:01 AM 3428668 29452212 89.57 330332 5623912 1899988 3.83
06:10:01 AM 3393792 29487088 89.68 330332 5624516 1925804 3.88
06:20:01 AM 3409708 29471172 89.63 330332 5625592 1886664 3.80
06:30:01 AM 3377824 29503056 89.73 330332 5614616 1928684 3.88
06:40:01 AM 3363636 29517244 89.77 330332 5615352 1923484 3.87
06:50:01 AM 3340076 29540804 89.84 330332 5616120 1935336 3.90
07:00:01 AM 3343316 29537564 89.83 330332 5616560 1919832 3.87
07:10:01 AM 3310368 29570512 89.93 330332 5617316 1937472 3.90
07:20:01 AM 3321880 29559000 89.90 330332 5618124 1905068 3.84
07:30:01 AM 3291720 29589160 89.99 330332 5619128 1932520 3.89
07:40:01 AM 3303380 29577500 89.95 330332 5619920 1892180 3.81
07:50:01 AM 3254828 29626052 90.10 330332 5620740 1937648 3.90
08:00:01 AM 3296360 29584520 89.97 330336 5616664 1876024 3.78
08:10:01 AM 3254096 29626784 90.10 330336 5617884 1903720 3.83
08:20:01 AM 3233932 29646948 90.16 330336 5616452 1920744 3.87
08:30:01 AM 3217648 29663232 90.21 330336 5617160 1931848 3.89
08:40:01 AM 3214728 29666152 90.22 330336 5618276 1909172 3.84
08:50:01 AM 3192060 29688820 90.29 330340 5619064 1919316 3.87
09:00:01 AM 3151684 29729196 90.41 330348 5619760 1955692 3.94
09:10:01 AM 3145144 29735736 90.43 330348 5620608 1945752 3.92
09:20:01 AM 3144180 29736700 90.44 330348 5621356 1928940 3.88
09:20:01 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
09:30:01 AM 3120512 29760368 90.51 330348 5622176 1945180 3.92
09:40:01 AM 3119108 29761772 90.51 330348 5623188 1933768 3.89
09:50:01 AM 3090180 29790700 90.60 330348 5623972 1951000 3.93
10:00:01 AM 3075900 29804980 90.65 330348 5624592 1952420 3.93
10:10:01 AM 3069840 29811040 90.66 330352 5625472 1939188 3.91
10:20:01 AM 3047224 29833656 90.73 330352 5626336 1949676 3.93
10:30:01 AM 3055088 29825792 90.71 330352 5626736 1925948 3.88
10:40:01 AM 3043124 29837756 90.75 330352 5628072 1925164 3.88
10:50:01 AM 3005332 29875548 90.86 330352 5628832 1952248 3.93
11:00:01 AM 3007100 29873780 90.85 330352 5629232 1938352 3.90
11:10:01 AM 2983852 29897028 90.93 330352 5630392 1942004 3.91
11:20:01 AM 2986160 29894720 90.92 330352 5628136 1930368 3.89
11:30:01 AM 2944092 29936788 91.05 330352 5628656 1968864 3.96
11:40:01 AM 2960752 29920128 91.00 330352 5629740 1925208 3.88
11:50:01 AM 2939556 29941324 91.06 330352 5630944 1941892 3.91
12:00:01 PM 2924984 29955896 91.10 330352 5631388 1942628 3.91
12:10:01 PM 2898928 29981952 91.18 330352 5632592 1950952 3.93
12:20:01 PM 2880296 30000584 91.24 330352 5633340 1961628 3.95
12:30:01 PM 2865360 30015520 91.29 330352 5634208 1962136 3.95
12:40:01 PM 2871988 30008892 91.27 330356 5635048 1936580 3.90
12:50:01 PM 2829320 30051560 91.40 330356 5636284 1973944 3.98
01:00:01 PM 2830172 30050708 91.39 330360 5636952 1960128 3.95
01:10:01 PM 2818596 30062284 91.43 330360 5637560 1952068 3.93
01:20:01 PM 2816372 30064508 91.43 330360 5621928 1956736 3.94
01:30:01 PM 2789112 30091768 91.52 330360 5622380 1978128 3.98
01:40:01 PM 2768652 30112228 91.58 330360 5623484 1981888 3.99
01:50:01 PM 2788180 30092700 91.52 330360 5618412 1957920 3.94
02:00:01 PM 2766308 30114572 91.59 330360 5619128 1966496 3.96
02:10:01 PM 2757644 30123236 91.61 330360 5619956 1957864 3.94
02:20:01 PM 2724388 30156492 91.71 330360 5620720 1980016 3.99
02:30:01 PM 2713684 30167196 91.75 330360 5621464 1977760 3.98
02:40:01 PM 2700760 30180120 91.79 330360 5618568 1978048 3.98
02:50:01 PM 2697332 30183548 91.80 330360 5619384 1965772 3.96
03:00:01 PM 2663824 30217056 91.90 330360 5620324 1999416 4.03
03:10:01 PM 2674272 30206608 91.87 330364 5621216 1961632 3.95
03:20:01 PM 2639656 30241224 91.97 330364 5617924 1998096 4.02
03:30:01 PM 2639692 30241188 91.97 330364 5618692 1981204 3.99
03:40:01 PM 2610568 30270312 92.06 330368 5619460 1993712 4.01
03:50:01 PM 2609836 30271044 92.06 330368 5619924 1978372 3.98
04:00:01 PM 2588584 30292296 92.13 330376 5621240 1994816 4.02
04:10:02 PM 3576556 29304324 89.12 330376 5616228 990420 1.99
04:20:01 PM 3494064 29386816 89.37 330376 5620432 1209644 2.44
04:30:01 PM 3586676 29294204 89.09 330376 5621588 1055204 2.12
04:40:01 PM 3592252 29288628 89.07 330376 5624848 1022100 2.06
04:50:01 PM 3528412 29352468 89.27 330376 5626236 1087868 2.19
05:00:01 PM 3515672 29365208 89.31 330384 5627460 1083404 2.18
05:10:01 PM 3473000 29407880 89.44 330384 5626272 1123268 2.26
05:20:01 PM 3481016 29399864 89.41 330388 5626676 1107736 2.23
05:30:01 PM 3435200 29445680 89.55 330388 5627432 1150776 2.32
05:40:01 PM 3406172 29474708 89.64 330388 5629060 1168068 2.35
05:50:01 PM 3364048 29516832 89.77 330388 5630368 1208516 2.43
06:00:01 PM 3360940 29519940 89.78 330388 5631476 1205756 2.43
06:10:01 PM 3351656 29529224 89.81 330388 5630968 1201640 2.42
06:20:01 PM 3376272 29504608 89.73 330388 5619392 1171832 2.36
06:30:01 PM 3317832 29563048 89.91 330388 5622092 1205592 2.43
06:40:01 PM 3266976 29613904 90.06 330388 5617212 1260296 2.54
06:40:01 PM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
06:50:01 PM 3225492 29655388 90.19 330388 5618424 1329760 2.68
07:00:01 PM 3235880 29645000 90.16 330388 5619488 1295144 2.61
07:10:01 PM 3194040 29686840 90.29 330388 5619960 1323944 2.67
07:20:01 PM 3206268 29674612 90.25 330388 5621156 1300980 2.62
07:30:01 PM 3183972 29696908 90.32 330392 5622276 1318100 2.65
07:40:01 PM 3155848 29725032 90.40 330392 5623492 1330544 2.68
07:50:01 PM 3129120 29751760 90.48 330392 5624584 1345068 2.71
08:00:01 PM 3127288 29753592 90.49 330392 5625852 1335828 2.69
08:10:01 PM 3120616 29760264 90.51 330392 5627124 1323136 2.66
08:20:01 PM 3094968 29785912 90.59 330392 5629464 1340920 2.70
08:30:01 PM 3086320 29794560 90.61 330392 5630596 1334244 2.69
08:40:01 PM 3082184 29798696 90.63 330392 5631816 1313896 2.65
08:50:01 PM 3098376 29782504 90.58 330392 5633164 1290352 2.60
09:00:01 PM 3061212 29819668 90.69 330392 5634316 1323416 2.67
09:10:01 PM 3079492 29801388 90.63 330392 5635544 1287636 2.59
09:20:01 PM 3061324 29819556 90.69 330392 5637972 1291660 2.60
09:30:01 PM 3081504 29799376 90.63 330392 5638748 1256676 2.53
09:40:01 PM 3003720 29877160 90.86 330392 5640324 1346440 2.71
09:50:01 PM 3006688 29874192 90.86 330392 5641460 1314984 2.65
10:00:01 PM 2999144 29881736 90.88 330392 5642628 1315464 2.65
Average: 3252271 29628609 90.11 330351 5624468 1736435 3.50
Code: Select all
# free -m
total used free shared buffers cached
Mem: 32110 29168 2941 15 322 5509
-/+ buffers/cache: 23336 8774
Swap: 16383 0 16383
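Since -/+ buffers/cache shows ~23G in use but nothing in the process list comes close to accounting for it, I'm also going to look at kernel-side memory (just a guess at this point):
Code: Select all
# Kernel slab usage; large or growing SUnreclaim points at the kernel rather than a process
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
slabtop -o | head -15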
Code: Select all
# top
top - 21:55:46 up 9 days, 5:17, 2 users, load average: 0.19, 0.15, 0.16
Tasks: 246 total, 1 running, 245 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.7%us, 0.6%sy, 0.0%ni, 95.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32880880k total, 29865900k used, 3014980k free, 330392k buffers
Swap: 16777212k total, 0k used, 16777212k free, 5642148k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
17122 nagios 20 0 180m 52m 2684 S 0.0 0.2 1:29.74 nagios
16827 mysql 20 0 2175m 46m 4788 S 0.3 0.1 2:30.39 mysqld
11734 root 30 10 581m 40m 3776 S 0.0 0.1 2:11.40 mcollectived
24714 apache 20 0 449m 39m 5400 S 4.0 0.1 1:38.07 httpd
24719 apache 20 0 447m 37m 5472 S 0.0 0.1 1:35.52 httpd
24718 apache 20 0 447m 37m 5392 S 4.3 0.1 1:41.57 httpd
24713 apache 20 0 447m 37m 5364 S 4.0 0.1 1:39.82 httpd
24712 apache 20 0 447m 37m 5380 S 0.0 0.1 1:37.65 httpd
24715 apache 20 0 447m 37m 5356 S 0.0 0.1 1:36.42 httpd
25662 apache 20 0 447m 37m 5336 S 0.0 0.1 1:37.09 httpd
24717 apache 20 0 447m 37m 5300 S 0.0 0.1 1:40.36 httpd
20752 apache 20 0 447m 37m 5268 S 0.0 0.1 1:29.85 httpd
12436 apache 20 0 447m 37m 5292 S 4.0 0.1 1:29.93 httpd
19558 apache 20 0 447m 37m 5324 S 0.0 0.1 1:07.07 httpd
24716 apache 20 0 446m 36m 5380 S 0.0 0.1 1:33.88 httpd
11740 root 20 0 229m 35m 2268 S 0.0 0.1 0:00.46 puppet
15572 nagios 20 0 321m 31m 7856 S 0.0 0.1 0:00.26 php
15576 nagios 20 0 313m 23m 7924 S 0.0 0.1 0:00.19 php
15579 nagios 20 0 313m 23m 7484 S 0.0 0.1 0:00.17 php
15578 nagios 20 0 313m 23m 7456 S 0.0 0.1 0:00.18 php
15571 nagios 20 0 313m 22m 7440 S 0.0 0.1 0:00.17 php
31363 snmptt 20 0 167m 15m 1872 S 0.0 0.0 0:00.93 snmptt
Re: Nagios server memory use
I've not seen this behavior reported recently by others, but I did notice you are running Puppet. Puppet has a long history of memory-leak reports, but they usually turn out to be something else:
http://www.masterzen.fr/2009/01/19/pupp ... ks-or-not/
What ruby version do you have on the system? Not trying to point the finger of blame away from Nagios, but this is not something we see often at all on stock systems.
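If you want to confirm whether one of the ruby daemons is actually leaking, sampling its resident size over time will show a steady climb (mcollectived below is purely an example):
Code: Select all
# Log a daemon's RSS every 5 minutes; run in screen/tmux and compare over a day
while true; do
    date
    ps -o pid,rss,vsz,etime,comm -C mcollectived
    sleep 300
done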
Former Nagios employee
Re: Nagios server memory use
Puppet is in the list of things I restarted.
Code: Select all
# rpm -qa | grep ruby
pe-rubygem-net-ssh-2.1.4-2.pe.el5.noarch
pe-ruby-shadow-2.2.0-3.pe.el5.x86_64
pe-ruby-1.9.3.484-17.pe.el5.x86_64
pe-ruby-augeas-0.5.0-4.pe.el5.x86_64
pe-ruby-rgen-0.6.5-1.pe.el5.noarch
pe-rubygem-deep-merge-1.0.0-3.pe.el5.noarch
pe-ruby-selinux-1.33.4-4.pe.el5.x86_64
pe-ruby-ldap-0.9.12-7.pe.el5.x86_64
pe-ruby-stomp-1.3.3-1.pe.el5.noarch
Re: Nagios server memory use
What about mcollectived? I believe that's part of the Puppet stack and also runs on Ruby. Did you explicitly restart it, or was it restarted as part of restarting another service?
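You could restart it directly and watch whether its footprint resets (the init script name below is an assumption based on the pe-* packages you listed; adjust if yours differs):
Code: Select all
# Service name assumed from the Puppet Enterprise packaging
service pe-mcollective restart
ps -o pid,rss,etime -C mcollectived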
Former Nagios employee