Re: newlines in JSON
Posted: Tue Aug 12, 2014 2:23 pm
So I ended up having a PM conversation with Eric and I noticed that my last reply is still sitting in the Outbox, so I don't think it ever sent. I've pasted it below, but it's going to seem out of context to everyone else reading it!
Hi Eric,
Oddly, the output of the plugin ended up in the performance data portion of the check result. It's not entirely odd, though: the way things are currently handled (pre-NRDP/pre-APIs), the performance data portion of Nagios was hijacked for data purposes. Since I'm still having to use that setup while testing the new NRDP/API setup, this kind of thing happens.
Also, if I use the API to print out only this service, there is no problem. It only happens when I ask for all services.
I also found out that if I change the line in cgi/jsonutils.c from:

Code:
{ L"\n", L"\\n" },

to:

Code:
{ L"\n", L" " },

it no longer crashes ... but we also lose newlines (obviously).

All that being said, here is the first portion of what is in the perf_data (it's crazy long and will need to be sanitized if I give you any more than this part ... but it seems to be failing in this first portion, so I hope this much is helpful):
Code:
pbspl1.nas.nasa.gov: Mon Aug 11 14:59:35 2014\\n Server reports 1391 jobs total (T:0 Q:982 H:31 W:0 R:378 E:0 B:0)\\n\\n Host CPUs Tasks Jobs Info\\n ----------- ----- ----- ---- ----------------------------\\n 40 hosts 800 0 -- ivy\\n 3 hosts 60 0 -- ivy bigmem\\n 160 hosts 2980 0 -- ivy down\\n 1798 hosts 35960 34166 -- ivy in-use\\n 2618 hosts 2080 0 -- ivy offline\\n 30 hosts 600 30 -- ivy q=alphatst in-use\\n 72 hosts 1440 0 -- ivy q=devel\\n 50 hosts 940 0 -- ivy q=devel down\\n 501 hosts 10020 10005 -- ivy q=devel in-use\\n 9 hosts 180 0 -- ivy q=devel offline\\n 3 hosts 60 0 -- ivy q=devel {offline down}\\n 18 hosts 360 0 -- ivy q=diags\\n 5 hosts 20 0 -- ivy q=diags offline\\n 3 hosts 60 0 -- ivy q=orbit_spl\\n r417i0n1 0 0 0 ivy q=orbit_spl offline\\n r417i0n6 0 0 0 ivy q=orbit_spl offline\\n r401i7n3 20 0 0 ivy q=resize_test offline\\n 33 hosts 660 660 -- ivy q=smd_ops in-use\\n 10 hosts 200 0 -- ivy q=smd_ops offline\\n r469i5n0 20 0 0 ivy q=smd_ops {offline down}\\n 37 hosts 740 0 -- ivy {offline down}\\n r405i6n17 20 20 1 ivy {offline in-use}\\n r447i5n15 20 20 1 ivy {offline in-use}\\n 4 hosts 80 0 -- ivy_exa bigmem\\n ldan7 16 0 0 ldan\\n ldan8 16 0 0 ldan\\n ldan4 16 16 1 ldan in-use\\n ldan5 16 16 1 ldan in-use\\n 3 hosts 48 0 -- ldan offline\\n ldan6 16 16 1 ldan {offline in-use}\\n 4 hosts 16 0 -- san down\\n 520 hosts 8320 7906 -- san in-use\\n 996 hosts 496 0 -- san offline\\n 36 hosts 576 0 -- san q=datasciences\\n 36 hosts 576 0 -- san q=devel\\n 26 hosts 400 0 -- san q=devel down\\n 225 hosts 3600 3600 -- san q=devel in-use\\n 10 hosts 160 0 -- san q=devel offline\\n r303i1n14 16 0 0 san q=diags\\n 5 hosts 0 0 -- san q=diags offline\\n r303i2n6 16 0 0 san q=resize_test offline\\n 9 hosts 144 0 -- san q=smd_ops offline\\n r303i5n0 16 0 0 san {offline down}\\n r305i3n15 16 0 0 san {offline down}\\n 53 hosts 1272 0 -- wes\\n 13 hosts 312 0 -- wes bigmem\\n r138i3n15 24 0 0 wes bigmem down\\n r152i3n15 0 0 0 wes bigmem down\\n 5 hosts 120 34 -- 
wes bigmem in-use\\n r151i3n15 24 0 0 wes bigmem q=diags\\n 5 hosts 0 0 -- wes down\\n 1420 hosts 34080 14296 -- wes in-use\\n 1866 hosts 2304 0 -- wes offline\\n 157 hosts 3768 0 -- wes q=devel\\n r130i2n1 0 0 0 wes q=devel down\\n r138i1n0 0 0 0 wes q=devel down\\n 282 hosts 6768 3294 -- wes q=devel in-use\\n 11 hosts 168 0 -- wes q=devel offline\\n r129i1n10 24 0 0 wes q=diags\\n 3 hosts 24 0 -- wes q=diags down\\n 12 hosts 0 0 -- wes q=diags offline\\n 4 hosts 96 0 -- wes q=resize_test\\n r144i3n0 24 0 0 wes q=smd_ops offline\\n r222i3n15 0 0 0 wes q=smd_ops offline\\n r203i3n3 24 0 0 wes {offline down}\\n 61 hosts 1464 0 -- wes_gpu m2090\\n r219i0n2 0 0 0 wes_gpu m2090 down\\n r219i0n7 0 0 0 wes_gpu m2090 down\\n r219i3n12 24 0 0 wes_gpu m2090 offline\\n\\nGroup Share% Use% Share Exempt Use Avail Borrowed Ratio Waiting\\n------- ------ ---- ----- ------ ----- ----- -------- ----- -------\\nOverall 100 0 86390 2 124 86266 0 0.00 2932\\n ARMD 27 30 23722 0 25944 0 2222 1.09 67288\\n HEOMD 26 19 22859 0 17032 5827 0 0.75 37156\\n SMD 43 38 37093 601 33468 3625 0 0.90 129344\\n NEX 1 0 862 0 772 90 0 0.90 3016\\n NAS 2 0 1725 0 80 1645 0 0.05 8256\\n\\n -janice