Problem Creating Repository

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

What OS does your Nagios Log Server run on? Distribution: RedHat 8.3
What version of LogServer? Nagios Log Server 2.1.8

grep apache /etc/group
nagios:x:1001:nagios,apache
apache:x:48:nagios

ls -laZ /usr/local/nagioslogserver/archives/snapshots_repository
total 0
drwxrwxr-x. 2 nagios nagios unconfined_u:object_r:usr_t:s0 22 Jun 18 12:44 .
drwxrwxr-x. 3 nagios nagios unconfined_u:object_r:usr_t:s0 34 Jun 18 12:37 ..
-rw-r--r-- 1 nagios users ? 0 Jun 18 12:44 test.out

rpm -qa | grep openssh
openssh-clients-8.0p1-5.el8.x86_64
openssh-8.0p1-5.el8.x86_64
openssh-server-8.0p1-5.el8.x86_64

top - 14:42:00 up 0 min, 1 user, load average: 2.90, 0.89, 0.31

ls -la /usr/local/nagioslogserver
total 4
drwxrwxr-x. 11 nagios nagios 154 Jun 15 09:39 .
drwxr-xr-x. 14 root root 166 May 25 09:50 ..
drwxrwxr-x. 3 nagios nagios 34 Jun 18 12:37 archives
drwxr-xr-x. 7 nagios nagios 128 May 25 09:50 elasticsearch
drwxrwxr-x. 4 nagios nagios 37 May 25 09:49 etc
-rw-r--r--. 1 root root 0 May 25 09:50 .installed
drwxr-xr-x. 6 nagios nagios 171 May 25 09:49 logstash
drwxrwxr-x. 2 nagios nagios 62 May 25 09:49 mibs
drwxrwxr-x. 2 nagios nagios 4096 May 25 09:49 scripts
drwxrwxr-x. 2 nagios nagios 6 May 25 09:49 snapshots
drwxrwxr-x. 3 nagios apache 27 May 25 09:54 tmp
drwxrwxr-x. 2 nagios nagios 158 May 25 10:00 var

ls -la /usr/local/nagioslogserver/archives
total 0
drwxrwxr-x. 3 nagios nagios 34 Jun 18 12:37 .
drwxrwxr-x. 11 nagios nagios 154 Jun 15 09:39 ..
drwxrwxr-x. 2 nagios nagios 22 Jun 18 12:44 snapshots_repository

cd /usr/local/nagioslogserver/archives/
[******* archives]# ls
snapshots_repository
[******* archives]# rm -rf snapshots_repository

Problem continues:
NLS_Error_Capture.JPG
vtrac
Posts: 903
Joined: Tue Oct 27, 2020 1:35 pm

Re: Problem Creating Repository

Post by vtrac »

Hi,
How are you doing?
I have tested this on my Nagios LogServer and it completed just fine.
M1.png
In your case, you already have all the folders (directories) defined and the permissions are all correct.

Looks like the system is not able to read it.

Please try adding a different location. First, we will manually create the directories:

Code:

mkdir -p /store/snapshot_repository
chmod 775 /store/snapshot_repository
chown -R nagios:nagios /store/snapshot_repository
Now, please use the GUI to add the new repository:
Nagios Log Server GUI > Admin > Snapshots & Maintenance > select the "Maintenance & Repository Settings" tab > click "Create Repository"
M3.png
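For reference, a hedged command-line equivalent is registering the repository directly against the snapshot API of the bundled Elasticsearch (a sketch, not the official procedure: it assumes Elasticsearch answers on localhost:9200 and reuses the repository path from above):

```shell
# Request body for a shared-filesystem ("fs") repository; "compress" is optional.
BODY='{"type": "fs", "settings": {"location": "/store/snapshot_repository", "compress": true}}'

# Sanity-check the JSON locally before sending it.
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body ok"

# Register the repository, then read it back (run these on the server):
# curl -XPUT 'http://localhost:9200/_snapshot/snapshot_repository' -d "$BODY"
# curl -XGET 'http://localhost:9200/_snapshot/snapshot_repository?pretty'
```

If registration fails, the PUT response body usually carries the underlying exception, which is easier to read than the GUI error.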
If the above is still not working, please reboot the machine (OS) and see if that helps.

Code:

systemctl stop logstash
systemctl stop elasticsearch
reboot
Best Regards,
Vinh
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

NLS_Error_Capture2.JPG
Well, I completed all your requests with the same results. I even tried as the nagiosadmin user. I have an NLS test server which works without problems; there I even deleted and recreated the repository several times without any problems.

Question: Is there a command line version to create the repository so we could see exactly what the error or problem is?

NLS Production
[root@<NLS Production Server> archives]# ls -laZ /usr/local/nagioslogserver/archives/snapshots_repository
total 0
drwxrwxr-x 2 nagios nagios ? 6 Jun 21 19:29 .
drwxrwxr-x. 3 nagios nagios unconfined_u:object_r:usr_t:s0 34 Jun 21 19:29 ..

NLS Test
[root@<NLS Test Server> archives]# ls -laZ /usr/local/nagioslogserver/archives/snapshots_repository
total 16
drwxrwxr-x 3 nagios nagios ? 112 Jun 21 12:51 .
drwxrwxr-x 3 nagios nagios ? 34 Jun 21 07:17 ..
-rw-r--r-- 1 nagios users ? 45 Jun 21 12:51 index
drwxr-xr-x 31 nagios users ? 4096 Jun 21 12:48 indices
-rw-r--r-- 1 nagios users ? 521 Jun 21 12:48 metadata-curator-20210621174823
-rw-r--r-- 1 nagios users ? 358 Jun 21 12:51 snapshot-curator-20210621174823
vtrac
Posts: 903
Joined: Tue Oct 27, 2020 1:35 pm

Re: Problem Creating Repository

Post by vtrac »

Hi,
How are you doing?
Did you reboot the machine?

Code:

systemctl stop logstash
systemctl stop elasticsearch
reboot
Once rebooted, please try adding the repository again.


Best Regards,
Vinh
vtrac
Posts: 903
Joined: Tue Oct 27, 2020 1:35 pm

Re: Problem Creating Repository

Post by vtrac »

Hi,
I have just brought this issue up in our team meeting today and here is what they suggested.

Since your Log Server (LS) is clustered:
usiadoap773.us.landisgyr.io
usiadoap774.us.landisgyr.io

You can only create the "snapshot_repository" on a mount point that is shared between your two cluster machines.

So, please run "df -h" on both of your machines and find the mount point that is shared by both.
Once you have identified the shared mount point, you can manually create the folder there, then use the GUI to add the snapshot repository.
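The comparison can be sketched like this (a hedged sketch: the ssh lines assume root access between the nodes, and the printf lists below are illustrative stand-ins for real df output):

```shell
# Real workflow: collect the sorted mount targets from each node:
#   ssh usiadoap773 'df --output=target | sort' > /tmp/mounts.773
#   ssh usiadoap774 'df --output=target | sort' > /tmp/mounts.774
# Illustrative stand-ins so the comparison can be demonstrated:
printf '/\n/boot\n/home/nagios\n' > /tmp/mounts.773
printf '/\n/home/nagios\n/var\n'  > /tmp/mounts.774

# comm -12 prints only the lines present in BOTH (sorted) files,
# i.e. the mount points the two machines have in common.
comm -12 /tmp/mounts.773 /tmp/mounts.774   # → prints "/" and "/home/nagios"
```

Of the common mount points, only a network filesystem (not "/" or other local disks that merely share a name) is actually the same storage on both machines.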

Please let me know how that goes.


Best Regards,
Vinh
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

Tried using the shared mount point between machines. Same results.
NLS_Error_Capture3.JPG
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

Found this error in the elasticsearch log:
[2021-06-22 16:40:23,050][WARN ][repositories ] [e7201f4e-1999-4427-9e00-d2aebd3fa67b] [snapshot_repository] failed to verify repository
org.elasticsearch.repositories.RepositoryVerificationException: [snapshot_repository] a file written by master to the store [/usr/local/nagioslogserver] cannot be accessed on the node [[e7201f4e-1999-4427-9e00-d2aebd3fa67b][rZd9eP8MS8yJapk5IQAmXQ][usiadoap773.us.landisgyr.io][inet[/10.7.7.73:9300]]{max_local_storage_nodes=1}]. This might indicate that the store [/usr/local/nagioslogserver] is not shared between this node and the master node or that permissions on the store don't allow reading files written by the master node

I'm reviewing in an attempt to figure out what this really means.
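What the check behind that message does can be sketched as follows (a hedged illustration of the idea, not Elasticsearch's actual code; the paths are stand-ins):

```shell
# Repository verification in miniature: the master writes a marker file
# into the repository location, and every other node must be able to read
# that exact file through its own mount of the same path.
REPO=$(mktemp -d)              # stands in for the shared repository mount

# "master" node writes the verification marker
echo verify > "$REPO/master-marker"

# a "data" node then tries to read it; on a real cluster this only
# succeeds when $REPO is one shared (e.g. NFS) filesystem on both nodes
if grep -q verify "$REPO/master-marker" 2>/dev/null; then
  echo "repository verified"
else
  echo "cannot be accessed on the node"   # the failure seen in the log above
fi
```

On a single machine this always prints "repository verified"; across two machines it only does so when the path really is one shared filesystem, which is exactly what the exception says is not the case for /usr/local/nagioslogserver.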
vtrac
Posts: 903
Joined: Tue Oct 27, 2020 1:35 pm

Re: Problem Creating Repository

Post by vtrac »

Hi,
We need to find out the same cluster mount point on both of your two machines.
The "/usr/local/nagioslogserver" might NOT be the right mount point, as it is where Log Server is installed.

Please run the commands below on BOTH of your cluster machines and upload the outputs:

Code:

df -h

mount

cat /etc/fstab
Please also upload the following file from BOTH of your cluster machines, so I can take a look.

/usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml



Best Regards,
Vinh
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

There also appears to be an hour difference in the timestamps between the two cluster nodes' /var/log/elasticsearch/* log data?
mrochelle
Posts: 238
Joined: Fri May 04, 2012 11:20 am
Location: Heart of America

Re: Problem Creating Repository

Post by mrochelle »

[root@usiadoap773 elasticsearch]# df -h
    Filesystem Size Used Avail Use% Mounted on
    devtmpfs 7.8G 0 7.8G 0% /dev
    tmpfs 7.8G 224K 7.8G 1% /dev/shm
    tmpfs 7.8G 1.1M 7.8G 1% /run
    tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
    /dev/mapper/cl-root 43G 2.9G 40G 7% /
    tmpfs 7.8G 176K 7.8G 1% /tmp
    /dev/sdb 250G 6.4G 244G 3% /usr/local/nagioslogserver
    /dev/sda2 976M 587M 323M 65% /boot
    /dev/sda1 599M 6.9M 592M 2% /boot/efi
    /dev/mapper/cl-var 8.0G 557M 7.5G 7% /var
    tmpfs 7.8G 0 7.8G 0% /var/tmp
    /dev/mapper/cl-var_log 6.0G 1.4G 4.7G 23% /var/log
    /dev/mapper/cl-var_log_audit 2.0G 84M 2.0G 5% /var/log/audit
    10.7.20.33:/home/nagios 495G 267G 229G 54% /home/nagios
    tmpfs 1.6G 0 1.6G 0% /run/user/1017
    tmpfs 1.6G 0 1.6G 0% /run/user/3555
    10.7.20.33:/home/mrochelle 495G 267G 229G 54% /home/mrochelle
    tmpfs 1.6G 0 1.6G 0% /run/user/48

    [root@usiadoap774 elasticsearch]# df -h
    Filesystem Size Used Avail Use% Mounted on
    devtmpfs 7.8G 0 7.8G 0% /dev
    tmpfs 7.8G 224K 7.8G 1% /dev/shm
    tmpfs 7.8G 1.2M 7.8G 1% /run
    tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
    /dev/mapper/cl-root 43G 2.9G 40G 7% /
    tmpfs 7.8G 192K 7.8G 1% /tmp
    /dev/mapper/cl-var 8.0G 694M 7.4G 9% /var
    tmpfs 7.8G 0 7.8G 0% /var/tmp
    /dev/sda2 976M 587M 323M 65% /boot
    /dev/sda1 599M 6.9M 592M 2% /boot/efi
    /dev/mapper/cl-var_log 6.0G 1.3G 4.8G 22% /var/log
    /dev/mapper/cl-var_log_audit 2.0G 80M 2.0G 4% /var/log/audit
    /dev/sdb 250G 6.3G 244G 3% /usr/local/nagioslogserver
    10.7.20.33:/home/nagios 495G 267G 229G 54% /home/nagios
    tmpfs 1.6G 0 1.6G 0% /run/user/1017
    tmpfs 1.6G 0 1.6G 0% /run/user/3555
    10.7.20.32:/home/mrochelle 495G 267G 229G 54% /home/mrochelle
    tmpfs 1.6G 0 1.6G 0% /run/user/48
[root@usiadoap773 elasticsearch]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=8075988k,nr_inodes=2018997,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
none on /sys/kernel/tracing type tracefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/cl-root on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=43,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=16734)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sdb on /usr/local/nagioslogserver type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/sda2 on /boot type ext4 (rw,relatime)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
/dev/mapper/cl-var on /var type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/cl-var_log on /var/log type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/cl-var_log_audit on /var/log/audit type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/etc/auto.master.d/home.conf on /home type autofs (rw,relatime,fd=6,pgrp=1676,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=34634)
10.7.20.33:/home/nagios on /home/nagios type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.7.20.33,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.7.20.33)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1017 type tmpfs (rw,nosuid,nodev,relatime,size=1618616k,mode=700,uid=1017,gid=100)
tmpfs on /run/user/3555 type tmpfs (rw,nosuid,nodev,relatime,size=1618616k,mode=700,uid=3555,gid=100)
10.7.20.33:/home/mrochelle on /home/mrochelle type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.7.20.33,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.7.20.33)
tmpfs on /run/user/48 type tmpfs (rw,nosuid,nodev,relatime,size=1618616k,mode=700,uid=48,gid=48)

[root@usiadoap774 elasticsearch]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8075988k,nr_inodes=2018997,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime,seclabel)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
none on /sys/kernel/tracing type tracefs (rw,relatime,seclabel)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/cl-root on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=43,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=25089)
debugfs on /sys/kernel/debug type debugfs (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime,seclabel)
/dev/mapper/cl-var on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
tmpfs on /var/tmp type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel)
/dev/sda2 on /boot type ext4 (rw,relatime,seclabel)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
/dev/mapper/cl-var_log on /var/log type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/cl-var_log_audit on /var/log/audit type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/etc/auto.master.d/home.conf on /home type autofs (rw,relatime,fd=6,pgrp=7407,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=79075)
/dev/sdb on /usr/local/nagioslogserver type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
10.7.20.33:/home/nagios on /home/nagios type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.7.20.33,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.7.20.33)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/1017 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=1618616k,mode=700,uid=1017,gid=100)
tmpfs on /run/user/3555 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=1618616k,mode=700,uid=3555,gid=100)
10.7.20.32:/home/mrochelle on /home/mrochelle type nfs (rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.7.20.32,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=10.7.20.32)
tmpfs on /run/user/48 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=1618616k,mode=700,uid=48,gid=48)


[root@usiadoap773 elasticsearch]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue May 12 14:38:16 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=f8f32cec-8bcf-41c8-a258-3c2dfde90ea1 /boot ext4 defaults 1 2
UUID=D113-0335 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/cl-var /var xfs defaults 0 0
/dev/mapper/cl-var_log /var/log xfs defaults 0 0
/dev/mapper/cl-var_log_audit /var/log/audit xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /var/tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime 0 0
UUID="58609f34-0502-49b0-a23d-ee2e33f093f5" /usr/local/nagioslogserver xfs defaults 0 0

[root@usiadoap774 elasticsearch]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue May 12 14:38:16 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root / xfs defaults 0 0
UUID=f8f32cec-8bcf-41c8-a258-3c2dfde90ea1 /boot ext4 defaults 1 2
UUID=D113-0335 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/cl-var /var xfs defaults 0 0
/dev/mapper/cl-var_log /var/log xfs defaults 0 0
/dev/mapper/cl-var_log_audit /var/log/audit xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /var/tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,noexec,relatime 0 0
UUID="f2eebcdb-a9ef-49eb-be42-bee4e280df44" /usr/local/nagioslogserver xfs defaults 0 0
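A quick, hedged way to check what kind of filesystem backs a given path (GNU coreutils `stat`; the cluster paths in the comments are the ones from the outputs above):

```shell
# stat -f -c %T prints the filesystem type backing a path. A repository
# path must be a network filesystem (e.g. "nfs") on every node; a local
# type such as "xfs" means each node sees its own private disk there.
stat -f -c %T /    # a local type on these machines

# On the cluster, compare:
#   stat -f -c %T /usr/local/nagioslogserver   # local per-node disk (/dev/sdb)
#   stat -f -c %T /home/nagios                 # NFS mount from 10.7.20.33
```

Note the two fstab entries above mount /usr/local/nagioslogserver from different UUIDs, i.e. a separate local disk on each node.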

[root@usiadoap773 elasticsearch]# cat /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml
##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elast ... setup.html>.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
# node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elast ... ation.html>


################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: nagios_elasticsearch


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name: "Franz Kafka"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
#
# Allow this node to store data (enabled by default):
#
# node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
# This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
# to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
# to act as a "search load balancer" (fetching data from nodes,
# aggregating results, etc.)
#
# node.master: false
# node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
# node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:
node.max_local_storage_nodes: 1


#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elast ... dules.html> and
# <http://elasticsearch.org/guide/en/elast ... index.html>
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#
# index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#
# index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
# index.number_of_shards: 1
# index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows to
# _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
# cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.


#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
# path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
# path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
# path.work: /path/to/work

# Path to log files:
#
# path.logs: /path/to/logs

# Path to where plugins are installed:
#
# path.plugins: /path/to/plugins


#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#
# plugin.mandatory: mapper-attachments,lang-groovy


################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.


############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
# network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
# network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
# network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
# transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#
transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
# http.port: 9200

# Set a custom allowed content length:
#
# http.max_content_length: 100mb

# Disable HTTP completely:
#
# http.enabled: false

# Set the HTTP host to listen to
#
http.host: "localhost"

################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elast ... teway.html>.

# The default gateway type is the "local" gateway (recommended):
#
# gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).

# Allow recovery process after N nodes in a cluster are up:
#
# gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
# gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
# gateway.expected_nodes: 2


############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

[root@usiadoap774 elasticsearch]# cat /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml
##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elast ... setup.html>.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of a certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
# node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elast ... ation.html>


################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: nagios_elasticsearch


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
# node.name: "Franz Kafka"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny storing data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
# node.master: true
#
# Allow this node to store data (enabled by default):
#
# node.data: true

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
# This will be the "workhorse" of your cluster.
#
# node.master: false
# node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
# to have free resources. This will be the "coordinator" of your cluster.
#
# node.master: true
# node.data: false
#
# 3. You want this node to be neither master nor data node, but
# to act as a "search load balancer" (fetching data from nodes,
# aggregating results, etc.)
#
# node.master: false
# node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
# node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation
# location. To disable this behaviour, set the following:
node.max_local_storage_nodes: 1


#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elast ... dules.html> and
# <http://elasticsearch.org/guide/en/elast ... index.html>
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#
# index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#
# index.number_of_replicas: 1

# Note that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
# index.number_of_shards: 1
# index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows you to
# _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
# cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care of load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.


#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
# path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
# path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) at the file level, favouring the location with the
# most free space on creation. For example:
#
# path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
# path.work: /path/to/work

# Path to log files:
#
# path.logs: /path/to/logs

# Path to where plugins are installed:
#
# path.plugins: /path/to/plugins
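
# Note: on Elasticsearch 1.6 and later, a shared-filesystem snapshot repository
# can only be registered if its location is whitelisted via "path.repo"; if the
# location is missing from this list, creating the repository fails. Example
# (this path is the one used elsewhere in this thread; adjust to your system):
#
# path.repo: ["/usr/local/nagioslogserver/archives/snapshots_repository"]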


#################################### Plugin ###################################

# If a plugin listed here is not installed for the current node, the node will not start.
#
# plugin.mandatory: mapper-attachments,lang-groovy


################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.


############################## Network And HTTP ###############################

# Elasticsearch binds to the 0.0.0.0 address by default, and listens
# on ports [9200-9300] for HTTP traffic and on ports [9300-9400] for node-to-node
# communication. (The range means that if a port is busy, the next port is
# tried automatically.)

# Set the bind address specifically (IPv4 or IPv6):
#
# network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
# network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
# network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
# transport.tcp.port: 9300

# Enable compression for all communication between nodes (disabled by default):
#
transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
# http.port: 9200

# Set a custom allowed content length:
#
# http.max_content_length: 100mb

# Disable HTTP completely:
#
# http.enabled: false

# Set the HTTP host to listen to
#
http.host: "localhost"

################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elast ... teway.html>.

# The default gateway type is the "local" gateway (recommended):
#
# gateway.type: local

# The settings below control how and when the initial recovery process starts on
# a full cluster restart (so that as much local data as possible is reused when
# using a shared gateway).

# Allow recovery process after N nodes in a cluster are up:
#
# gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from the previous setting are up (accepts a time value):
#
# gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
# gateway.expected_nodes: 2


############################# Recovery Throttling #############################

# These settings allow you to control the process of shard allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
# cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#
# cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#
# indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
# indices.recovery.concurrent_streams: 5


################################## Discovery ##################################

# The discovery infrastructure ensures that nodes can be found within a cluster
# and that a master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master-eligible nodes in order to be considered
# operational within the cluster. It's recommended to set this to a value higher
# than 1 when running more than 2 nodes in the cluster.
#
# discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
# discovery.zen.ping.timeout: 3s

# For more information, see
# <http://elasticsearch.org/guide/en/elast ... y-zen.html>

# Unicast discovery allows you to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not available,
# or to restrict which nodes the cluster communicates with.
#
# 1. Disable multicast discovery (enabled by default):
#
discovery.zen.ping.multicast.enabled: false
#
# 2. Configure an initial list of master nodes in the cluster
# to perform discovery when new nodes (master or data) are started:
#
discovery.zen.ping.unicast.hosts: ["localhost"]
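# In a multi-node cluster, each instance should list the other nodes here so
# that discovery succeeds, for example (the addresses below are placeholders):
#
# discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2", "10.0.0.3"]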

# EC2 discovery uses the AWS EC2 API to perform discovery.
#
# You must install the cloud-aws plugin to enable EC2 discovery.
#
# For more information, see
# <http://elasticsearch.org/guide/en/elast ... y-ec2.html>
#
# See <http://elasticsearch.org/tutorials/elas ... ch-on-ec2/>
# for a step-by-step tutorial.

# GCE discovery uses the Google Compute Engine API to perform discovery.
#
# You must install the cloud-gce plugin to enable GCE discovery.
#
# For more information, see <https://github.com/elasticsearch/elasti ... -cloud-gce>.

# Azure discovery uses the Azure API to perform discovery.
#
# You must install the cloud-azure plugin to enable Azure discovery.
#
# For more information, see <https://github.com/elasticsearch/elasti ... loud-azure>.

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging ################################

#monitor.jvm.gc.young.warn: 1000ms
#monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

#monitor.jvm.gc.old.warn: 10s
#monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s