jspink wrote:Path is /backups
I have 10 nodes total that all mount that at the root.
The nag01 to nag10 folders inside /backups were created from each node to ensure I had write access.

While they were created from each node, it appears they were created with the root user. The backups are going to run as the nagios user. You'll need to change the ownership, and also make sure that the machines have rw access to /backups as well.
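The ownership fix above can be sketched as follows. The paths and the scratch directory are assumptions for illustration, and chown needs root, so the production commands are shown as comments:

```shell
# Sketch, assuming the share is mounted at /backups with per-node
# directories nag01..nag10, demonstrated here on a scratch directory.
BACKUP_ROOT=/tmp/backups_demo   # stand-in for /backups
mkdir -p "$BACKUP_ROOT/nag01"

# Production equivalent, run as root on each nagNN directory:
#   chown -R nagios:nagios /backups/nag01
#   chmod -R u+rwX /backups/nag01

chmod -R u+rwX "$BACKUP_ROOT/nag01"

# Verify the backup user can actually write there before the job runs:
touch "$BACKUP_ROOT/nag01/probe" && echo "writable"
```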
Backup Jobs
Re: Backup Jobs
Former Nagios Employee
Re: Backup Jobs
I was able to re-create my fstab entry, giving read/write to all accounts
Backup is running correctly now
Thanks for pointing me in the right direction on this.
Nagios Log Server: 10 Instances - 3,916,302,797 documents last check in 180 shards
Re: Backup Jobs
Awesome! I figured it was just the nagios user after not seeing the read flag.
Since you have a larger environment, specifically doing backups via NFS - I wanted to let you know about this document which might help you. http://nfs.sourceforge.net/nfs-howto/ar01s05.html
One of our developers mentioned it to me. Take a look at the 5.1 section. Setting your rsize and wsize will help to optimize the traffic to backups. You should be able to send 32K block size at a time, versus what might be 8K currently.
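For reference, an fstab line with those options might look like this. The server name is a placeholder and the surrounding options are examples to adapt to your environment; 32768 bytes is the 32K block size mentioned above:

```
# /etc/fstab - example only
nfsserver:/backups  /backups  nfs  rw,rsize=32768,wsize=32768  0 0
```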
Former Nagios Employee
Re: Backup Jobs
rkennedy wrote:Awesome! I figured it was just the nagios user after not seeing the read flag.
Since you have a larger environment, specifically doing backups via NFS - I wanted to let you know about this document which might help you. http://nfs.sourceforge.net/nfs-howto/ar01s05.html
One of our developers mentioned it to me. Take a look at the 5.1 section. Setting your rsize and wsize will help to optimize the traffic to backups. You should be able to send 32K block size at a time, versus what might be 8K currently.

Perfect - reading through now - thank you.
Being a larger environment, and just getting into the backups, can I expect a one-for-one between existing cluster data size and backup size, or is there compression?
Can I specify "only backup days x-y"?
If there is a doc somewhere I can read that provides all this data, I'd be happy to read up on my own.
For obvious reasons, I don't want a one-for-one backup of these data sizes.
Nagios Log Server: 10 Instances - 3,916,302,797 documents last check in 180 shards
Re: Backup Jobs
Sure, take a look at this link (specifically the 3rd page) - https://assets.nagios.com/downloads/nag ... enance.pdf
It won't let you select specific days, but you can set it to delete backups after x days.
As for compression, it will compress the mappings and settings, but not the data files.
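Since the data files are stored uncompressed, a quick way to gauge how close the backup is to one-for-one is to compare directory totals with du. The scratch directory below just makes the command concrete; in production you would point it at the real data and backup paths:

```shell
# du -sh prints a human-readable per-directory total; in production,
# point it at the Elasticsearch data path and the /backups target.
mkdir -p /tmp/size_demo
dd if=/dev/zero of=/tmp/size_demo/blob bs=1024 count=64 2>/dev/null
du -sh /tmp/size_demo
```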
Former Nagios Employee