Mounting NAS storage device
Hello,
My log server is already full, so I want to mount network shared storage on Nagios Log Server.
I can't seem to get my Elasticsearch running, and smbclient is not present in the OS.
Please advise.
Re: Mounting NAS storage device
Can you expand your current storage? It isn't advised to run NLS on a NAS.
What is the output of lsblk?
Former Nagios Employee
Re: Mounting NAS storage device
Here is the output of that.
The thing is, I receive 12 GB of logs every single day on this server.
So even if I keep increasing the size of the hard disk, it's no use, since it's going to get full anyway.
That's why I changed.
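At that rate the math is stark; a quick back-of-the-envelope sketch (the free-space figure is taken from the df output later in the thread):

```shell
# At ~12 GB of new logs per day, free disk is consumed in FREE/DAILY days.
DAILY_GB=12
FREE_GB=62   # free space on / as shown by df further down
echo "$(( FREE_GB / DAILY_GB )) days until the disk fills"
```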
Re: Mounting NAS storage device
I think what you'll want to do is only keep the logs you want to be able to search within NLS, and have it archive the older logs to your NAS. Then you can have your backup job move those archived logs to tape or external disk (or whatever your long-term archival solution is) on a periodic basis. Alternatively you can just have the NLS maintenance tasks delete backups older than N days.
We are using a CIFS share on our backup server as our archive location; I am not advising this as a 100% correct/recommended way to manage your logs, but it's working reasonably well for us, so in case it helps you, this is what I did:
1. Created a 'hidden' share on the server called NLS$ and gave read/write permissions to a local user account I created on the Windows server itself for the purpose (called NLS).
2. On the Nagios Log Server host, I edited /etc/fstab and added the following (note: this should all be on one long line):
Code: Select all
# share on bkup-01 for archiving data
//10.xx.yy.zz/NLS$ /repo1 cifs rw,sec=ntlm,cred=/etc/cred.bkup-01,uid=500,gid=500,noforceuid,noforcegid,file_mode=0770,dir_mode=0770,serverino,rsize=16384,wsize=65536 0 0
3. I created the file referenced above (cred=/etc/cred.bkup-01) with the following content, and made sure it was only readable by root (the 'domain' name is the name of the server, so it will probably just be the name of your NAS, unless it's joined to a domain and you're using a domain account for authentication):
Code: Select all
username=NLS
password=xxxxxxxxx
domain=BKUP-01
4. Made an empty directory with mkdir /repo1, as referenced in the /etc/fstab entry.
5. Entered mount /repo1 to test that it worked. I don't recall offhand if I had to install any additional modules for the needed functionality, but you can always yum install cifs-utils or similar if needed.
6. Rebooted the server and made sure it automatically mounted the repository at startup:
Code: Select all
[root@logserver /]# df -h
Filesystem           Size  Used Avail Use% Mounted on
rootfs               187G  124G   62G  67% /
devtmpfs             7.9G  152K  7.9G   1% /dev
tmpfs                7.9G     0  7.9G   0% /dev/shm
/dev/sda1            187G  124G   62G  67% /
//10.xx.yy.zz/NLS$/   16T  9.8T  6.3T  62% /repo1
7. Now that the server has a bunch of space available under a local path, go to the administration part of the NLS UI, Backup/Maintenance section, and use the "Create Repository" button at the top right to add a new repository at your mounted path (i.e. /repo1).
8. Select the repository you created using the dropdown next to "Repository to store backups in" and configure the other maintenance settings as you wish. There is a (slightly out of date) document that describes the options: Managing Backups and Maintenance.
The main things are: "close indexes older than" to reduce the amount of memory you need - closed indexes only consume disk space, not memory, but cannot be searched. "Delete indexes older than" can be used to automatically delete your old, closed, indexes to free up the disk space. Backups are automatically made of all your indexes daily (to the repository location) so when they get old enough to be deleted, there will already be a backup snapshot in your repository (i.e. on your NAS). In the event you need to restore it, you can do so from the snapshots list in the right column.
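Under the hood, closing and deleting indexes map to plain Elasticsearch REST calls. A hedged sketch of the equivalent manual operations, not the NLS maintenance job itself; the host, port, and index name are assumptions:

```shell
# Close an old index (frees heap; data stays on disk and can be reopened),
# then delete it once a snapshot of it exists in the repository.
ES=http://localhost:9200
IDX=logstash-2016.01.01   # example name; NLS creates one index per day
# Only attempt the calls if Elasticsearch is reachable.
if curl -s -o /dev/null --max-time 2 "$ES"; then
    curl -s -XPOST "$ES/$IDX/_close"
    curl -s -XDELETE "$ES/$IDX"
else
    echo "Elasticsearch is not reachable at $ES"
fi
```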
Our indexes are approximately 16-17GB per day, so we're trying to keep around 4 days online, and deleting the local copy after 5 days. Realistically, we rarely do searches over more than the current and previous index.
Also, if your NAS supports exporting via NFS, you could consider using that rather than CIFS. Probably won't make much/any difference though, as it's only used during backup/restore operations, so probably best to just use whatever you're most comfortable administering.
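One safeguard worth adding to the steps above (a hedged sketch; /repo1 is the mount point used throughout): a failed CIFS mount silently leaves an ordinary empty directory at the same path, so it pays to verify the share is really mounted before trusting backups to it.

```shell
# Check /proc/mounts before assuming backups are landing on the NAS;
# an unmounted /repo1 is just an empty local directory.
REPO=/repo1
if grep -qs " $REPO " /proc/mounts; then
    echo "$REPO is mounted"
else
    echo "$REPO is NOT mounted" >&2
fi
```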
Re: Mounting NAS storage device
@malger, thank you for your very detailed post. We really appreciate it!
@muleyl, did this answer your question?
Former Nagios Employee.
Re: Mounting NAS storage device
Yes @hsmith, this sounds like a good plan that I can work with.
Will let you know.
Re: Mounting NAS storage device
@malger, a question: did you change the data store path (DATA_DIR="/new/path/data") to the NAS, or is the data store path local to the system, with only the backups/repos mounted on the NAS?
Re: Mounting NAS storage device
It's worth noting that for data you will be querying (i.e. active data), it is always best to keep as much of it as possible on your _local_ disk.
Your remote disk should only be used for backups. This is a recommendation that comes from the development team of Elasticsearch:
Do not place the index on a remotely mounted filesystem (e.g. NFS or SMB/CIFS); use storage local to the machine instead.
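A quick way to check which filesystem your active data actually lives on; this is a hedged sketch, and the data path below is an assumption based on a default Nagios Log Server install:

```shell
# Print the device backing the Elasticsearch data directory; you want a
# local device like /dev/sda1 here, not //server/share or host:/export.
ES_DATA=/usr/local/nagioslogserver/elasticsearch/data   # assumed default path
df -P "$ES_DATA" 2>/dev/null | awk 'NR==2 {print $1}'
```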
Re: Mounting NAS storage device
So I did as @malger described, and now I'm having a different error.
Every time I start Logstash, it crashes.
When I check the status of Logstash, it says "logstash daemon dead but pid file exists".
Please help.
Thank you
Re: Mounting NAS storage device
Does Logstash run for any amount of time? I'm interested in the following:
The most common cause of this problem is insufficient memory on the cluster. You might try doubling the memory of your nodes to see whether or not that makes a difference.
Code: Select all
cat /var/log/logstash/logstash.log
free -m
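If the log shows nothing obvious, the "dead but pid file exists" state often just means a pid file left over from the crash. A hedged sketch of clearing it; the pid file path is an assumption, so check your init script for the real one:

```shell
# If the process named in the pid file no longer exists, the file is stale
# and can be removed before restarting Logstash.
PIDFILE=/var/run/logstash.pid   # assumed path; see /etc/init.d/logstash
if [ -f "$PIDFILE" ] && ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "removing stale pid file $PIDFILE"
    rm -f "$PIDFILE"
fi
# then: service logstash start, and re-check the log if it dies again
```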