Mounting NAS storage device

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
muleyl
Posts: 11
Joined: Mon Dec 21, 2015 4:37 am

Mounting NAS storage device

Post by muleyl »

Hello,

My log server is already full, so I want to mount network shared storage on Nagios Log Server.
I can't seem to get Elasticsearch running, and smbclient is not present in the OS.
Please advise.
rkennedy
Posts: 6579
Joined: Mon Oct 05, 2015 11:45 am

Re: Mounting NAS storage device

Post by rkennedy »

Can you expand your current storage? It isn't advised to run NLS on a NAS.

What is the output of lsblk?
Former Nagios Employee
muleyl
Posts: 11
Joined: Mon Dec 21, 2015 4:37 am

Re: Mounting NAS storage device

Post by muleyl »

Here is the output of that.
The thing is, I receive 12 GB of logs every single day on this server.
So even if I keep increasing the size of the hard disk, there is no use since it's going to get full.
That's why I changed.
You do not have the required permissions to view the files attached to this post.
malger
Posts: 1
Joined: Tue Dec 01, 2015 4:57 am

Re: Mounting NAS storage device

Post by malger »

I think what you'll want to do is only keep the logs you want to be able to search within NLS, and have it archive the older logs to your NAS. Then you can have your backup job move those archived logs to tape or external disk (or whatever your long-term archival solution is) on a periodic basis. Alternatively you can just have the NLS maintenance tasks delete backups older than N days.

We are using a CIFS share on our backup server as our archive location; I am not advising this as a 100% correct/recommended way to manage your logs, but it's working reasonably well for us, so in case it helps you, this is what I did:

1. Created a 'hidden' share on the server called NLS$ and gave read/write permissions to a local user account I created on the Windows server itself for the purpose (called NLS).

2. On the Nagios Log Server host, I edited /etc/fstab and added the following (note: this should all be on one long line):

Code: Select all

 # share on bkup-01 for archiving data
 //10.xx.yy.zz/NLS$   /repo1   cifs   rw,sec=ntlm,cred=/etc/cred.bkup-01,uid=500,gid=500,noforceuid,noforcegid,file_mode=0770,dir_mode=0770,serverino,rsize=16384,wsize=65536    0 0
3. I created the file referenced above (cred=/etc/cred.bkup-01) with the following content:

Code: Select all

 username=NLS
 password=xxxxxxxxx
 domain=BKUP-01
(and made sure it was only readable by root; the 'domain' name is the name of the server, so it will probably just be the name of your NAS, unless it's joined to a domain and you're using a domain account for authentication).
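Locking the credentials file down can be sketched like this (a helper function for illustration, not part of the original steps; the intended target is /etc/cred.bkup-01 from the fstab entry above, and it must be run as root):

```shell
#!/bin/sh
# secure_cred_file: make a CIFS credentials file readable by root only.
secure_cred_file() {
    chown root:root "$1" 2>/dev/null || true   # needs root; skipped otherwise
    chmod 600 "$1"
    stat -c '%a' "$1"   # prints the resulting octal mode, should be 600
}
```

Then run `secure_cred_file /etc/cred.bkup-01` as root and check it reports 600.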

4. Made an empty directory with mkdir /repo1, as referenced in the /etc/fstab entry.

5. Entered mount /repo1 to test it worked. I don't recall offhand if I had to install any additional modules for the needed functionality, but you can always yum install cifs-utils or similar if needed.
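As a sanity check after step 5, something like the following confirms the share actually mounted, rather than silently writing into the empty /repo1 directory underneath (a sketch; `check_mount` is just an illustrative helper, and `mountpoint` comes from util-linux):

```shell
#!/bin/sh
# check_mount: report whether the given path is an active mountpoint.
check_mount() {
    if mountpoint -q "$1"; then
        echo "mounted"
    else
        echo "not mounted"
    fi
}

check_mount /    # "/" is always a mountpoint, so this prints "mounted"
```

Run `check_mount /repo1` after `mount /repo1`; "not mounted" means backups would land on the local disk instead of the NAS.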

6. Rebooted the server and made sure it automatically mounted the repository at startup.

Code: Select all

 [root@logserver /]# df -h
 Filesystem           Size  Used Avail Use% Mounted on
 rootfs               187G  124G   62G  67% /
 devtmpfs             7.9G  152K  7.9G   1% /dev
 tmpfs                7.9G     0  7.9G   0% /dev/shm
 /dev/sda1            187G  124G   62G  67% /
 //10.xx.yy.zz/NLS$/   16T  9.8T  6.3T  62% /repo1
7. Now that the server has a bunch of space available under a local path, go to the administration part of the NLS UI, Backup/Maintenance section, and use the "Create Repository" button at the top right to add a new repository at your mounted path (i.e. /repo1).

8. Select the repository you created using the dropdown next to "Repository to store backups in" and configure the other maintenance settings as you wish. There is a (slightly out of date) document that describes the options: Managing Backups and Maintenance.

The main settings are:

- "Close indexes older than" reduces the amount of memory you need: closed indexes only consume disk space, not memory, but cannot be searched.
- "Delete indexes older than" can be used to automatically delete your old, closed indexes to free up the disk space.

Backups of all your indexes are made automatically every day (to the repository location), so by the time they are old enough to be deleted, there will already be a backup snapshot in your repository (i.e. on your NAS). In the event you need to restore one, you can do so from the snapshots list in the right column.

Our indexes are approximately 16-17GB per day, so we're trying to keep around 4 days online, and deleting the local copy after 5 days. Realistically, we rarely do searches over more than the current and previous index.
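Those retention numbers translate into a rough local-disk estimate; a back-of-the-envelope sketch (16 GB/day and 5 days are just the figures from this post, substitute your own):

```shell
#!/bin/sh
# Rough local disk needed for indexes kept before the delete job removes them.
DAILY_INDEX_GB=16      # approximate index size per day
DELETE_AFTER_DAYS=5    # "delete indexes older than" setting
echo "$((DAILY_INDEX_GB * DELETE_AFTER_DAYS)) GB of local disk for retained indexes"
# prints: 80 GB of local disk for retained indexes
```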

Also, if your NAS supports exporting via NFS, you could consider using that rather than CIFS. Probably won't make much/any difference though, as it's only used during backup/restore operations, so probably best to just use whatever you're most comfortable administering.
User avatar
hsmith
Agent Smith
Posts: 3539
Joined: Thu Jul 30, 2015 11:09 am
Location: 127.0.0.1
Contact:

Re: Mounting NAS storage device

Post by hsmith »

@malger, thank you for your very detailed post. We really appreciate it!

@muleyl, did this answer your question?
Former Nagios Employee.
muleyl
Posts: 11
Joined: Mon Dec 21, 2015 4:37 am

Re: Mounting NAS storage device

Post by muleyl »

Yes @hsmith, this sounds like a good plan that I can work with.
Will let you know.
muleyl
Posts: 11
Joined: Mon Dec 21, 2015 4:37 am

Re: Mounting NAS storage device

Post by muleyl »

hsmith wrote:@malger, thank you for your very detailed post. We really appreciate it!

@muleyl, did this answer your question?

@malger, Question.

Did you change the data store path ( DATA_DIR="/new/path/data" ) to the NAS? Or is the data store path kept locally on the system, with only the backups/repos mounted on the NAS?
jolson
Attack Rabbit
Posts: 2560
Joined: Thu Feb 12, 2015 12:40 pm

Re: Mounting NAS storage device

Post by jolson »

It's worth noting that when we're discussing data that you will be querying (i.e. active data), it is always best to keep as much of that data as possible on your _local_ disk.

Your remote disk should only be used for backups. This is a recommendation that comes from the development team of Elasticsearch:
Do not place the index on a remotely mounted filesystem (e.g. NFS or SMB/CIFS); use storage local to the machine instead.
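One quick way to double-check where the index actually lives (a sketch; `fs_type` is just an illustrative helper, and you would point it at whatever DATA_DIR your node uses):

```shell
#!/bin/sh
# fs_type: print the filesystem type backing a path (ext4, xfs, cifs, nfs, ...).
fs_type() {
    df -PT "$1" | awk 'NR==2 {print $2}'
}

# If this prints cifs or nfs for your Elasticsearch data directory,
# the index is sitting on remote storage and should be moved to local disk.
fs_type /
```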
Twits Blog
Show me a man who lives alone and has a perpetually clean kitchen, and 8 times out of 9 I'll show you a man with detestable spiritual qualities.
muleyl
Posts: 11
Joined: Mon Dec 21, 2015 4:37 am

Re: Mounting NAS storage device

Post by muleyl »

So I did it according to @malger's instructions, and now I'm having a different error.
Every time I start Logstash it crashes.
When I check the status of Logstash, it says "logstash daemon dead but pid file exists".
Please help.

thank you
jolson
Attack Rabbit
Posts: 2560
Joined: Thu Feb 12, 2015 12:40 pm

Re: Mounting NAS storage device

Post by jolson »

Does Logstash run for any amount of time? I'm interested in the following:

Code: Select all

cat /var/log/logstash/logstash.log
free -m
The most common cause of this problem is insufficient memory on the cluster. You might try doubling the memory of your nodes to see whether or not that makes a difference.
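To read the `free -m` output quickly, something like this pulls out the totals (a sketch assuming GNU/Linux `free`; low free memory here is the usual culprit when the logstash daemon dies right after starting):

```shell
#!/bin/sh
# Summarize system memory in MB from `free -m`.
free -m | awk '/^Mem:/ {print "total:", $2, "MB  free:", $4, "MB"}'
```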
Twits Blog
Show me a man who lives alone and has a perpetually clean kitchen, and 8 times out of 9 I'll show you a man with detestable spiritual qualities.
Locked