On the Cluster Status page, I got the following info:
1,768,892,710 Documents
1.3TB Primary Size
1.3TB Total Size
1 Data Instances
332 Total Shards
34 Indices
but on my Linux system, df reports:
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  6.9T  6.5T  123G  99% /
Is this normal, or do I have to archive the logs to clean up space?
Thanks
cluster status and actual size
Re: cluster status and actual size
Closed indices will take up space on the hard drive but will not be counted on the Cluster Status page. I would recommend setting up a repo to store old data if you need to save it, and configuring Nagios Log Server (NLS) to delete older indices to free up space on the local drive.
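You can check which indices are still sitting on disk, including closed ones that the Cluster Status page skips, by asking Elasticsearch directly. A rough example, assuming the default local instance on port 9200 (closed indices should show "close" in the status column):

# list every index with its status and on-disk size
curl -s 'http://localhost:9200/_cat/indices?v'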
The delete option is found under Admin > System > Snapshots & Maintenance > Maintenance Settings > Delete indexes older than.
Setting up a repo is covered in:
https://assets.nagios.com/downloads/nag ... enance.pdf
https://assets.nagios.com/downloads/nag ... ations.pdf
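As a quick reference, registering a shared-filesystem snapshot repository against the underlying Elasticsearch instance looks roughly like this. The repository name nls_backup and the path /repo/nls_backup are only examples; the directory has to be writable by Elasticsearch and listed under path.repo in elasticsearch.yml on every node:

# register an fs-type snapshot repository
# ("compress" only applies to the snapshot metadata files, not the index data)
curl -XPUT 'http://localhost:9200/_snapshot/nls_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/repo/nls_backup",
    "compress": true
  }
}'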
Re: cluster status and actual size
Now the indices folder takes up 6.6TB and continues to grow.
Can I gzip it after putting it in the snapshot repository?
Or just gzip the logstash-2019-xxx folder and delete the original one?
Re: cluster status and actual size
The deletion process should happen automatically. You can also force it from the command line by running:
curator delete indices --older-than DAYS --time-unit days --timestring %Y.%m.%d
(replacing DAYS with a number)
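If you want to keep a copy of an index in the snapshot repository before it is removed, rather than gzipping the raw logstash-2019-xxx folders by hand, a snapshot request along these lines should do it. This assumes the nls_backup repository sketched above; the snapshot and index names are just examples:

# snapshot a single index into the repository and wait for it to finish
curl -XPUT 'http://localhost:9200/_snapshot/nls_backup/logstash-2019.01.01?wait_for_completion=true' -d '{
  "indices": "logstash-2019.01.01"
}'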