Multiple retention periods

Posted: Wed Sep 15, 2021 6:21 am
by jgsupport
Hi,

We have a requirement of keeping most logs for 90 days, which I can set in the indices retention section.

There are some logs from a specific log file on multiple hosts that are very large and use a lot of space. We don't need these logs for more than 21 days. Is there any way we can set a separate retention period just for those log files on the specific hosts, so we can save space on the log server?

Re: Multiple retention periods

Posted: Wed Sep 15, 2021 4:40 pm
by pbroste
Hello @jgsupport

Thanks for reaching out about retention. I'll forward the following from a previous post.
This is a fairly common request, but not one that has been implemented yet. If one were so inclined, though, something could be set up as a cron job. The command to delete all of 192.168.55.2's logs from yesterday's index would look like:

curl -XDELETE 'http://localhost:9200/logstash-2018.11. ... 2.168.55.2'

As far as getting the size of data in bytes that a host sends, there isn't a good way to do this on the NLS end. You can, however, see the number of events a host sends, which should give you an idea of how much data it is sending. This can be done by applying a filter like "host:192.168.55.2" on the dashboard.
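For illustration, a hypothetical full form of the truncated command above could look like the sketch below. The index date (2018.11.14) is an assumed example, not the one from the original post, and the URL assumes the same delete-by-query form used later in this thread:

```shell
#!/bin/bash
# Hypothetical reconstruction (illustrative only): delete every
# document from host 192.168.55.2 in a single day's index, using
# the delete-by-query URL form shown elsewhere in this thread.
INDEX="logstash-2018.11.14"          # assumed example date
QUERY="host:192.168.55.2"
URL="http://localhost:9200/$INDEX/_query?q=$QUERY"
echo "$URL"
# The actual deletion would then be:
# curl -XDELETE "$URL"
```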
Please let us know if you have further questions,
Perry

Re: Multiple retention periods

Posted: Wed Sep 15, 2021 5:29 pm
by jgsupport
Hi Perry,

Thank you for your reply.

That is a good idea, but I want to know if there is a way we can pick and choose what to delete based on SourceModuleName.
For example, on any server we would have a Windows host (host=192.168.1.1) where Windows Event Logs, IIS logs, and a few other specific log files (C:\temp\log1.log, C:\temp\log2.log, C:\temp\log3.log) are all pushing data into NLS from the single server.

Is there a way we could delete only the messages coming from C:\temp\log1.log once their timestamp is 21 days old, but keep everything else from this host 192.168.1.1 for 60 days?

Also, the cron job above looks manual, in that you would have to go in every day and change the logstash index name based on the date. Is there any way to automate that?

Thanks

Re: Multiple retention periods

Posted: Thu Sep 16, 2021 2:53 pm
by pbroste
Hello @jgsupport

It does not appear that it would be possible for the Elasticsearch data for the entire host to be captured in one specific data cluster in '/usr/local/nagioslogserver/elasticsearch/data/'.

Thanks,
Perry

Re: Multiple retention periods

Posted: Thu Sep 23, 2021 12:19 am
by jgsupport
Would it be possible to set up a different repository or instance on the same server and have some hosts dump their logs into that repository instead of the default one in NLS?
It sounds like you would almost need a separate server?

Re: Multiple retention periods

Posted: Thu Sep 23, 2021 6:00 pm
by ssax
It can be done now that you've added the File field to the message in the other post, but you would need to do it via a cron job.

You can add this to /root/log_cleanup.sh:

Code:

#!/bin/bash
# This script will delete all records from your logserver indices based on this query:
#     host:192.168.1.1 AND File:"C:\\temp\\logfile.log" AND @timestamp:[* TO XXXX-XX-XXT20:00:00]
# The XXXX-XX-XX will be replaced with today's current day minus 21 days
# urlencode from https://gist.github.com/cdown/1163649

urlencode() {
    # urlencode <string>

    old_lc_collate=$LC_COLLATE
    LC_COLLATE=C

    local length="${#1}"
    for (( i = 0; i < length; i++ )); do
        local c="${1:$i:1}"
        case $c in
            [a-zA-Z0-9.~_-]) printf '%s' "$c" ;;
            *) printf '%%%02X' "'$c" ;;
        esac
    done

    LC_COLLATE=$old_lc_collate
}

# Setup the querystring
QUERYSTRING='host:192.168.1.1 AND File:"C:\\temp\\logfile.log" AND @timestamp:[* TO '$(date +%Y-%m-%d -d "-21 days")'T20:00:00]'

# URL encode the QUERYSTRING to be used by curl
ENCODED=$(urlencode "$QUERYSTRING")

curl -k -L --silent --max-time 300 -XDELETE "http://localhost:9200/logstash-*/_query?q=$ENCODED"
exit 0
Then add this to /etc/cron.d/custom:

Code:

0 20 * * * root /bin/bash /root/log_cleanup.sh
That would run the cleanup every day at 8pm.
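As a non-destructive sanity check before scheduling the deletion, the same query could first be run against Elasticsearch's `_count` endpoint to see how many documents would match. This is a sketch, not part of the script above; the `_count` call is shown commented out, and it uses curl's own `-G --data-urlencode` in place of the helper function:

```shell
#!/bin/bash
# Sketch: build the same 21-day cutoff query and (optionally) check
# how many documents it matches before running the destructive version.
CUTOFF=$(date +%Y-%m-%d -d "-21 days")
QUERYSTRING='host:192.168.1.1 AND File:"C:\\temp\\logfile.log" AND @timestamp:[* TO '"$CUTOFF"'T20:00:00]'
echo "$QUERYSTRING"
# curl can percent-encode the query itself: with -G, the
# --data-urlencode value is appended to the URL as ?q=..., so the
# urlencode helper is not required for this check.
# curl -k -L --silent --max-time 300 -G \
#      --data-urlencode "q=$QUERYSTRING" \
#      "http://localhost:9200/logstash-*/_count"
```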

Re: Multiple retention periods

Posted: Sun Sep 26, 2021 6:11 pm
by jgsupport
Thank you for that, I will try it out today.
Is there a way to add multiple hosts to the same job?

Could we do something like the below, or do we need to set up a separate .sh and cron job for each server?
I am not the best at Linux, so I don't really know the right syntax.

Thanks.

I am guessing nothing will need to change in urlencode() { }

# Setup the querystring
SERVER1='host:192.168.1.1 AND File:"C:\\temp\\logfile1.log" AND @timestamp:[* TO '`date +%Y-%m-%d -d "-21 days"`'T20:00:00]'
SERVER2='host:192.168.1.2 AND File:"C:\\temp\\logfile2.log" AND @timestamp:[* TO '`date +%Y-%m-%d -d "-21 days"`'T20:00:00]'
SERVER3='host:192.168.1.3 AND File:"C:\\temp\\logfile3.log" AND @timestamp:[* TO '`date +%Y-%m-%d -d "-21 days"`'T20:00:00]'

# URL encode the QUERYSTRING to be used by curl
ENCODED_SERVER1=$(urlencode "$SERVER1")
ENCODED_SERVER2=$(urlencode "$SERVER2")
ENCODED_SERVER3=$(urlencode "$SERVER3")

curl -k -L --silent --max-time 300 -XDELETE "http://localhost:9200/logstash-*/_query?q=$ENCODED_SERVER1"

curl -k -L --silent --max-time 300 -XDELETE "http://localhost:9200/logstash-*/_query?q=$ENCODED_SERVER2"

curl -k -L --silent --max-time 300 -XDELETE "http://localhost:9200/logstash-*/_query?q=$ENCODED_SERVER3"
exit 0

Re: Multiple retention periods

Posted: Mon Sep 27, 2021 3:54 pm
by ssax
That should work.
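For the record, the three near-identical query lines could also be collapsed into a loop, which keeps the host/file pairs in one place. This is only a sketch, assuming the same urlencode helper and query format as the script above; the curl calls are shown commented out:

```shell
#!/bin/bash
# Sketch: build the per-server queries in a loop rather than
# duplicating the line for each host (hosts and file paths are the
# examples from this thread).
CUTOFF="$(date +%Y-%m-%d -d '-21 days')T20:00:00"
HOSTS=("192.168.1.1" "192.168.1.2" "192.168.1.3")
FILES=('C:\\temp\\logfile1.log' 'C:\\temp\\logfile2.log' 'C:\\temp\\logfile3.log')
QUERIES=()
for i in "${!HOSTS[@]}"; do
    QUERIES+=("host:${HOSTS[$i]} AND File:\"${FILES[$i]}\" AND @timestamp:[* TO $CUTOFF]")
done
printf '%s\n' "${QUERIES[@]}"
# Each query would then be encoded and sent exactly as before:
# for Q in "${QUERIES[@]}"; do
#     ENCODED=$(urlencode "$Q")   # urlencode from the script above
#     curl -k -L --silent --max-time 300 -XDELETE \
#          "http://localhost:9200/logstash-*/_query?q=$ENCODED"
# done
```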