I'm unable to manually delete indices in NLS 2.0.2. An index appears to delete, and then some five minutes later it's back. I've tried re-deleting, and it just comes back again. This isn't an isolated index that won't delete; there are some thirty of them that just won't go away.
Same thing with closing an index (or several): it just re-opens 5+ minutes later.
How do I get rid of these indices?
Jonathan
Unable to delete indices
Re: Unable to delete indices
Are you trying to delete/close the indices with the actions found under Admin > System > Index Status, or with some other method? I'd be curious to see if anything is logged in the elasticsearch log when an index is recreated. Try running:
tail -f /var/log/elasticsearch/<CLUSTER_ID>.log
Then delete an index and watch the log to see if anything interesting is logged when the index reappears. If you have multiple nodes in a cluster, run the tail command on all of them at the same time. Also, do the recreated indices stay small, or do they grow?
You could also go the manual route of deleting an index:
curl -XDELETE 'http://localhost:9200/logstash-YYYY.MM.DD/'
Replacing YYYY.MM.DD with the date you'd like to delete.
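While tailing, you can cut the noise down to just the events that matter. Here's a minimal sketch that keeps only the delete/create lines for a single index; the log path and index name are placeholders, so substitute your own values:

```shell
# Minimal sketch: keep only delete/create events for one index while tailing.
# LOG path and INDEX name are assumptions -- substitute your own values.
LOG="/var/log/elasticsearch/<CLUSTER_ID>.log"
INDEX="logstash-2017.08.15"

filter_index_events() {
    # reads elasticsearch log lines on stdin, keeps only the
    # "deleting index" / "creating index" events for the index named in $1
    grep -E "\[$1\] (deleting|creating) index"
}

# Usage on each node:
#   tail -f "$LOG" | filter_index_events "$INDEX"
```

Run the tail through the filter on every node at once; whichever node logs a "creating index" line after your delete is the one receiving the stray data.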
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
Re: Unable to delete indices
Yes, I'm using the Admin > System > Index Status to delete these indices. (or at least trying to delete)
I deleted a bunch of indices from 2017 and had the tail command running on the nodes. Nodes 1, 2, 3, 4, and 6 didn't show any changes. Node 5 showed this:
[2018-04-18 15:01:28,277][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.28] deleting index
[2018-04-18 15:01:28,624][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.26] deleting index
[2018-04-18 15:01:28,976][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.20] deleting index
[2018-04-18 15:01:29,115][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.21] deleting index
[2018-04-18 15:01:29,249][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.20] deleting index
[2018-04-18 15:01:29,544][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.17] deleting index
[2018-04-18 15:01:29,702][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.30] deleting index
[2018-04-18 15:01:29,841][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.29] deleting index
[2018-04-18 15:01:30,126][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.28] deleting index
[2018-04-18 15:01:30,285][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.27] deleting index
[2018-04-18 15:01:30,937][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.26] deleting index
[2018-04-18 15:01:31,126][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.05] deleting index
[2018-04-18 15:01:31,270][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.04] deleting index
[2018-04-18 15:01:31,445][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.31] deleting index
[2018-04-18 15:01:31,582][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.29] deleting index
[2018-04-18 15:01:31,766][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.25] deleting index
[2018-04-18 15:01:31,964][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.24] deleting index
[2018-04-18 15:01:32,143][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.23] deleting index
[2018-04-18 15:01:32,324][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.16] deleting index
[2018-04-18 15:01:32,477][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.15] deleting index
[2018-04-18 15:01:49,990][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.27] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:01:50,831][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.27] update_mapping [syslog] (dynamic)
[2018-04-18 15:02:06,463][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.28] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:02:07,516][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.28] update_mapping [syslog] (dynamic)
[2018-04-18 15:03:33,905][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.28] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:03:34,941][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.28] update_mapping [syslog] (dynamic)
[2018-04-18 15:03:54,642][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.20] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:03:55,485][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.20] update_mapping [syslog] (dynamic)
[2018-04-18 15:04:34,380][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.30] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:04:35,063][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.12.30] update_mapping [syslog] (dynamic)
[2018-04-18 15:04:49,076][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.30] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:04:49,774][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.10.30] update_mapping [syslog] (dynamic)
[2018-04-18 15:05:12,515][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.17] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:05:13,557][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.17] update_mapping [syslog] (dynamic)
[2018-04-18 15:06:52,505][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.21] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:06:53,296][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.11.21] update_mapping [syslog] (dynamic)
[2018-04-18 15:07:24,646][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2018.02.17] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:07:25,527][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2018.02.17] update_mapping [syslog] (dynamic)
[2018-04-18 15:13:50,348][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.29] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [syslog, _default_]
[2018-04-18 15:13:50,971][INFO ][cluster.metadata ] [1482f241-7e6a-44f0-9e88-cba38cfb2a7f] [logstash-2017.08.29] update_mapping [syslog] (dynamic)
Re: Unable to delete indices
This can happen if there are clients configured with the wrong date. Use the dashboards to find the clients sending the data for those days and verify the time on those machines.
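If the dashboards don't make the offender obvious, another way to find it is to pull the sender addresses straight out of a `_search` response for one of the reappearing indices. This is only a sketch; it assumes the documents carry a top-level `host` field, as the syslog documents do:

```shell
# Sketch: list the unique "host" values from an Elasticsearch _search response.
# Assumes documents have a top-level "host" field, as NLS syslog documents do.
extract_hosts() {
    # reads _search JSON on stdin, prints each distinct sending address once
    grep -o '"host":"[^"]*"' | cut -d'"' -f4 | sort -u
}

# Usage:
#   curl -s 'http://localhost:9200/logstash-2017.08.15/_search?pretty' | extract_hosts
```

Each address it prints is a machine whose clock (or log source) is worth checking.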
Re: Unable to delete indices
I also tried to delete with -XDELETE. Same result in the logs: it shows the index being deleted and then getting re-created at the same size.
Re: Unable to delete indices
How large are they? Have you used the dashboard to check for devices that may have the incorrect time?
Re: Unable to delete indices
These indices seem to have anywhere from 1 to 10 documents in them, but when I use the dashboard and narrow the time down to that day, nothing shows up. I think these indices at some point had a regular amount of data in them and now only show a handful of documents because of a delete attempt. I deleted a more recent index that had a few thousand documents, and when it came back it only had one document.
FWIW, I had one index with about 200M documents (~50GB) that I deleted earlier today. It has not come back.
Re: Unable to delete indices
To find the documents, try running:
curl -XGET 'http://localhost:9200/logstash-YYYY.MM. ... rch?pretty'
Again replacing the YYYY.MM.DD with the date of one of the reappearing indices.
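If you mainly want to see how many documents each suspect index holds, the `_cat/indices` endpoint gives a one-line-per-index summary. The helper below is a sketch that filters that output down to the near-empty indices; the threshold is an assumption you'd choose yourself:

```shell
# Sketch: print the name and document count of indices holding fewer than N
# documents, from `_cat/indices` output (columns: health status index pri rep
# docs.count docs.deleted store.size pri.store.size).
low_doc_indices() {
    # $1 is the document-count threshold; $3 is the index name, $6 the count
    awk -v max="$1" '$6 ~ /^[0-9]+$/ && $6+0 < max { print $3, $6 }'
}

# Usage:
#   curl -s 'http://localhost:9200/_cat/indices/logstash-*' | low_doc_indices 20
```

A cluster-wide list of near-empty `logstash-*` indices should line up with the set of indices that keep coming back.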
Re: Unable to delete indices
This is from the logstash-2017.08.15 index, which shows 1 document in it. Note that this cluster logs only ESX hosts, and all of our hosts' times are synchronized with NTP servers on UTC. This was the output:
{
"took" : 31,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"hits" : {
"total" : 1,
"max_score" : 1.0,
"hits" : [ {
"_index" : "logstash-2017.08.15",
"_type" : "syslog",
"_id" : "AWLanGbs0aBdBa72xvjB",
"_score" : 1.0,
"_source":{"message":"Section for VMware ESX, pid=66849, version=6.5.0, build=5969303, option=Release\n","@version":"1","@timestamp":"2017-08-15T06:06:51.019Z","type":"syslog","host":"172.20.75.241","priority":13,"timestamp8601":"2017-08-15T06:06:51.019Z","logsource":"xxxxx.xxxxx.xxx","program":"Rhttpproxy","severity":5,"facility":1,"timestamp":"2017-08-15T06:06:51.019Z","facility_label":"user-level","severity_label":"Notice"}
} ]
}
}
Re: Unable to delete indices
It looks like 172.20.75.241 has a bad time setting and is sending timestamps from the past. I would track it down and check the settings/logs on that machine.
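To confirm how far off a sender is, you can compare a document's `@timestamp` against the current time. A minimal sketch, assuming GNU `date` (standard on the CentOS/RHEL hosts Log Server typically runs on):

```shell
# Sketch: report how many whole days in the past an ISO-8601 timestamp is.
# Assumes GNU date (standard on CentOS/RHEL).
skew_days() {
    ts=$(date -u -d "$1" +%s)    # document timestamp, seconds since epoch
    now=$(date -u +%s)           # current time, seconds since epoch
    echo $(( (now - ts) / 86400 ))
}

# Usage: skew_days "2017-08-15T06:06:51.019Z"
```

A large positive number means the sender's clock, or the log source feeding it, is stamping events far in the past even if the machine's NTP sync looks healthy.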