For some reason, some nodes went offline and put the cluster into red status. I recovered the cluster, but now it looks like one of the shards is 'missing' and its replica is stuck in the INITIALIZING state. Is this data pretty much lost? I discovered the issue on March 3rd.
# curl localhost:9200/_cat/shards | grep logstash-2016.03.03
logstash-2016.03.03 2 p STARTED 47762680 27.5gb 10.242.102.107 4521585a-88af-47c9-81e5-c4d13cffb148
logstash-2016.03.03 2 r STARTED 47762680 27.5gb 10.242.102.125 2db4ce89-4c01-4a30-9bc8-66e987b7d613
logstash-2016.03.03 0 p STARTED 47735942 27.5gb 10.242.102.124 c424515a-16b3-43f9-866e-19daedef8a63
logstash-2016.03.03 0 r STARTED 47735942 27.5gb 10.242.102.107 4521585a-88af-47c9-81e5-c4d13cffb148
logstash-2016.03.03 3 p STARTED 10.242.102.124 c424515a-16b3-43f9-866e-19daedef8a63
logstash-2016.03.03 3 r INITIALIZING 10.242.102.125 2db4ce89-4c01-4a30-9bc8-66e987b7d613
logstash-2016.03.03 1 r STARTED 47750367 27.5gb 10.242.102.124 c424515a-16b3-43f9-866e-19daedef8a63
logstash-2016.03.03 1 p STARTED 47750367 27.5gb 10.242.102.108 30ab2b2c-439f-4bcc-977d-7c0e9a90f3a5
logstash-2016.03.03 4 r STARTED 47776629 27.5gb 10.242.102.108 30ab2b2c-439f-4bcc-977d-7c0e9a90f3a5
logstash-2016.03.03 4 p STARTED 47776629 27.5gb 10.242.102.109 e63648a3-d912-4f5d-a867-1b99282a5e7c
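For what it's worth, the rows for shard 3 above show no document count or size, unlike every other shard. Cluster health should confirm the red status; a minimal check, assuming the default host and port:
# curl -s 'localhost:9200/_cluster/health?pretty'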
it looks like one of the shards is 'missing' and its replica is stuck in the INITIALIZING state. Is this data pretty much lost?
Possibly. If you don't mind losing the data, feel free to delete the associated index - no harm. However, if the data is potentially important, it's likely worth attempting to recover it by following the steps detailed in the above KB article. Thank you!
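For reference, the step such articles typically walk through is a manual allocation via the cluster reroute API. A rough sketch, assuming the 1.x-era syntax that matches the exception quoted further down, with a placeholder node name; allow_primary explicitly accepts data loss on that shard:
# curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
  "commands": [
    { "allocate": {
        "index": "logstash-2016.03.03",
        "shard": 3,
        "node": "<target-node-name>",
        "allow_primary": true
    } }
  ]
}'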
In this case, since primary shard #3 looks to be missing, I am not sure whether re-assigning the shard will help. It isn't showing any data for either the primary or the replica.
I ran the command anyway, but it gave me:
ElasticsearchIllegalArgumentException[[allocate] failed to find [logstash-2016.03.03][3] on the list of unassigned shards
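If I'm reading that error right, allocate only works on shards the master currently has on its unassigned list, and shard 3 is being reported as STARTED/INITIALIZING instead. A quick way to see what is actually unassigned, assuming the default host and port:
# curl -s 'localhost:9200/_cat/shards' | grep UNASSIGNED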
Looks like this might mean that those shards are 'missing'?
Looks like this might mean that those shards are 'missing'?
It sounds like it. Do you have any backups in place that would allow you to restore that index? Otherwise, we could try flushing your logs to disk and giving Elasticsearch a restart - but it sounds like the shard may be lost without one.
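For that flush-and-restart step, a minimal sketch, assuming the default host and port and a service-managed install (the exact restart command depends on how Elasticsearch was installed):
# curl -XPOST 'localhost:9200/_flush'
# sudo service elasticsearch restart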
There is a backup in place, but it says PARTIAL, so I am thinking the backup might've been missing that shard as well.
logstash-2016.03.03 PARTIAL logstash-2016.03.03
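The snapshot API should list which shards failed in that PARTIAL snapshot; a sketch with a placeholder repository name, since mine isn't shown above:
# curl -s 'localhost:9200/_snapshot/<repository>/logstash-2016.03.03?pretty'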
I guess this means the data is most likely lost.
That sounds correct. We could verify in a remote session if you'd prefer to take every action possible - but to me it sounds like the data is lost and the index will require deletion.
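If you do go the deletion route, it's a one-liner, assuming the default host and port:
# curl -XDELETE 'localhost:9200/logstash-2016.03.03'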