cluster status is red
We have three instances. After migrating the data to another drive (because the old drive was almost full), the cluster status is red, and there are two initializing shards and two unassigned shards. How can I solve it?
-
scottwilkerson
- DevOps Engineer
- Posts: 19396
- Joined: Tue Nov 15, 2011 3:11 pm
- Location: Nagios Enterprises
- Contact:
Re: cluster status is red
How long have you had two "Initializing Shards"?
Can you run the following and post the results?
Code: Select all
curl -XGET http://localhost:9200/_cat/shards | grep INIT
Re: cluster status is red
It has lasted for a few days.
logstash-2015.01.28 2 p INITIALIZING 192.168.21.10 181841a1-d717-437c-bd36-6d4a8344abe6
logstash-2015.01.28 0 p INITIALIZING 192.168.20.10 33c658b5-db74-480c-bb71-32c430f77b00
It has been like this for a few days.
IP              Hostname                     Port  Load (1m, 5m, 15m)  CPU  Mem Used  Mem Free  Storage Total  Storage Avail  Elasticsearch  Logstash
192.168.20.10   nls3-tmh2.it.pccwglobal.com  9300  1.32, 1.20, 1.27    34%  62%       37%       249.8GB        92.9GB         running        running
192.168.21.10   nls1-hht5.it.pccwglobal.com  9300  0.68, 0.70, 0.70    20%  44%       55%       249.8GB        249.2GB       running        running
192.168.22.240  nls1-tmh2.it.pccwglobal.com  9300  0.26, 0.25, 0.30    18%  66%       33%       249.8GB        93.4GB        running        running
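For reference, the stuck shards can be tallied straight from `_cat/shards` output. A minimal sketch (the two INITIALIZING lines mirror the output above; the STARTED line is made up for illustration, and on a live node the input would come from `curl -s http://localhost:9200/_cat/shards` instead of a file):

```shell
# Sample _cat/shards output: index, shard, prirep, state, ip, node.
# The STARTED line is illustrative; the INITIALIZING lines mirror the thread.
cat > /tmp/shards.txt <<'EOF'
logstash-2015.01.28 2 p INITIALIZING 192.168.21.10 181841a1-d717-437c-bd36-6d4a8344abe6
logstash-2015.01.28 0 p INITIALIZING 192.168.20.10 33c658b5-db74-480c-bb71-32c430f77b00
logstash-2015.01.28 1 p STARTED 192.168.22.240 nls1-tmh2
EOF

# Count shards per index and state; any non-STARTED count explains
# a yellow or red cluster status.
awk '{count[$1 " " $4]++} END {for (k in count) print k, count[k]}' /tmp/shards.txt | sort
```

Any index that shows INITIALIZING or UNASSIGNED counts for days, as here, is the one to act on.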
scottwilkerson
Re: cluster status is red
I would recommend closing the index logstash-2015.01.28 by going to Administration -> Index Status and clicking the close icon next to logstash-2015.01.28.
If you need data from that index you can verify you have a backup of the index in Administration -> Backup & Maintenance, and if so, you can do the following:
Go to Administration -> Index Status and delete the index logstash-2015.01.28
Go to Administration -> Backup & Maintenance and click restore for index logstash-2015.01.28
Either of these should get you back to green status
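The close and delete actions in the UI correspond to the Elasticsearch indices API, which can be used directly if the UI is not handy. A sketch, assuming the cluster is on localhost:9200 (DRY_RUN=1 only prints the commands instead of sending them, since these are destructive):

```shell
ES=http://localhost:9200
INDEX=logstash-2015.01.28
DRY_RUN=1   # set to 0 to actually send the requests

# Print or send a request, depending on DRY_RUN.
es() {
  if [ "$DRY_RUN" = 1 ]; then
    echo "curl -X$1 $ES/$2"
  else
    curl -s -X"$1" "$ES/$2"
  fi
}

es POST "$INDEX/_close"   # equivalent of the close icon in Index Status
es DELETE "$INDEX"        # only if a backup exists to restore from
```

Only delete the index after confirming the backup in Administration -> Backup & Maintenance, exactly as described above.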
Re: cluster status is red
When I close it, it becomes green.
Does that mean we lost the data on that day?
Thanks
scottwilkerson
Re: cluster status is red
hlyeung wrote: When I close it, it becomes green.
This is normal.
hlyeung wrote: Does that mean we lost the data on that day?
No, you can restore the index from backup (provided you have one)
or
If you re-open the index, you will be able to query the 3 shards that are active, but the other 2 are likely missing.
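Re-opening can likewise be done via the indices open API. A sketch in the same dry-run style (the command is only printed here, not sent; the `_open` endpoint exists in Elasticsearch 1.x and later):

```shell
ES=http://localhost:9200
INDEX=logstash-2015.01.28
# Re-open the closed index; after this, the shards that are still
# active become queryable again.
echo "curl -XPOST $ES/$INDEX/_open"
```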