Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Tue May 26, 2020 3:26 pm
by rferebee
I understand why you would want that to be able to happen, but is there any reason one node would try to take the master role from another node if it's already assigned? That's what I can't figure out.

It seems like I'll get the environment to a stable point, but then one node will suddenly elect itself master and drop out of the cluster, essentially forming its own.

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Tue May 26, 2020 4:45 pm
by cdienger
It usually happens when there's a communication failure with the current master, which can occur under heavy heap memory usage on the machine, for example. The behavior we addressed in the PM definitely impacted memory. I suspect the upgrade installed a default elasticsearch.yml, which removed the customizations we had made previously.
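One quick way to check whether the upgrade swapped in a default config is to look for the package manager's backup copies (a sketch, simulated in a temp directory here; on a real node the path would be /etc/elasticsearch):

```shell
# Sketch: RPM upgrades leave .rpmnew/.rpmsave files behind when a packaged
# config conflicts with a locally edited one. Simulated in a temp directory;
# the file contents below are just example placeholders.
tmp=$(mktemp -d)
printf 'node.master: true\n' > "$tmp/elasticsearch.yml"                    # "customized" file
printf 'cluster.name: elasticsearch\n' > "$tmp/elasticsearch.yml.rpmsave"  # saved old copy

# On a real node: find /etc/elasticsearch -name '*.rpmnew' -o -name '*.rpmsave'
find "$tmp" -name '*.rpmnew' -o -name '*.rpmsave'

# Diffing the active file against the saved copy shows what customization was lost:
diff "$tmp/elasticsearch.yml" "$tmp/elasticsearch.yml.rpmsave" || true
```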

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Wed May 27, 2020 9:31 am
by rferebee
We're looking good this morning. No issues overnight and the snapshot finished without issue.

However, we did lose the "Logs Per 15 Minutes" graph again. Not sure what's going on there. That has never happened prior to last week.

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Wed May 27, 2020 12:49 pm
by cdienger
Check the memory limit set in /etc/php.ini and increase it to 1024M if it isn't already. https://support.nagios.com/kb/article.php?id=132 has steps. Please send a fresh profile if there are still problems loading the graph after that.
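The edit itself amounts to something like the following (demonstrated on a temp copy here; on a real node you'd edit /etc/php.ini and restart Apache on each node):

```shell
# Sketch: raise PHP's memory_limit to 1024M. Demonstrated on a temp copy of
# php.ini; the starting value is just an example.
cfg=$(mktemp)
printf 'memory_limit = 128M\n' > "$cfg"

sed -i 's/^memory_limit.*/memory_limit = 1024M/' "$cfg"
grep '^memory_limit' "$cfg"
# then, on each node: systemctl restart httpd
```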

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Thu May 28, 2020 1:13 pm
by rferebee
I made the change you suggested and restarted Apache on each node. The graph is still not displaying. Also, I was made aware of a different issue that I was not seeing earlier in the week. The dashboards section is no longer displaying logs, and I cannot pull up any of my queries or custom dashboards.

It's strange because I can see we are collecting logs, but nothing else is displaying except for the 'Disk Usage - Current Index' and the 'Total Elasticsearch Disk Usage' graphs.

I have uploaded fresh system profiles from all three nodes to my SFTP that I shared with you via PM. Can you please take another look and let me know if you see anything odd?

Thank you!

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Thu May 28, 2020 1:40 pm
by rferebee
Maybe we're not collecting after all. The unique hosts count is showing 0.

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Thu May 28, 2020 4:40 pm
by cdienger
Data is coming in and getting stored. It looks like some queries are loading a lot of data and hitting the circuit breaker we adjusted previously, so it needs to be adjusted a bit more. Try setting this in the elasticsearch.yml on all of the machines and restart elasticsearch:

Code:

indices.breaker.fielddata.limit: 70%
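If it helps, applying it on each node without duplicating the line can look like this (demonstrated on a temp copy here; the real file would be /etc/elasticsearch/elasticsearch.yml, followed by a service restart):

```shell
# Sketch: append the breaker setting only if it isn't already present.
# Demonstrated on a temp copy; the existing content is an example placeholder.
cfg=$(mktemp)
printf 'cluster.name: nagioslogserver\n' > "$cfg"

grep -q '^indices.breaker.fielddata.limit' "$cfg" \
  || echo 'indices.breaker.fielddata.limit: 70%' >> "$cfg"

grep '^indices.breaker.fielddata.limit' "$cfg"
# then, on each node: systemctl restart elasticsearch
```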

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Fri May 29, 2020 10:56 am
by rferebee
The nodes attempted to pass the master role again and caused the environment to become unresponsive. I made the change you suggested to the elasticsearch.yml file and we're now only allowing one node to be eligible to have the master role. We'll see what happens in the future.
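In case it helps anyone else, the elasticsearch.yml change amounts to something like this (a sketch using the ES 2.x-era settings; exact values depend on your install):

```yaml
# On the one node allowed to hold the master role:
node.master: true
node.data: true

# On the other two nodes:
node.master: false
node.data: true
```

Worth noting: with a single master-eligible node, the cluster cannot elect a new master if that node goes down. An alternative that keeps all three nodes eligible while avoiding split-brain is setting discovery.zen.minimum_master_nodes: 2 on a three-node cluster.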

I think for now we can mark this issue resolved. If I have any issues going forward, I'll just create a new support thread. Thank you!

Re: Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

Posted: Fri May 29, 2020 11:16 am
by scottwilkerson
rferebee wrote:The nodes attempted to pass the master role again and caused the environment to become unresponsive. I made the change you suggested to the elasticsearch.yml file and we're now only allowing one node to be eligible to have the master role. We'll see what happens in the future.

I think for now we can mark this issue resolved. If I have any issues going forward, I'll just create a new support thread. Thank you!
Great!

Locking thread