Update from 2.1.4 to 2.1.6 stuck after Kibana upgrade

This support forum board is for support questions relating to Nagios Log Server, our solution for managing and monitoring critical log data.
cdienger
Support Tech
Posts: 5045
Joined: Tue Feb 07, 2017 11:26 am

Post by cdienger »

Can you PM me the profiles? Do you know which index causes the crash? I'd run through the same steps, but hold off on reopening the problem index if possible, at least over the weekend.
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
rferebee
Posts: 733
Joined: Wed Jul 11, 2018 11:37 am

Post by rferebee »

I don't think it's a specific index that's causing it, because yesterday it ended on the oldest indexes and today it ended on the week after the oldest indexes.

I'll send you the profiles.

Post by rferebee »

This morning I'm able to access the console, but I had a snapshot stall out last night, and even though logstash says it's running and active, all three nodes are barely collecting any logs.

Also, the environment is very unresponsive, more than usual.

Post by rferebee »

Also, the graphs on the homepage aren't working.

Post by cdienger »

Are the graphs throwing an error? Can you provide screenshots of these? I'd also like to get a fresh profile from the machines to see the state it is in now.

Post by rferebee »

No errors, just blank.

The 'disk usage' graph just started working again, but as you can see the 'Logs Per 15 Minutes' graph is blank.

Post by rferebee »

My PMs are not leaving the outbox again, not sure why that happens. Can you please use the FTP credentials I sent you on Friday to access the System Profiles? Thank you!

Post by rferebee »

I'm also not getting any results in my default dashboard. It's set up to show all logs from the last 15 minutes, and right now it's completely blank.

Post by rferebee »

My PMs don't seem to be going through for some reason.

In response to your last PM: I wasn't sure where to add the line you suggested, so I added it to the 'Index' section. I'm not sure whether it was adding that or simply restarting the elasticsearch service that did the trick, but the graphs are working again and it appears we're collecting logs.

It looks like my nodes keep trying to take the master role from each other. If there is a way to prevent this I would love to hear it. I think at this point that might be causing the biggest issue in my environment.

Post by cdienger »

Thanks for the update. I just responded to your PM. NLS by default allows all nodes to become master with this config in /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml:

    ...
    # Allow this node to be eligible as a master node (enabled by default):
    #
    # node.master: true
    ...

You can change this to:

    ...
    # Allow this node to be eligible as a master node (enabled by default):
    #
    node.master: false
    ...

or just add this to the bottom of the file:

    node.master: false

You'll need to do this on each machine in the cluster that you want to exclude from being master, then restart the elasticsearch service on it.
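For reference, a minimal sketch of that per-node change (the config path and service are the ones quoted above; the demo below works on a scratch copy of the file so it can be run safely anywhere):

```shell
# Demo on a scratch copy; on a real node you would edit
# /usr/local/nagioslogserver/elasticsearch/config/elasticsearch.yml
# directly and then restart the elasticsearch service.
CONF=$(mktemp)
printf '# node.master: true\n' > "$CONF"    # the commented-out default

# Append the override only if it is not already present:
grep -q '^node.master: false' "$CONF" || echo 'node.master: false' >> "$CONF"

grep '^node.master' "$CONF"                 # prints: node.master: false

# On a live node (default port 9200), you can then watch which node
# currently holds the master role with Elasticsearch's cat API:
#   curl -s localhost:9200/_cat/master
```

Re-running the `_cat/master` check after the restarts should show the master staying put instead of bouncing between nodes.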