Log Crashes every few hours
Re: Log Crashes every few hours
Do you have an alternative output configuration? I see the message:
org.elasticsearch.indices.InvalidIndexNameException: [_export] Invalid index name [_export], must not start with '_'
Check the config under Configure > Global > Global Config > Show Outputs.
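The rule behind that exception is simply that Elasticsearch index names must not begin with an underscore (leading underscores are reserved for internal endpoints like `_cat` and `_cluster`). As a minimal illustration only (the function name is made up for this sketch, not part of any Nagios or Elasticsearch tool):

```shell
# Illustrative only: a shell check mirroring the rule from the error
# above -- Elasticsearch rejects index names that start with '_'.
is_valid_index_name() {
    case "$1" in
        _*) return 1 ;;  # names like '_export' are rejected
        *)  return 0 ;;
    esac
}

is_valid_index_name '_export'          || echo "_export is invalid"
is_valid_index_name 'logstash-2018.01' && echo "logstash-2018.01 is valid"
```

So any output stanza writing to an index such as `_export` will fail exactly as shown in the log message.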
Attached is also a script to gather a profile from the command line. Copy this to the machine and from the command line run:
chmod 755 profile.sh
./profile.sh
This will generate a file called system-profile.tar.gz in /tmp that you can PM me.
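For anyone reading along without the attachment: this is not the attached script, just a hedged sketch of what a profile gatherer like this typically collects (the file names and commands here are illustrative assumptions, not the actual script contents):

```shell
#!/bin/sh
# Sketch only -- NOT the attached profile.sh. Collects a few common
# diagnostics into /tmp/system-profile.tar.gz, the same file name the
# real script produces.
OUT=/tmp/system-profile
mkdir -p "$OUT"

uname -a > "$OUT/uname.txt" 2>&1          # kernel / OS details
df -h    > "$OUT/df.txt"    2>&1          # disk usage (relevant later in this thread)
free -m  > "$OUT/free.txt"  2>&1 || true  # memory usage, if 'free' exists

tar -czf /tmp/system-profile.tar.gz -C /tmp system-profile
```

The real script gathers considerably more (service status, Elasticsearch and Logstash logs, etc.), so use the attachment when available.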
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
Re: Log Crashes every few hours
I was unable to find the attachment.
But I looked, and I don't have any outputs:
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Tue, 30 Jan 2018 11:23:28 -0500
#
#
# Global outputs
#
#
# Local outputs
#
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Tue, 30 Jan 2018 11:23:49 -0500
#
#
# Global inputs
#
input {
    syslog {
        type => 'syslog'
        port => 5544
    }
    tcp {
        type => 'eventlog'
        port => 3515
        codec => json {
            charset => 'CP1252'
        }
    }
    tcp {
        type => 'import_raw'
        tags => 'import_raw'
        port => 2056
    }
    tcp {
        type => 'import_json'
        tags => 'import_json'
        port => 2057
        codec => json
    }
}
#
# Local inputs
#
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Tue, 30 Jan 2018 11:24:13 -0500
#
#
# Global filters
#
filter {
    if [program] == 'apache_access' {
        grok {
            match => [ 'message', '%{COMBINEDAPACHELOG}']
        }
        date {
            match => [ 'timestamp', 'dd/MMM/yyyy:HH:mm:ss Z', 'MMM dd HH:mm:ss', 'ISO8601' ]
        }
        mutate {
            replace => [ 'type', 'apache_access' ]
            convert => [ 'bytes', 'integer' ]
            convert => [ 'response', 'integer' ]
        }
    }
    if [program] == 'apache_error' {
        grok {
            match => [ 'message', '\[(?<timestamp>%{DAY:day} %{MONTH:month} %{MONTHDAY} %{TIME} %{YEAR})\] \[%{WORD:class}\] \[%{WORD:originator} %{IP:clientip}\] %{GREEDYDATA:errmsg}']
        }
        mutate {
            replace => [ 'type', 'apache_error' ]
        }
    }
}
#
# Local filters
#
Re: Log Crashes every few hours
The output file should contain at least:
Make this change and restart the elasticsearch service with:
service elasticsearch restart
#
# Logstash Configuration File
# Dynamically created by Nagios Log Server
#
# DO NOT EDIT THIS FILE. IT WILL BE OVERWRITTEN.
#
# Created Fri, 26 Jan 2018 15:42:36 -0500
#
#
# Required output for Nagios Log Server
#
output {
    elasticsearch {
        hosts => ['localhost']
        document_type => '%{type}'
        workers => 4
    }
}
#
# Global outputs
#
#
# Local outputs
#
Re: Log Crashes every few hours
It ran fine without problems after I made that last change you suggested, but last night I ran out of space and now I can't even log in.
Re: Log Crashes every few hours
Has the machine been given more space or were files removed to clear up space?
https://support.nagios.com/kb/article/n ... h-469.html
https://support.nagios.com/kb/article/n ... th-90.html
are helpful for resolving disk space issues and bringing a node/cluster back up after running out of space. The second one specifically deals with unassigned shards, which can prevent the nagioslogserver index (responsible for logins) from loading.
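A quick sketch of the kind of health checks those articles walk through, assuming Elasticsearch is listening on localhost:9200 (adjust the host/port for your node; the function name is just for this example):

```shell
#!/bin/sh
# Sketch of basic cluster diagnostics, assuming Elasticsearch on
# localhost:9200. Safe to run read-only; it only queries status.
ES="localhost:9200"

check_cluster() {
    if curl -s --max-time 2 "http://$ES" >/dev/null 2>&1; then
        # Overall status (green/yellow/red) and shard counts.
        curl -s "http://$ES/_cluster/health?pretty"
        # List only shards stuck in the UNASSIGNED state (column 4 of _cat/shards).
        curl -s "http://$ES/_cat/shards" | awk '$4 == "UNASSIGNED"'
    else
        echo "Elasticsearch is not reachable at $ES"
    fi
}

RESULT=$(check_cluster)
echo "$RESULT"
```

A red status plus UNASSIGNED shards on the nagioslogserver index would line up with the login failures described below.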
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
Re: Log Crashes every few hours
The cluster is red.
But like I said earlier, I can't even log in.
And yes, we increased the space.
Re: Log Crashes every few hours
Not able to login at the command line or the web UI? The articles provided have steps to troubleshoot from the command line.
If you cannot access the command line or the web UI then I would suggest a reboot of the machine.
Re: Log Crashes every few hours
Not able to log in from the GUI. I get this error:
The username specified does not exist.
I did follow the steps in the guide you sent me, but I am still unable to log in.
I also already tried rebooting the machine.
Re: Log Crashes every few hours
Attached is a script to gather a profile from the command line if the web UI isn't available. Copy this to the machine and from the command line run:
chmod 755 profile.sh
./profile.sh
This will generate a file called system-profile.tar.gz in /tmp. Please PM this as well as the recent logs in /var/log/elasticsearch/ and /var/log/logstash/.
Re: Log Crashes every few hours
I ended up uninstalling Log Server. I am back up and running.