Caused by: org.elasticsearch.ElasticsearchException: org.elasticsearch.ElasticsearchIllegalStateException: Field data loading is forbidden on Home
Having exceptions thrown this frequently definitely isn't ideal and can cause unexpected behavior. It seems like there may be a bad query somewhere, so I'd like to get a copy of an NLS backup so that I can review the full configuration. Run:
Hello,
Thing is, we have multiple log streams to parse, and we want to use NLS for all our applications to have visibility. However, it may take us a while to figure out all possible variants of log patterns and field data changes. So, as a practice to avoid parse failures, we use the DATA type in our grok filters. Is there an easy way to avoid these errors, or some automated script which can read a log file and give us filter definitions?
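To illustrate the practice described above (field names here are hypothetical; the %{GREEDYDATA}/%{DATA} catch-all is the point), our filters look something like:

```
filter {
  grok {
    # Catch-all capture: never a parse failure, but no typed fields either.
    match => { "message" => "%{GREEDYDATA:raw_message}" }
    # A typed pattern would instead look like:
    # match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```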
In fact, I feel you should offer a wizard to configure delimiters, field names, types of logs from the same source, etc. Even better if you could introduce ML to populate data types automatically.
Anyway, for now I took a backup, and the backup size is 21M. The max size for an attachment is 20M. If I split it, it says this extension is not allowed. How do I upload these?
-rwxrwxrwx 1 root root 10000000 Feb 10 07:23 system-backup-00
-rwxrwxrwx 1 root root 10000000 Feb 10 07:23 system-backup-01
-rwxrwxrwx 1 root root 1045854 Feb 10 07:23 system-backup-02
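Parts like the listing above can be produced with `split` and recombined with `cat` on the receiving side. A self-contained sketch (the dummy file and its 21M size are stand-ins mirroring the listing; use the real backup file in practice):

```shell
# Create a 21M stand-in for the backup (replace with your actual file):
head -c 21000000 /dev/zero > system-backup
# Split into numbered parts under the 20M attachment limit:
split -d -b 10000000 system-backup system-backup-
ls -l system-backup-0*                       # -00, -01, -02 as in the listing
# On the receiving side, reassemble and verify byte-for-byte:
cat system-backup-0* > system-backup.joined
cmp system-backup system-backup.joined       # exit 0 means identical
```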
Hi,
Have sent all three parts. It did not allow me to upload all three parts in one message, so I have sent the attachments in 3 messages. Kindly check and share your feedback asap.
I will message you the names, but the error is caused by three global dashboards, each configured with a panel called 'Home'. If you edit this panel (on all three dashboards), you can see under the Panel tab that it is using a field called 'Home'. It should instead use 'Home.raw'.
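For context on why the .raw variant matters: in the Elasticsearch versions NLS is built on, string fields are typically mapped as an analyzed top-level field plus a not_analyzed raw sub-field, roughly like this (illustrative mapping, not taken from your system):

```
"Home": {
  "type": "string",
  "fields": {
    "raw": { "type": "string", "index": "not_analyzed" }
  }
}
```

Aggregating on the analyzed 'Home' field requires loading field data, which is exactly what the exception at the top of this thread forbids; aggregating on 'Home.raw' avoids that.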
We followed your recommendation and changed it from Home to Home.raw on all three dashboards. However, the status still went red, saying the system has problems. We also found that some logs are missed when reading the live feed. Could you help us understand why some of the logs are missed?
I have attached fresh backup files via PM. Please review and let us know what we can do further to fix the issues.
We received the backup, but it would be better to grab a fresh profile. Roughly when did it go red, and where exactly did you see the red status? Red can mean a problem with syncing shards, or it can mean a service isn't running, depending on where you see it.
As of May 25th, 2018, all communications with Nagios Enterprises and its employees are covered under our new Privacy Policy.
I have sent the profile in two parts. Kindly remove .zip from the file names.
It goes red a couple of hours after we fix it, then stays red and doesn't change. The services, when checked on the servers, are running and active normally. It's the GUI where it stays red, and we don't have an option to restart Elasticsearch and Logstash online.
Another thing: do you know of any known issues where some logs from a file are not pushed (for example, logs from a constantly rotating file)?
I am also facing the issue that we have too many files generated continuously, and the shipper script can't keep up with the speed of the input files. Is there another way to push files faster?
What address are you putting in the browser to access the NLS interface? Are you using a VIP or one of the IPs of the nodes? I've seen customers run into issues with the status appearing to go red as a result of using a VIP. You should instead access the interface using the actual IP of one of the nodes.
Regarding the shipper behavior, how exactly are you running this script? There are a couple of ways of doing this, which are described under the http://NLS_IP/nagioslogserver/configure/source/import section of NLS. If one shipper process isn't able to keep up, then you may be able to run multiple processes. For example:
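A minimal sketch of fanning files out to several shipper processes at once (the spool directory and the `cat` placeholder are assumptions; substitute the actual shipper invocation shown on the Import page of your install):

```shell
# Directory the rotated log files land in (placeholder path):
SPOOL_DIR=${SPOOL_DIR:-/tmp/nls-spool}
mkdir -p "$SPOOL_DIR"

# Ship up to 4 files concurrently instead of looping over them one at a time.
# Replace `cat "$1"` with your shipper pipeline for each file.
find "$SPOOL_DIR" -type f -name '*.log' -print0 |
  xargs -0 -r -n1 -P4 sh -c 'cat "$1"' _
```

The `-P4` flag caps the number of concurrent processes; raise it only as far as the nodes can absorb the extra input.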
Hello,
We are using the direct IP of a node in all places. Could you find the reason for the GUI reporting red from the system profile and backup files, and any other potential issues? Do you have a procedure for tuning/optimizing the system based on our current usage?
It's crucial for us to have a stable and reliable system and reporting.