Not receiving logs after 2.0 upgrade
Posted: Tue Nov 21, 2017 2:24 pm
It was working fine until the upgrade. Not sure what's going on; there are a lot of Java errors in the Elasticsearch logs. Log files attached.
Support for Nagios products and services
https://support.nagios.com/forum/
Code: Select all
[2017-11-21 06:23:33,051][DEBUG][action.bulk ] [a986f886-0c32-4cd2-9b56-95654f734914] [logstash-2017.11.21][1] failed to execute bulk item (index) index {[logstash-2017.11.21][eventlog][AV_eUaM61aUgoBl5yKfr], source[{ ... "ErrorCode":"0x0" ... }]}
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [ErrorCode]
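Judging by the source shown in the failed bulk item, the likely culprit is the ErrorCode value "0x0": if ErrorCode was dynamically mapped as a numeric type by an earlier event, Elasticsearch cannot parse the hex string "0x0" into that type and rejects the document. One possible workaround, assuming you can adjust the Logstash pipeline (this filter is a sketch, not taken from your config), is to force the field to a string before indexing:

```
filter {
  mutate {
    # Keep ErrorCode as a string so values like "0x0" don't collide
    # with a numeric mapping created by earlier events.
    convert => { "ErrorCode" => "string" }
  }
}
```

Note that an existing index's mapping can't be changed in place; a fix like this only takes full effect on the next day's index (or after a reindex).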
Code: Select all
curl -XGET 'http://localhost:9200/logstash-2017.11.21/_mapping'
Are you able to share the index mapping for the day the issue occurred? For example, if the issue occurred on May 11th:
Code: Select all
curl -XGET 'http://localhost:9200/logstash-2017.05.11/_mapping'
Can you also tell us which values/fields specifically you're referring to?
gsl_ops_practice wrote:
So it looks like the conversion to INT isn't happening properly.
%{INT} represents a grok pattern, not a field type (not explicitly, anyway). So if I write %{INT:some_field}, then some_field will match the INT grok pattern but will not necessarily be stored as an integer. If you want a field stored as a specific data type (we'll use long because it's easy), the pattern match in your grok filter would have to look like %{INT:some_field:long} to properly type the field at capture time.
gsl_ops_practice wrote:
As per your code I am not seeing any white spaces anymore and it all looks good. Until I try to display those values over time. When I do, I get this error in the GUI:
I assume this to mean that you are trying to "Sort By" a specific field in the GUI? Here's an example event:
curl -XGET 'http://localhost:9200/logstash-2017.05. ... rch?size=1'
https://pastebin.com/YV40958z
Code: Select all
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 16801,
"max_score": 1.0,
"hits": [{
"_index": "logstash-2017.05.11",
"_type": "eventlog",
"_id": "AVv0zkiDLoUjsjJ7dByf",
"_score": 1.0,
"_source": {
"EventTime": "2017-05-11 01:59:43",
"Hostname": "WIN-NFRUUIO4D46.DOMAIN.local",
"Keywords": -9223372036854775808,
"EventType": "WARNING",
"SeverityValue": 3,
"Severity": "WARNING",
"EventID": 322,
"SourceName": "Microsoft-Windows-TaskScheduler",
"ProviderGuid": "{DE7B24EA-73C8-4A09-985D-5BDADCFA9017}",
"Version": 0,
"Task": 322,
"OpcodeValue": 0,
"RecordNumber": 1208518,
"ActivityID": "{5D29117E-4827-4F9B-93BB-6CC917ECEB45}",
"ProcessID": 920,
"ThreadID": 111444,
"Channel": "Microsoft-Windows-TaskScheduler/Operational",
"Domain": "NT AUTHORITY",
"AccountName": "SYSTEM",
"UserID": "SYSTEM",
"AccountType": "User",
"Category": "Launch request ignored, instance already running",
"Opcode": "Info",
"TaskName": "\\test-nrds",
"TaskInstanceId": "{5D29117E-4827-4F9B-93BB-6CC917ECEB45}",
"EventReceivedTime": "2017-05-11 01:59:45",
"SourceModuleName": "eventlog",
"SourceModuleType": "im_msvistalog",
"message": "Task Scheduler did not launch task \"\\test-nrds\" because instance \"{5D29117E-4827-4F9B-93BB-6CC917ECEB45}\" of the same task is already running.",
"@version": "1",
"@timestamp": "2017-05-11T00:00:11.394Z",
"host": "192.168.67.99",
"type": "eventlog"
}
}
]
}
}
Let's focus on the RecordNumber field. Looking at the mapping (think "schema") for the eventlog type, we can see that this field is mapped as a long:
curl -XGET 'http://localhost:9200/logstash-2017.05. ... g/_mapping'
https://pastebin.com/ygFdPLjE (Line 1078)
Code: Select all
"RecordNumber": {
"type": "long"
},
And I can consequently sort by this value in the GUI:
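Outside the GUI, a field mapped as long can be sorted on directly in a search request as well; a sketch of the query body (the index and field names follow the example event above):

```
{
  "size": 1,
  "sort": [
    { "RecordNumber": { "order": "desc" } }
  ]
}
```

If RecordNumber had been mapped as a string instead, a sort like this would either fail or order lexicographically, which matches the GUI error you were seeing.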