On the "Total Elasticsearch Disk Usage graph, the amount of space used is being incorrectly reported. As shown in the attached images, the total amount of disk space being used by the /usr/local/nagioslogserver directory is only 1.1 GB, and the total amount of disk space being used by the entire node is roughly 14 GB. However, the dashboard graph shows approximately 15 GB of storage being used by Nagios Log Server.
How does the "Total Elasticsearch Disk Usage" graph on the login dashboard calculate disk space usage? Which directories does it count? Is there a way to reset or correct its data?
"Total Elasticsearch Disk Usage" graph showing wrong sizes.
"Total Elasticsearch Disk Usage" graph showing wrong sizes.
Re: "Total Elasticsearch Disk Usage" graph showing wrong siz
What is the output of these commands:
curl 'http://localhost:9200/_nodes/_all/stats/fs?pretty'
df -P /
Re: "Total Elasticsearch Disk Usage" graph showing wrong siz
curl 'http://localhost:9200/_nodes/_all/stats/fs?pretty'
{
"cluster_name" : "82796ca2-21fa-48eb-ad6d-a95ef6f66a26",
"nodes" : {
"3-oLNW4aRO65ZnEp2W9CKg" : {
"timestamp" : 1624280954948,
"name" : "0d4c66c4-eaa3-4888-9b25-b670788b0761",
"transport_address" : "inet[{REDACTED}]",
"host" : "{REDACTED}",
"ip" : [ "inet[{REDACTED}]", "NONE" ],
"attributes" : {
"max_local_storage_nodes" : "1"
},
"fs" : {
"timestamp" : 1624280954948,
"total" : {
"total_in_bytes" : 42003038208,
"free_in_bytes" : 26839478272,
"available_in_bytes" : 24675430400,
"disk_reads" : 470146,
"disk_writes" : 97170131,
"disk_io_op" : 97640277,
"disk_read_size_in_bytes" : 28432606208,
"disk_write_size_in_bytes" : 1234792869888,
"disk_io_size_in_bytes" : 1263225476096,
"disk_queue" : "0",
"disk_service_time" : "0"
},
"data" : [ {
"path" : "/usr/local/nagioslogserver/elasticsearch/data/82796ca2-21fa-48eb-ad6d-a95ef6f66a26/nodes/0",
"mount" : "/",
"dev" : "/dev/mapper/vg_os-lv_root",
"type" : "ext4",
"total_in_bytes" : 42003038208,
"free_in_bytes" : 26839478272,
"available_in_bytes" : 24675430400,
"disk_reads" : 470146,
"disk_writes" : 97170131,
"disk_io_op" : 97640277,
"disk_read_size_in_bytes" : 28432606208,
"disk_write_size_in_bytes" : 1234792869888,
"disk_io_size_in_bytes" : 1263225476096,
"disk_queue" : "0",
"disk_service_time" : "0"
} ]
}
},
"z90GN3DASoqh8ScTbqONLw" : {
"timestamp" : 1624280961964,
"name" : "95099231-f84c-4e5b-add1-469ac5210fd0",
"transport_address" : "inet[{REDACTED}]",
"host" : "{REDACTED}",
"ip" : [ "inet[{REDACTED}]", "NONE" ],
"attributes" : {
"max_local_storage_nodes" : "1"
},
"fs" : {
"timestamp" : 1624280961964,
"total" : {
"total_in_bytes" : 42003038208,
"free_in_bytes" : 25716817920,
"available_in_bytes" : 23552770048,
"disk_reads" : 562773,
"disk_writes" : 102328340,
"disk_io_op" : 102891113,
"disk_read_size_in_bytes" : 17206617088,
"disk_write_size_in_bytes" : 957972099072,
"disk_io_size_in_bytes" : 975178716160,
"disk_queue" : "0",
"disk_service_time" : "0"
},
"data" : [ {
"path" : "/usr/local/nagioslogserver/elasticsearch/data/82796ca2-21fa-48eb-ad6d-a95ef6f66a26/nodes/0",
"mount" : "/",
"dev" : "/dev/mapper/vg_os-lv_root",
"type" : "ext4",
"total_in_bytes" : 42003038208,
"free_in_bytes" : 25716817920,
"available_in_bytes" : 23552770048,
"disk_reads" : 562773,
"disk_writes" : 102328340,
"disk_io_op" : 102891113,
"disk_read_size_in_bytes" : 17206617088,
"disk_write_size_in_bytes" : 957972099072,
"disk_io_size_in_bytes" : 975178716160,
"disk_queue" : "0",
"disk_service_time" : "0"
} ]
}
}
}
}
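Note that these stats describe the entire "/" mount holding the data path, not just the Elasticsearch directory. If the graph charts total_in_bytes minus free_in_bytes from these stats (a guess based on the numbers, not a confirmed implementation detail), it would report usage for the whole mount:

# Used space on the mount, per the first node's fs stats:
echo $(( 42003038208 - 26839478272 ))   # 15163559936 bytes, about 15.2 GB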
df -P /
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/mapper/vg_os-lv_root 41018592 15905452 22999812 41% /
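df -P reports 1024-byte blocks, so the Used column converts to roughly the same figure (the snapshots were taken at slightly different times):

# Convert df's Used column (1024-byte blocks) to bytes:
echo $(( 15905452 * 1024 ))   # 16287182848 bytes, about 16.3 GB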
Re: "Total Elasticsearch Disk Usage" graph showing wrong siz
What version of Log Server are you running? You can see it in the bottom left-hand corner after logging into the web UI.
Re: "Total Elasticsearch Disk Usage" graph showing wrong siz
Nagios Log Server 2.1.8
Re: "Total Elasticsearch Disk Usage" graph showing wrong siz
Do you have any deleted files still open that could be holding onto the free space?
lsof | grep deleted
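If that shows Elasticsearch itself holding deleted files open, restarting the owning service releases the space. A sketch, assuming the stock service name:

# Restart Elasticsearch to release deleted-but-open files
# (service name assumed; it may differ on your install):
systemctl restart elasticsearch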