- Server operating system version: Debian 12.9
- Plesk version and microupdate number: 18.0.66
Plesk Obsidian still seems to watch only /dev/sd[x] drives when gathering the disk metrics shown in Grafana.
Example from one of my hosts where the NVMe drives are completely ignored, even though the kernel delivers metrics for them:
Bash:
# cat /proc/diskstats
259 0 nvme0n1 11244539 41601 1166709299 2534288 41946913 16638180 4583151873 83687872 0 3518900 86257601 38481 5 3539397224 35440 0 0
259 2 nvme0n1p1 179 31 6226 21 5 0 12 0 0 104 110 121 1 3808256 88 0 0
259 3 nvme0n1p2 1841 7136 76592 344 6461 5840 3533316 20607 0 17840 20951 0 0 0 0 0 0
259 4 nvme0n1p3 11242412 34434 1166621865 2533907 41940447 16632340 4579618545 83667263 0 3930740 86236522 38360 4 3535588968 35352 0 0
259 1 nvme1n1 4026567 23864 773971078 1252869 41930509 16654584 4583151873 129428984 0 2647800 130717818 38481 5 3539397224 35963 0 0
259 5 nvme1n1p1 66 0 1880 9 5 0 12 0 0 96 100 121 1 3808256 89 0 0
259 6 nvme1n1p2 293 449 7312 77 6457 5844 3533316 24109 0 20736 24186 0 0 0 0 0 0
259 7 nvme1n1p3 4026101 23415 773957270 1252765 41924047 16648740 4579618545 129404875 0 3092868 130693513 38360 4 3535588968 35873 0 0
8 0 sda 218182 10331 111016804 197647 371807 483832 599783508 3547384 0 806352 3862180 13439 0 1503703024 110924 36300 6223
8 16 sdb 268960 16134 125328741 180079 370538 485101 599783508 3473450 0 701432 3674407 13439 0 1503703024 15901 36294 4976
9 127 md127 9651 0 82816 1616 9905 0 3530920 63544 0 67356 65160 0 0 0 0 0 0
9 126 md126 513493 0 236344338 389736 836837 0 599727248 1088694412 0 973524 1089197016 13439 0 1503703024 112868 0 0
9 125 md125 208 0 7018 28 1 0 8 0 0 100 132 122 0 3808256 104 0 0
9 124 md124 15326264 0 1940578003 3810416 55721858 0 4564581652 3143142552 0 13145080 3146993460 38364 0 3535588968 40492 0 0
7 0 loop0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 1 loop1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 2 loop2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 3 loop3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 4 loop4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 5 loop5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 6 loop6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 7 loop7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Bash:
# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 nvme0n1p3[0] nvme1n1p3[1]
1864754496 blocks super 1.2 [2/2] [UU]
bitmap: 6/14 pages [24KB], 65536KB chunk
md125 : active raid1 nvme0n1p1[0] nvme1n1p1[1]
2094080 blocks super 1.2 [2/2] [UU]
md126 : active raid1 sda[1] sdb[0]
937560384 blocks super 1.2 [2/2] [UU]
bitmap: 0/7 pages [0KB], 65536KB chunk
md127 : active raid1 nvme1n1p2[1] nvme0n1p2[0]
8379392 blocks super 1.2 [2/2] [UU]
unused devices: <none>
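For illustration, here is one way to enumerate the whole-disk devices (including NVMe) straight from /proc/diskstats. This is only a sketch of what an extended device filter could look like; I don't know which pattern the Grafana integration actually uses internally:
Bash:
# Minimal sketch: list whole disks from /proc/diskstats,
# matching both classic sd[x] devices and nvmeXnY namespaces
# (partitions, md and loop devices are excluded by the anchored pattern).
awk '$3 ~ /^(sd[a-z]+|nvme[0-9]+n[0-9]+)$/ {print $3}' /proc/diskstats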
It would be neat if this could be fixed / expanded. Best regards