Issue: No NVMe Disks in Grafana

aFrI
New Pleskian

Server operating system version: Debian 12.9
Plesk version and microupdate number: 18.0.66

Plesk Obsidian still seems to watch only /dev/sd[x] drives when gathering metrics for Grafana.

Here is an example from one of my hosts where the NVMe drives are ignored entirely, even though the kernel delivers metrics for them:


Bash:
# cat /proc/diskstats
 259       0 nvme0n1 11244539 41601 1166709299 2534288 41946913 16638180 4583151873 83687872 0 3518900 86257601 38481 5 3539397224 35440 0 0
 259       2 nvme0n1p1 179 31 6226 21 5 0 12 0 0 104 110 121 1 3808256 88 0 0
 259       3 nvme0n1p2 1841 7136 76592 344 6461 5840 3533316 20607 0 17840 20951 0 0 0 0 0 0
 259       4 nvme0n1p3 11242412 34434 1166621865 2533907 41940447 16632340 4579618545 83667263 0 3930740 86236522 38360 4 3535588968 35352 0 0
 259       1 nvme1n1 4026567 23864 773971078 1252869 41930509 16654584 4583151873 129428984 0 2647800 130717818 38481 5 3539397224 35963 0 0
 259       5 nvme1n1p1 66 0 1880 9 5 0 12 0 0 96 100 121 1 3808256 89 0 0
 259       6 nvme1n1p2 293 449 7312 77 6457 5844 3533316 24109 0 20736 24186 0 0 0 0 0 0
 259       7 nvme1n1p3 4026101 23415 773957270 1252765 41924047 16648740 4579618545 129404875 0 3092868 130693513 38360 4 3535588968 35873 0 0
   8       0 sda 218182 10331 111016804 197647 371807 483832 599783508 3547384 0 806352 3862180 13439 0 1503703024 110924 36300 6223
   8      16 sdb 268960 16134 125328741 180079 370538 485101 599783508 3473450 0 701432 3674407 13439 0 1503703024 15901 36294 4976
   9     127 md127 9651 0 82816 1616 9905 0 3530920 63544 0 67356 65160 0 0 0 0 0 0
   9     126 md126 513493 0 236344338 389736 836837 0 599727248 1088694412 0 973524 1089197016 13439 0 1503703024 112868 0 0
   9     125 md125 208 0 7018 28 1 0 8 0 0 100 132 122 0 3808256 104 0 0
   9     124 md124 15326264 0 1940578003 3810416 55721858 0 4564581652 3143142552 0 13145080 3146993460 38364 0 3535588968 40492 0 0
   7       0 loop0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       1 loop1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       2 loop2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       3 loop3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       4 loop4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       5 loop5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       6 loop6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
   7       7 loop7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Bash:
# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 nvme0n1p3[0] nvme1n1p3[1]
      1864754496 blocks super 1.2 [2/2] [UU]
      bitmap: 6/14 pages [24KB], 65536KB chunk

md125 : active raid1 nvme0n1p1[0] nvme1n1p1[1]
      2094080 blocks super 1.2 [2/2] [UU]

md126 : active raid1 sda[1] sdb[0]
      937560384 blocks super 1.2 [2/2] [UU]
      bitmap: 0/7 pages [0KB], 65536KB chunk

md127 : active raid1 nvme1n1p2[1] nvme0n1p2[0]
      8379392 blocks super 1.2 [2/2] [UU]

unused devices: <none>
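
For a quick cross-check, here is a sketch (the patterns are only illustrative) of listing just the whole-disk devices the kernel exposes, to compare against what Monitoring shows:

Bash:
# List only whole disks (no partitions or loop devices) as the kernel sees them
lsblk -d -n -o NAME,TYPE | awk '$2 == "disk"'
# Whole-disk lines in /proc/diskstats: major 8 = SATA/SCSI, major 259 = NVMe namespaces
awk '$3 ~ /^(sd[a-z]+|nvme[0-9]+n[0-9]+)$/ {print $3}' /proc/diskstats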


It would be neat if this could be fixed / expanded. Best regards
 
To add, here is the structure of the disks/RAIDs/partitions:


Bash:
# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0 894.3G  0 disk
└─md126       9:126  0 894.1G  0 raid1 /backup-raid
sdb           8:16   0 894.3G  0 disk
└─md126       9:126  0 894.1G  0 raid1 /backup-raid
nvme0n1     259:0    0   1.7T  0 disk
├─nvme0n1p1 259:2    0     2G  0 part
│ └─md125     9:125  0     2G  0 raid1 /boot
├─nvme0n1p2 259:3    0     8G  0 part
│ └─md127     9:127  0     8G  0 raid1 [SWAP]
└─nvme0n1p3 259:4    0   1.7T  0 part
  └─md124     9:124  0   1.7T  0 raid1 /
nvme1n1     259:1    0   1.7T  0 disk
├─nvme1n1p1 259:5    0     2G  0 part
│ └─md125     9:125  0     2G  0 raid1 /boot
├─nvme1n1p2 259:6    0     8G  0 part
│ └─md127     9:127  0     8G  0 raid1 [SWAP]
└─nvme1n1p3 259:7    0   1.7T  0 part
  └─md124     9:124  0   1.7T  0 raid1 /

So the system was installed on partitions of the RAID1 built on the NVMe disks; the SATA drives were added afterwards as a second RAID1 mounted as "/backup-raid". The SATA drives show up in Monitoring, the NVMe ones don't.
 
Interesting...

Plesk uses collectd to collect metrics; for disks, it uses the following settings:
Code:
# more /etc/sw-collectd/conf.d/02disk.conf
#ATTENTION!
#
#DO NOT MODIFY THIS FILE BECAUSE IT WAS GENERATED AUTOMATICALLY,
#SO ALL YOUR CHANGES WILL BE LOST THE NEXT TIME THE FILE IS GENERATED.

LoadPlugin disk
LoadPlugin df
<Plugin df>
    ReportInodes false
    ReportByDevice false
    ValuesPercentage true

    IgnoreSelected true
    MountPoint "/sys/fs/cgroup"
    MountPoint "//run/user/.*/"
</Plugin>

There is nothing that disables NVMe disks.
If you list the /opt/psa/var/health/data/localhost/ directory, do you see anything about NVMe disks?

Could you please also show the output of df -Pl?
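
For reference, collectd's disk plugin also accepts an explicit device whitelist. A minimal sketch using a separate drop-in file follows (the filename and service name are assumptions, the generated 02disk.conf itself must not be edited, and whether the Plesk monitoring UI would pick up additional devices this way is a separate question):

Bash:
# Hypothetical drop-in; assumes the main config includes /etc/sw-collectd/conf.d/*.conf
cat > /etc/sw-collectd/conf.d/99-disk-whitelist.conf <<'EOF'
<Plugin disk>
    # With IgnoreSelected false, only devices matching these regexes are reported
    Disk "/^nvme/"
    Disk "/^sd/"
    Disk "/^md/"
    IgnoreSelected false
</Plugin>
EOF
# Service name assumed from the /etc/sw-collectd path
systemctl restart sw-collectd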
 
Here you go; the directories named after the device names do seem to exist:

Code:
# ls -la /opt/psa/var/health/data/localhost/
total 320
drwxr-xr-x 80 root root 4096 Jan 23 06:28 .
drwxr-xr-x  3 root root 4096 Jan 16 23:08 ..
drwxr-xr-x  2 root root 4096 Jan 16 23:17 cpu
drwxr-xr-x  2 root root 4096 Jan 16 23:57 df-backup-raid
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-boot
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-dev
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-dev-shm
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-root
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-run
drwxr-xr-x  2 root root 4096 Jan 16 23:08 df-run-lock
drwxr-xr-x  2 root root 4096 Jan 18 20:38 df-var-lib-docker-overlay2-022c83f462015f27a4103e57b4b1c3141b99756153cb9dcb42b44ad04e179161-merged
drwxr-xr-x  2 root root 4096 Jan 18 15:48 df-var-lib-docker-overlay2-1bf2052bcab467af8a0234d3b5f7bebd149c07345bda9a5dcc702c7dfd788502-merged
drwxr-xr-x  2 root root 4096 Jan 18 15:28 df-var-lib-docker-overlay2-223062a66ee47ad94d861a19c4094f8ad36341bc95c6517d0f411870daa4c9e4-merged
drwxr-xr-x  2 root root 4096 Jan 18 20:23 df-var-lib-docker-overlay2-279c8ce0062621f34711f1d4f3900bd73dd299c1c2d4d03d0b6df9d7f42aa108-merged
drwxr-xr-x  2 root root 4096 Jan 18 20:23 df-var-lib-docker-overlay2-2c2c7afe11496dcd42ee9fddd5cfdbf6cedfedbb89d98915f31f346241b63c7b-merged
drwxr-xr-x  2 root root 4096 Jan 18 20:23 df-var-lib-docker-overlay2-3a1d3353b51e9f5df25e2cb6e9ddf38b70ece389ec6b08b906a483a156cf79c8-merged
drwxr-xr-x  2 root root 4096 Jan 18 20:23 df-var-lib-docker-overlay2-61c34e67d1488f4c8242ca1199805fbc7ba3de4b533b8d94928b87a8d0515925-merged
drwxr-xr-x  2 root root 4096 Jan 20 05:43 df-var-lib-docker-overlay2-d90d3a91d3173772ea5685e8ecc2bcfe250b38a0ab3e8c37d23ba2ab6ba8f2ba-merged
drwxr-xr-x  2 root root 4096 Jan 20 05:28 df-var-lib-docker-overlay2-edd36c30a369ed10b1ab6d601590d23f8ab525f8fd4a397f82c94fbb1f281a9e-merged
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-md0
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-md1
drwxr-xr-x  2 root root 4096 Jan 18 10:48 disk-md124
drwxr-xr-x  2 root root 4096 Jan 17 00:20 disk-md125
drwxr-xr-x  2 root root 4096 Jan 18 12:23 disk-md126
drwxr-xr-x  2 root root 4096 Jan 16 23:52 disk-md127
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-md2
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-md3
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme0n1
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme0n1p1
drwxr-xr-x  2 root root 4096 Jan 18 21:28 disk-nvme0n1p2
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme0n1p3
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme1n1
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme1n1p1
drwxr-xr-x  2 root root 4096 Jan 18 22:58 disk-nvme1n1p2
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-nvme1n1p3
drwxr-xr-x  2 root root 4096 Jan 16 23:27 disk-sda
drwxr-xr-x  2 root root 4096 Jan 16 23:17 disk-sdb
drwxr-xr-x  2 root root 4096 Jan 18 15:48 interface-br-269d4440b908
drwxr-xr-x  2 root root 4096 Jan 18 20:38 interface-br-276db198e753
drwxr-xr-x  2 root root 4096 Jan 20 05:28 interface-br-28a597dafcdf
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-br-2bc826f17443
drwxr-xr-x  2 root root 4096 Jan 18 15:58 interface-br-4568855739f6
drwxr-xr-x  2 root root 4096 Jan 20 05:43 interface-br-4e5149458ece
drwxr-xr-x  2 root root 4096 Jan 18 15:28 interface-br-56ce9f6c47fa
drwxr-xr-x  2 root root 4096 Jan 18 16:08 interface-br-5c47abb828dd
drwxr-xr-x  2 root root 4096 Jan 18 16:13 interface-br-61fe7475c120
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-br-e7a1086b622d
drwxr-xr-x  2 root root 4096 Jan 18 16:03 interface-br-e8bdcf0ae676
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-br-e9e84c13717d
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-br-f4ed2e7f6fcc
drwxr-xr-x  2 root root 4096 Jan 17 19:10 interface-docker0
drwxr-xr-x  2 root root 4096 Jan 16 23:08 interface-enp35s0
drwxr-xr-x  2 root root 4096 Jan 16 23:08 interface-lo
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-veth0c01192
drwxr-xr-x  2 root root 4096 Jan 20 05:28 interface-veth10049ac
drwxr-xr-x  2 root root 4096 Jan 23 06:28 interface-veth16ed2a7
drwxr-xr-x  2 root root 4096 Jan 20 05:43 interface-veth1bf332e
drwxr-xr-x  2 root root 4096 Jan 18 15:28 interface-veth24a3c13
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-veth5788239
drwxr-xr-x  2 root root 4096 Jan 23 06:28 interface-veth5c45635
drwxr-xr-x  2 root root 4096 Jan 18 20:38 interface-veth9b8b7f4
drwxr-xr-x  2 root root 4096 Jan 23 06:28 interface-vethad1908c
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-vethc69ffc2
drwxr-xr-x  2 root root 4096 Jan 18 15:48 interface-vethc8f2d89
drwxr-xr-x  2 root root 4096 Jan 23 06:28 interface-vethe8d43b2
drwxr-xr-x  2 root root 4096 Jan 18 20:23 interface-vethf4a83dc
drwxr-xr-x  2 root root 4096 Jan 16 23:08 load
drwxr-xr-x  2 root root 4096 Jan 16 23:08 memory
drwxr-xr-x  2 root root 4096 Jan 16 23:08 processes
drwxr-xr-x  2 root root 4096 Jan 16 23:17 processes-Mail
drwxr-xr-x  2 root root 4096 Jan 16 23:17 processes-MySql
drwxr-xr-x  2 root root 4096 Jan 16 23:17 processes-Panel
drwxr-xr-x  2 root root 4096 Jan 26 13:58 processes-Web
drwxr-xr-x  2 root root 4096 Jan 16 23:17 processes-WebProxy
drwxr-xr-x  2 root root 4096 Jan 16 23:08 swap
drwxr-xr-x  2 root root 4096 Jan 16 23:08 sw_mem-Mail
drwxr-xr-x  2 root root 4096 Jan 16 23:08 sw_mem-MySql
drwxr-xr-x  2 root root 4096 Jan 16 23:08 sw_mem-Panel
drwxr-xr-x  2 root root 4096 Jan 16 23:08 sw_mem-Web
drwxr-xr-x  2 root root 4096 Jan 16 23:08 sw_mem-WebProxy

df -Pl

Code:
# df -Pl
Filesystem     1024-blocks      Used  Available Capacity Mounted on
udev              65882724         0   65882724       0% /dev
tmpfs             13181876      1284   13180592       1% /run
/dev/md124      1834360360 710716120 1030390132      41% /
tmpfs             65909364      1176   65908188       1% /dev/shm
tmpfs                 5120        20       5100       1% /run/lock
/dev/md125         2025320    121192    1799424       7% /boot
/dev/md126       921716772 316749068  558073304      37% /backup-raid
overlay         1834360360 710716120 1030390132      41% /var/lib/docker/overlay2/edd36c30a369ed10b1ab6d601590d23f8ab525f8fd4a397f82c94fbb1f281a9e/merged
overlay         1834360360 710716120 1030390132      41% /var/lib/docker/overlay2/d90d3a91d3173772ea5685e8ecc2bcfe250b38a0ab3e8c37d23ba2ab6ba8f2ba/merged
overlay         1834360360 710716120 1030390132      41% /var/lib/docker/overlay2/022c83f462015f27a4103e57b4b1c3141b99756153cb9dcb42b44ad04e179161/merged
overlay         1834360360 710716120 1030390132      41% /var/lib/docker/overlay2/279c8ce0062621f34711f1d4f3900bd73dd299c1c2d4d03d0b6df9d7f42aa108/merged
tmpfs             13181872         0   13181872       0% /run/user/0


One thing I just remembered that may somehow be playing into the issue, so I'd better mention it for full transparency: originally the md numbers were md0-md3 for the partitions on the NVMe softraid. They got reassigned to higher numbers after a reboot and after I changed the naming in mdadm.conf ("boot", "swap", "root", "backup"):

Code:
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=8a844ee4:995aa287:a96639bb:7655033f name=boot:0
ARRAY /dev/md/1  metadata=1.2 UUID=e679a3de:2789cc88:9635ac78:0f7c8d2a name=swap:1
ARRAY /dev/md/2  metadata=1.2 UUID=3a2fe63a:a5092d9f:4db37e99:949ef773 name=root:2
ARRAY /dev/md/3  metadata=1.2 UUID=f6bb5d02:a7cb12b6:5b50723d:7e54ce4c name=backup:3
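
If the renumbering is a suspect, the currently assembled arrays can be compared with what mdadm.conf declares (a diagnostic sketch only):

Bash:
# Arrays as currently assembled, with their active /dev/md* names
mdadm --detail --scan
# Array definitions declared in the config, for comparison
grep '^ARRAY' /etc/mdadm/mdadm.conf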

Edit: I just checked the folders of the NVMe disks - the RRD files do seem to exist:

Code:
# ls -la /opt/psa/var/health/data/localhost/disk-nvme0n1/
total 1836
drwxr-xr-x  2 root root   4096 Jan 16 23:17 .
drwxr-xr-x 80 root root   4096 Jan 23 06:28 ..
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_io_time.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_merged.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_octets.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_ops.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_time.rrd
-rw-r--r--  1 root root 169192 Jan 26 16:43 pending_operations.rrd

# ls -la /opt/psa/var/health/data/localhost/disk-nvme1n1/
total 1836
drwxr-xr-x  2 root root   4096 Jan 16 23:17 .
drwxr-xr-x 80 root root   4096 Jan 23 06:28 ..
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_io_time.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_merged.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_octets.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_ops.rrd
-rw-r--r--  1 root root 336320 Jan 26 16:43 disk_time.rrd
-rw-r--r--  1 root root 169192 Jan 26 16:43 pending_operations.rrd


Best regards
 
Also, judging by the creation time, they seem to have been created right when Plesk was set up on the server, and the modification time suggests that metrics are still going in - the question is why they are not showing up in the Plesk UI monitoring at all.

Code:
stat /opt/psa/var/health/data/localhost/disk-nvme1n1/disk_io_time.rrd
  File: /opt/psa/var/health/data/localhost/disk-nvme1n1/disk_io_time.rrd
  Size: 336320          Blocks: 664        IO Block: 4096   regular file
Device: 9,124   Inode: 88372002    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-01-17 00:12:28.319245810 +0100
Modify: 2025-01-26 16:43:08.832140968 +0100
Change: 2025-01-26 16:43:08.832140968 +0100
 Birth: 2025-01-16 23:17:56.731878589 +0100
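
One way to confirm that data points are actually landing in these RRDs (assuming rrdtool is installed, as it usually is alongside collectd):

Bash:
# Timestamp and values of the most recent update written to the RRD
rrdtool lastupdate /opt/psa/var/health/data/localhost/disk-nvme0n1/disk_octets.rrd
# Full metadata, including the data source names and last_update
rrdtool info /opt/psa/var/health/data/localhost/disk-nvme0n1/disk_octets.rrd | head -n 20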


When I initially set up Plesk on the freshly installed Debian 12.9, I ran a "plesk repair all -y" just in case, so it would check for issues right away - the only finding was that access to /etc/ssl wasn't working, which it autofixed. I may try another run just to check whether anything went wrong in between - I will follow up with the outcome.

Update on the plesk repair: only some minor issues with files in the www directories:


Code:
# plesk repair all -n

Checking the Plesk database using the native database server tools .. [OK]

Checking the structure of the Plesk database ........................ [OK]

Checking the consistency of the Plesk database ...................... [OK]

Checking system users ............................................... [OK]

Checking Linux system files ......................................... [OK]

Checking virtual hosts' file system

  There is incorrect ownership on some items in the WWW root directory
  of the domain '*************.com' .................................... [ERROR]
  To see more details, run the command in the verbose mode: plesk repair fs -verbose
  One or more files or directories in the root directory of the domain
  '*************.com' are either writable by anyone or neither readable
  nor writable by the owner. Such permissions are insecure and may
  result in or indicate a security breach ........................... [INFO]
  To see more details, run the command in the verbose mode: plesk repair fs -verbose

  There is incorrect ownership on some items in the WWW root directory
  of the domain 'test.*************.com' ............................... [ERROR]
  To see more details, run the command in the verbose mode: plesk repair fs -verbose
  One or more files or directories in the root directory of the domain
  'test.*************.com' are either writable by anyone or neither
  readable nor writable by the owner. Such permissions are insecure
  and may result in or indicate a security breach ................... [INFO]
  To see more details, run the command in the verbose mode: plesk repair fs -verbose

Checking Plesk version .............................................. [OK]

Checking Apache configuration ....................................... [OK]

Checking for custom configuration templates ......................... [OK]

Checking associations between domains and IP addresses .............. [OK]

Checking for corrupted reference between IP collections and
IPaddresses ......................................................... [OK]

Checking for links between APS applications and subscriptions ....... [OK]

Checking for nginx ULIMIT value ..................................... [OK]

Checking for extra configurations in database not owned by any object
................................................................... [OK]

Checking the status of the required Apache modules .................. [OK]

Checking the configuration of Apache modules ........................ [OK]

Checking web server configuration. Please wait ...................... [OK]

Checking the usage of PHP handlers .................................. [OK]

Checking for obsolete PHP-FPM configuration files ................... [OK]

Checking php-fpm configuration ...................................... [OK]

Repairing the mail server configuration ............................. [OK]

Checking MariaDB/MySQL database servers ............................. [OK]

Repair databases on available servers ............................... [OK]

Repair database users on available servers .......................... [OK]

Checking and restoring users for the domain eurobricks.com .......... [OK]

Error messages: 2; Warnings: 0; Errors resolved: 0



exit status 1
 
Thanks for taking a look and for already having a future solution available - looking forward to seeing it rolled out.

Thanks for the great direct support, appreciated!

Best regards
 