
tmpfs problem

ProWebS

Regular Pleskian
Hello,

Suddenly, when I ran df -h on the server, I saw the strange output below:

Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 270G 383G 42% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/spool

As you can see, the tmpfs mounts have been created twice... (it wasn't like that from the beginning).
Any ideas on how to fix this?
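
For anyone hitting the same thing: one way to confirm that these really are stacked tmpfs mounts (and not just a df display glitch) is to read /proc/mounts directly. A minimal sketch using only standard tools, with the mount points taken from the df output above:

# Each line in /proc/mounts is one live mount, so repeated lines mean stacked mounts
grep ' /usr/local/psa/handlers/' /proc/mounts

# Count how many tmpfs mounts sit on each handlers directory
grep ' /usr/local/psa/handlers/' /proc/mounts | awk '{print $2}' | sort | uniq -c

If a directory shows up more than once, a second tmpfs has been mounted on top of the first, which is what the duplicated df lines suggest.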
 
Igor,

I tried your suggestion and here is the output:

[root@~]# /usr/lib64/plesk-9.0/handlers-tmpfs stop

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info



[root@ ~]# /usr/lib64/plesk-9.0/handlers-tmpfs start

[root@ ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 24K 3.9G 1% /usr/local/psa/handlers/spool


Even when I ran /usr/lib64/plesk-9.0/handlers-tmpfs stop twice in a row, I could still see:

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 151G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 8.9M 3.9G 1% /usr/local/psa/handlers/info


Any suggestions?
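
It looks as though each run of handlers-tmpfs start mounts a fresh tmpfs on top of whatever is already there, while stop does not necessarily unmount every stacked layer. A rough clean-up sketch under that assumption; the directory names are taken from the df output above, and this should be run in a maintenance window because the mail handlers use these directories:

#!/bin/sh
# Unmount every stacked tmpfs layer on the Plesk handler directories,
# then let the Plesk script mount them once, cleanly.
for d in before-local before-queue before-remote info spool; do
    # Keep unmounting until no tmpfs is left on this directory
    while grep -q " /usr/local/psa/handlers/$d " /proc/mounts; do
        umount "/usr/local/psa/handlers/$d" || break   # stop if a mount is busy
    done
done
/usr/lib64/plesk-9.0/handlers-tmpfs start
df -h

After this, df -h should show exactly one tmpfs line per handlers directory.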
 
Unfortunately I don't have any tmpfs entries in /etc/fstab:

[root@~]# cat /etc/fstab
proc /proc proc defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
/dev/md0 none swap sw 0 0
/dev/md1 /boot ext3 defaults 0 0
/dev/md2 / ext3 defaults,usrquota 0 0
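
Since nothing in /etc/fstab mounts these directories, the tmpfs mounts must come from the Plesk script itself, most likely invoked at boot. A sketch for checking whether handlers-tmpfs is called from more than one startup script (paths are typical for a sysvinit-era CentOS layout, so adjust as needed):

# Look for startup scripts that invoke the tmpfs handler script
grep -rl 'handlers-tmpfs' /etc/init.d/ /etc/rc.d/ 2>/dev/null

# List the handler mounts that are currently in place
mount | grep '/usr/local/psa/handlers'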

After a server reboot, the output looks correct again:

[root@~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 688G 152G 502G 24% /
/dev/md1 2.0G 86M 1.9G 5% /boot
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-local
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-queue
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/before-remote
tmpfs 3.9G 4.9M 3.9G 1% /usr/local/psa/handlers/info
tmpfs 3.9G 0 3.9G 0% /usr/local/psa/handlers/spool

The problem now is that:

[root@~]# cat /proc/mdstat
Personalities : [raid1] [raid10] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
4198976 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
2104448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1]
726266432 blocks [2/1] [_U]

unused devices: <none>

One partition has dropped out of the RAID array, and I don't know whether the tmpfs problem caused this or the other way around.
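
md0 and md1 are mirrored across sda and sdb, but md2 currently lists only sdb3, so the missing member is most likely sda3. A cautious sketch for inspecting the array and, only if the disk itself turns out to be healthy, re-adding the partition (the device names are assumptions based on the mdstat output above, so verify them before running anything):

# Show the detailed state of the degraded array
mdadm --detail /dev/md2

# Check the disk that dropped out before trusting it again
# (smartctl comes from the smartmontools package)
smartctl -a /dev/sda

# Re-add the partition; the array will resync in the background
mdadm /dev/md2 --add /dev/sda3

# Watch the rebuild progress
cat /proc/mdstat

The tmpfs duplication and the degraded array are probably unrelated; a RAID member usually drops because of I/O errors, which dmesg or /var/log/messages should show.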
 