Question: Assistance for Linux hard disk expansion during operation

menhorti

New Pleskian
Server operating system version
Debian GNU/Linux 10 (buster)
Plesk version and microupdate number
18.0.52 Update #3
Hello everyone,

We are planning a major server move so that our domains finally end up in one Plesk instance on a single server, but now we are reaching our limits as far as storage is concerned.

For this we ordered a new hard drive, which I now need to integrate, correctly and without data loss, preferably during operation. So far two drives were installed in the server as a RAID, and the mount point was root.

I've already read through a number of explanations and guides. It seems to be possible, everyone does it differently, and none of these approaches is really simple or risk-free.

Has anyone here (please!) ever done something similar and can give me some advice or a procedure that should work?

I'm really grateful for any help. I do have some experience with Linux administration, but I've never done a storage expansion during operation on a live server with 360 domains. Keep your eyes open when looking for a job; I'll take that to heart next time. But I'm currently in the wrong company and MUST make it work somehow.

If further information is needed for assistance, I will be happy to provide it.

With laughing and crying eyes, :confused:

Tim
 

Attachments

  • server_lsblk.png
So you got two new devices nvme1 and nvme2?
Next time you have to set up a server, I recommend using LVM on top of the RAID and then creating the volumes within LVM.
In such a setup, you can simply extend the LVM volume group by adding more devices and then grow the individual volumes and filesystems.
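For illustration, a minimal sketch of that growth path, assuming a volume group named vg0 and an ext4 logical volume lv_data, with the new disks already assembled as /dev/md3 (all names are examples, not taken from this server):
Code:
# add the new RAID device to the existing volume group
pvcreate /dev/md3
vgextend vg0 /dev/md3

# grow the logical volume and its ext4 filesystem online
lvextend -L +500G /dev/vg0/lv_data
resize2fs /dev/vg0/lv_data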
 
Thanks for the replies!

I had already expected that this couldn't be done casually or entirely without downtime. I just took over the server and now have to see how I get along with it (->****), because the server(s) were never designed for a company of this size and not much thought was given to them.

Actually, only one disk was ordered, but since the server was previously set up as a RAID, was a second one (nvme1 + nvme2) possibly installed as well? I wish it were different, but I can't tell you exactly.

@Peter Debik Another problem is that the server is designed for 4 × 500 GB, so we are at the limit. I basically ask myself why a server with over 700 domains was designed for 2 TB of data, and then as a RAID on top of that, as if storage space weren't scarce enough already.

Is it fundamentally possible to run two disks on one mount point and still retain all of the data from disk 1? As I understand it, that should work, but then the RAID on disk 1 gets in the way, or am I wrong?

I also need a new employer; the company is driving me crazy.

Best regards,

Tim
 
Actually, only one disk was ordered, but since the server was previously set up as a RAID, was a second one (nvme1 + nvme2) possibly installed as well? I wish it were different, but I can't tell you exactly.
Use smartctl -a /dev/nvme1 or nvme smart-log /dev/nvme1 etc. on the devices and compare the power_on_hours; that should tell you which ones were recently installed :)
(if neither tool is installed, install smartmontools or nvme-cli)
Once you know which are the new ones, you can build the new RAID.
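A quick way to compare all drives in one go could look like this (assuming the controllers show up as /dev/nvme0 through /dev/nvme3; adjust to whatever lsblk shows on your server):
Code:
for dev in /dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3; do
    echo "== $dev =="
    smartctl -a "$dev" | grep -i "power on hours"
done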
Is it fundamentally possible to run two disks on one mount point and still retain all of the data from disk 1? As I understand it, that should work, but then the RAID on disk 1 gets in the way, or am I wrong?
Well, there are FUSE and bind mounts, but they are hard to get right.

How's the data distributed anyway? (try du -kc -d 4 /)
 
Hello @mow

First of all, thank you very much for the tip with smartctl. It was installed quickly, and now I also know that nvme1 and nvme2 are the two new ones (at least I assume so if there is no health status?!).

The du command gives me a long list of directories; what should I be looking for? The most interesting directories would probably be:
/var/www/vhosts/{domains}
/etc/domainkeys/{domains}
/var/qmail/mailnames/{domain mails}
/var/tmp/systemd-private-hashhashhash-plesk-php80-fpm_{domains}
/var/lib/mysql/{databases}

But what should I be looking for to get ahead or to spot a pattern? This looks like a perfectly ordinary Linux directory structure to me?

Please excuse my ignorance, I vow to get better...

Best regards,

Tim
 
Yes, nvme1n1 and nvme2n1 seem to be your new and empty drives. (according to screenshot)

Use the command "blkid" to check whether they have any filesystem on them (the command should return a list of all partitions/filesystems on your server, and these two disks should NOT be listed there).
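You can also query just the two new disks directly; if no filesystem signature is found on them, the command prints nothing:
Code:
blkid /dev/nvme1n1 /dev/nvme2n1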

Now comes the really "it depends" part....

Where is your data?
I recommend using the following command to find out:
Bash:
ncdu /

Now it's up to you to decide whether it suffices to move away only the data
- of the websites (/var/www/vhosts)
- of the email (/var/qmail)
- of the MySQL databases (/var/lib/mysql)
- or possibly of the Plesk backup directory (/var/lib/psa/dumps)

In general these are the only four directories on a Plesk server that really can and should accumulate a lot of data (everything else is peanuts).

If it suffices to move only one of the directories to the new disks (and, if ever needed, move another directory to another pair of new disks inserted into the server later), you can go ahead and just create an additional md RAID and use that.
Code:
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

Otherwise it's best to use md RAID with LVM on top of it, or just LVM with a RAID 1 configuration.
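For reference, a sketch of either variant, using the example names vg_data and lv_vhosts (these are assumptions, replace them with your own):
Code:
# Variant A: LVM on top of the md mirror created above
pvcreate /dev/md3
vgcreate vg_data /dev/md3
lvcreate -l 100%FREE -n lv_vhosts vg_data

# Variant B: plain LVM with a built-in RAID 1 volume
pvcreate /dev/nvme1n1 /dev/nvme2n1
vgcreate vg_data /dev/nvme1n1 /dev/nvme2n1
lvcreate --type raid1 -m 1 -L 450G -n lv_vhosts vg_data   # size is just an example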

After that you need to perform these steps, more or less (a rough command sketch follows below the list):

- create a filesystem (mkfs.ext4 /dev/md3 for example)
- mount it temporarily (mount /dev/md3 /mnt)
- do an initial copy/sync of said directory onto the new drive (I recommend "rsync -a --delete" for that)
- stop services (apache2, nginx, mysql, postfix, dovecot, proftpd)
- do a delta copy/sync of said directory onto the new drive (again "rsync -a --delete")
- rename the old directory
- create a new directory with the same name
- unmount the temporarily mounted new disk (umount /mnt)
- mount the new disk/filesystem on this directory (and adjust /etc/fstab so it auto-mounts on boot)
- start the services again (apache2, nginx, mysql, postfix, dovecot, proftpd)

If everything works again, you can then delete the old/renamed directory that still resides on your root (/) partition.
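As a rough sketch of those steps in one place, using /var/www/vhosts as the example directory and /dev/md3 from above (paths, mount options and service names are assumptions based on this thread, so double-check everything before running it on the live server):
Code:
mkfs.ext4 /dev/md3
mount /dev/md3 /mnt

# initial sync while the services are still running
rsync -a --delete /var/www/vhosts/ /mnt/

# short downtime window: stop services, then sync the delta
systemctl stop apache2 nginx mysql postfix dovecot proftpd
rsync -a --delete /var/www/vhosts/ /mnt/

# swap the directories and mount the new filesystem in place
mv /var/www/vhosts /var/www/vhosts.old
mkdir /var/www/vhosts
umount /mnt
mount /dev/md3 /var/www/vhosts
# (using UUID=... in fstab is more robust than the plain device name)
echo '/dev/md3 /var/www/vhosts ext4 defaults 0 2' >> /etc/fstab

systemctl start apache2 nginx mysql postfix dovecot proftpd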
 