@lara-missk and
@ChristophRo
These are all basic discussions about disk management and optimal disk usage.
However, they do not even touch on the possibilities that are present in any (standard) Plesk instance.
For instance, it is possible - at least proven in an experimental setting - to take a dedicated server (read: always recommended) and add an almost unlimited number of cloud based managed disks.
Sure, that solution provides unlimited scalability, but it is also a solution that requires tweaks.
These tweaks are really necessary to remove or resolve common pitfalls with Plesk and/or the technical / hardware infrastructure in general.
This pitfall, pointed out by
@ChristophRo
The only thing that Plesk does not like, is if these volumes are mounted in something like /mnt/ssdvol02 and then /var/www/vhosts or /var/qmail is symlinked into that. So you should mount these directly as /var/www/vhosts (or /var/qmail, etc.)
can be (fairly easily) managed by tweaking the default Plesk config : when doing that, it will work like a charm ....... until the next Plesk update, since every update increases the probability that custom settings are overwritten.
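As a rough sketch of that direct-mount approach (assuming a new volume at the hypothetical device /dev/sdb1 with an ext4 filesystem - substitute your own values), one could do something like this :

```shell
#!/bin/sh
# Sketch only: /dev/sdb1 is a hypothetical device name for the new volume.
# Plesk dislikes a symlinked /var/www/vhosts, so the volume is mounted there directly.
NEW_DEV="/dev/sdb1"
TARGET="/var/www/vhosts"
TMP_MOUNT="/mnt/newvhosts"

# 1. Mount the new volume temporarily and copy the existing content across
#    (commented out here - to be run on the actual server, with services stopped):
# mkdir -p "$TMP_MOUNT" && mount "$NEW_DEV" "$TMP_MOUNT"
# rsync -aAX "$TARGET/" "$TMP_MOUNT/"
# umount "$TMP_MOUNT"

# 2. Make the direct mount reboot persistent via /etc/fstab
#    (nofail keeps the server booting even if the volume is missing):
FSTAB_LINE="$NEW_DEV $TARGET ext4 defaults,nofail 0 2"
echo "$FSTAB_LINE"
# echo "$FSTAB_LINE" >> /etc/fstab && mount "$TARGET"
```

In this way there is no symlink at all : Plesk simply sees /var/www/vhosts as a normal local directory.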
Other tweaks might be more interesting, because scalability is worth talking about - after all, that is what
@lara-missk is searching for.
Adding disks might be interesting, but rearranging existing disks is a standard procedure that is best avoided : even if all of the tedious work is completed successfully, one is still left with a highly fragmented disk that will deteriorate faster than other (new and clean) disks.
Stated differently, setting up your own partitions is a common pitfall - one not associated with Plesk itself, but with the choices made by Plesk admins.
Adding external disks might be an interesting alternative to creating a clean (new) Plesk instance on a clean (new) server.
Nevertheless, creating a (new) Plesk instance is recommended - it is always the better option.
In general, there might not be a lot of situations in which the aforementioned tweaks are necessary - they can often be avoided and should be avoided!
However, one can reach the limits of any dedicated server that can be purchased, or the costs of a big / huge dedicated server can become prohibitive.
In those cases, there are some interesting features associated with Linux systems (and even more with Windows systems - that is another story).
Let's return to the example - only an example! - of cloud based managed disks, in this case Azure based managed disks.
However, we will not opt for the expensive managed disks, but for the simple and very inexpensive file share!
It is possible to add a file share to any Linux OS, following the steps below :
- create a storage account, choose whichever storage account type you want (but not "cold" storage!)
- create a file share within the storage account and copy the (connection) settings
- just mount the file share to your local Linux OS (and make it reboot persistent)
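The mounting step above can be sketched as follows - note that the storage account name, share name, mount point and credentials file are all hypothetical placeholders :

```shell
#!/bin/sh
# Sketch only: storage account, share name, mount point and credentials file
# are hypothetical placeholders - substitute the values from your own setup.
STORAGE_ACCOUNT="mystorageacct"
SHARE_NAME="backups"
MOUNT_POINT="/mnt/azfileshare"
CRED_FILE="/etc/smbcredentials/${STORAGE_ACCOUNT}.cred"

# An Azure file share is mounted over SMB/CIFS; the fstab entry below makes
# the mount reboot persistent (nofail keeps the server booting if the share is unreachable).
FSTAB_LINE="//${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME} ${MOUNT_POINT} cifs nofail,credentials=${CRED_FILE},serverino,vers=3.1.1 0 0"
echo "${FSTAB_LINE}"

# On the actual server you would then run (commented out here):
# mkdir -p "${MOUNT_POINT}"
# echo "${FSTAB_LINE}" >> /etc/fstab
# mount "${MOUNT_POINT}"
```

The credentials file keeps the storage account key out of /etc/fstab, which is readable by everyone on the system.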
The above is oversimplified, of course - it is only intended to illustrate how simple the procedure is.
The specific file share will just behave like any other file share, even though it is somewhat slower .......
........ and that slowness can only spoil the fun if you use the file share for the wrong purposes.
In essence, you should (only) use the file share for the storage of static files.
For instance, one could offload all local backups to that specific file share and store them safely and remotely with retention policies and deletion policies!
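Such a retention policy on the offloaded backup share can be as simple as a scheduled cleanup script. A minimal sketch - the mount point, the file name pattern and the 30 day retention window are just example assumptions - demonstrated against a throwaway directory so it runs anywhere :

```shell
#!/bin/sh
# Sketch of a retention policy on an offloaded backup share.
# In real use the directory would be the mounted share (e.g. a hypothetical
# /mnt/azfileshare) and the script would run from cron.
prune_old_backups() {
    dir="$1"
    days="$2"
    # delete backup archives older than the retention window
    find "$dir" -type f -name '*.tar*' -mtime +"$days" -delete
}

# Demo against a throwaway directory, so the sketch is runnable anywhere:
demo="$(mktemp -d)"
touch -d '40 days ago' "$demo/old-backup.tar"   # outside the retention window
touch "$demo/fresh-backup.tar"                  # inside the retention window
prune_old_backups "$demo" 30
ls "$demo"
```

Run daily from cron, this keeps the share from silently filling up with backups that nobody will ever restore.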
And that is the essential nature of any tweak : solve the problems by looking at them a bit differently.
If you do not have sufficient space for databases or web files, why increase the space for those databases and web files?
One should first decrease the space used for other purposes ...... for instance, by setting full backup mode (read: not using the wasteful incremental backup mode) or even by offloading static backup files to an external disk (read: a cloud based disk, a local external disk, even a USB drive).
In most cases, offloading static files (like backups) will free up sufficient space for other storage operations.
Nevertheless, tweaks might always be necessary!
More importantly, a good and intelligent approach to create efficiency might also be necessary!
For instance, why create a mounted Azure based file share (read: less secure) for offloading backups, while there is also the option to use the FTPS backup option (read: more secure) to store the backups in the Azure storage account?
So, start with creating backups in the cloud and use the FTP option ....... and only store 1 or 2 local backups on the local server.
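To illustrate what such an FTPS transfer boils down to, here is a sketch of a manual upload with curl - host, user, directory and file name are all hypothetical placeholders, and this is not how Plesk performs the transfer internally, it just shows the idea of forcing TLS on the connection :

```shell
#!/bin/sh
# Sketch only: host, user, remote directory and file name are hypothetical
# placeholders - this just illustrates an explicit FTPS upload.
FTPS_HOST="backups.example.com"
FTPS_USER="backupuser"
REMOTE_DIR="plesk-backups"
LOCAL_FILE="daily-backup.tar"

# --ssl-reqd forces TLS on the FTP connection, so credentials and data
# are never sent in the clear (the essence of FTPS versus plain FTP).
UPLOAD_CMD="curl -T $LOCAL_FILE --user $FTPS_USER: --ssl-reqd ftp://$FTPS_HOST/$REMOTE_DIR/"
echo "$UPLOAD_CMD"
# On a real server: run the command above (curl will prompt for the password,
# or it can be appended after the colon in --user).
```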
Not enough space, even after offloading the backups?
Well, you can still use the mounted file share to serve some (static-only) sites from it.
This might create a throughput issue, but even that can be tweaked by using Nginx as proxy and allowing Nginx to cache the pages locally.
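A minimal sketch of such an Nginx caching proxy - the cache zone name, paths, server name and backend address are all assumptions for illustration, not an actual Plesk config :

```nginx
# Sketch only: zone name, paths and backend are hypothetical placeholders.
# Responses from the (slow) file share are cached on local disk, so repeated
# requests never touch the file share at all.
proxy_cache_path /var/cache/nginx/static levels=1:2 keys_zone=staticcache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    server_name static.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;           # backend serving from the file share
        proxy_cache staticcache;
        proxy_cache_valid 200 60m;                  # cache successful responses for an hour
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

After the first request per page, the throughput of the file share becomes largely irrelevant.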
But then again, it would also be very simple to use a CDN or Cloudflare (as a proxy) for specific sites - problem solved, but in a better way.
In summary, there might be many solutions and many tweaks, but all of them should start with the question : why do I need extra space or disks?
In the answer to that question often lies a solution that is quite simple.
The solution is often related to an unidentified issue in the setup of Plesk or of the sites (WordPress, arggh!!!).
Only when the root cause of the problem is known can an issue be solved properly.
@lara-missk ....... to be honest, I think that you should either start with a dedicated server (if your disks are really full with hundreds of sites and backups) OR start with an analysis of the reason why the disks are full (if you use WordPress + WooCommerce, then you are very likely to have sites with a lot of WP generated images, and those images take a lot of space - in this case, just try a CDN like Cloudflare first).
Kind regards........
PS Even though Plesk is scalable without limits, it is my experience that it is safer and more secure to use multiple (individual) dedicated servers as opposed to a highly complex scaled system. This experience is based upon the fact that it does not make any sense to provide solutions for "data usage expansion" if this type of expansion is not combated first : larger disks, larger servers ..... they all become clogged and slow if there is no solution for the root cause of the problem. For that reason, I do not recommend incremental backups. In some test situations, a WordPress site with a reasonable number of changing images grew to several TBs in backup size when using incremental backups. This is just to illustrate that one should be critical first, before solving anything!