
Question: Plesk backup, slow time to restore. "tar -z -f" uses only one core

solucionesuno

Regular Pleskian
I am restoring a backup and I can see that it uses the command tar -z -f.

It is using only one core, 100% out of the 800% (8 cores) available.

Is there any way to make it use all cores on restore?
 
We use pigz only for backups and tar for restores. It would take a lot of work for Plesk developers to use pigz for the restore too. Maybe we'll do it someday.
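As a manual workaround (not something Plesk's restore itself exposes), GNU tar's --use-compress-program option lets you pick the decompressor when unpacking a dump archive by hand. On extraction GNU tar appends -d to the given program, so with pigz installed you would pass --use-compress-program=pigz; the inflate step stays single-threaded, but pigz moves read, write and CRC work to helper threads. The sketch below builds and unpacks a stand-in archive with gzip so it is self-contained:

```shell
#!/bin/sh
# Sketch of a manual extraction with a swappable decompressor.
# All paths here are demo stand-ins, not real Plesk dump locations.
set -e
work=$(mktemp -d)

# Build a small demo archive (stand-in for a Plesk dump file).
echo "demo content" > "$work/index.html"
tar --use-compress-program=gzip -cf "$work/backup.tgz" -C "$work" index.html

# Extract it. GNU tar runs the given program with -d on extraction,
# so with pigz installed you would write --use-compress-program=pigz.
mkdir "$work/restore"
tar --use-compress-program=gzip -xf "$work/backup.tgz" -C "$work/restore"
cat "$work/restore/index.html"

rm -rf "$work"
```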
 
@IgorG I really can't believe your answer.

I think it is very important to be able to restore server backups as quickly as possible, and using only one core for this is really precarious.
 
I'd like to bring this topic up again. We ran into a similar issue yesterday. The problem does not seem to be the size of the restore data, but the number of files. If a subscription has a high number of files, in our case 550,000, tar -xf then takes "forever" to restore. On a 12-core machine we have been waiting almost 24 hours while tar has 100% of a CPU core available and is not hung, but is doing something. Yet the restore is still not completed.

I think this needs some consideration for improvement in Plesk, because it means that a disaster recovery cannot be done in a timely manner. In our case we are talking about a single subscription only. What will happen if a full restore of the whole system needs to be done? Customers won't wait a week or longer in case of a disaster. Luckily this is an unlikely scenario, but it is conceivable that some day it will be needed. And in such a case a full backup is more or less useless, because it takes much too long to restore.
 
man pigz
Decompression can't be parallelized...
I do not see how the restoration of large numbers of files can be sped up significantly with filesystem-based backups. The only possibility is creating raw dumps of block devices, but that requires at least mounting each webspace on its own partition.
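The raw-dump idea mentioned above could look roughly like this. A plain file stands in for the block device here (a real setup would use something like an LVM volume per webspace, which is an assumption, not Plesk's layout): the restore becomes one big sequential write, so the file count inside the filesystem no longer matters.

```shell
#!/bin/sh
# Sketch of a raw block-device dump and restore. The file disk.img is
# a stand-in for a per-webspace block device (hypothetical layout).
set -e
work=$(mktemp -d)
head -c 1048576 /dev/urandom > "$work/disk.img"   # stand-in device

# Backup: stream the device through a compressor. With pigz installed,
# replace gzip with pigz to compress on all cores.
dd if="$work/disk.img" bs=64k 2>/dev/null | gzip > "$work/disk.img.gz"

# Restore: one sequential write back, regardless of how many files
# the filesystem inside the image contains.
gzip -dc "$work/disk.img.gz" | dd of="$work/disk.restored" bs=64k 2>/dev/null
cmp "$work/disk.img" "$work/disk.restored" && echo "restore verified"

rm -rf "$work"
```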
 
Maybe it is possible to create a tree structure when compressing. Currently it seems that all files of a subscription are written into a single tar. Maybe it would be better to create separate tar files for each document root directory of a subscription and then create one big tar out of these separate tars. This would solve the problem of large file counts in subscriptions, as such subscriptions normally don't have all files in a single document root directory but distributed across several domains.
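The nested-archive idea above could be sketched like this (a hypothetical layout, not Plesk's actual format): the outer tar holds one inner tar per document root, and on restore the inner tars can be extracted in parallel, one process per core, instead of one serial tar -xf.

```shell
#!/bin/sh
# Sketch: one inner tar per document root, extracted in parallel.
# The directory names (site1, site2) are demo stand-ins.
set -e
work=$(mktemp -d)

# Fake subscription with two document roots.
for d in site1 site2; do
    mkdir -p "$work/src/$d"
    echo "$d" > "$work/src/$d/index.html"
    tar -czf "$work/$d.tar.gz" -C "$work/src" "$d"
done
tar -cf "$work/subscription.tar" -C "$work" site1.tar.gz site2.tar.gz

# Restore: unpack the outer tar, then run one tar per inner archive
# in parallel (-P sets the number of concurrent processes).
mkdir "$work/restore"
tar -xf "$work/subscription.tar" -C "$work/restore"
ls "$work/restore"/*.tar.gz | xargs -P 4 -I{} tar -xzf {} -C "$work/restore"
cat "$work/restore/site1/index.html"

rm -rf "$work"
```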
 
Two years later, this still looks like an issue to me: large subscriptions restore dreadfully slowly.
 
Decompressing should be a lot faster than compressing, so it should not be the limiting factor.
Is there a lot of iowait? Some SSDs are really bad at writing lots of small files. Which filesystem is used, and does it use a write cache?
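A crude way to test the small-file theory on a given disk and filesystem is to write the same volume of data once as many small files and once as a single file. A large gap between the two timings suggests per-file metadata overhead, not raw throughput, is what dominates restores of subscriptions with hundreds of thousands of files. A rough sketch (GNU date with nanosecond precision is assumed):

```shell
#!/bin/sh
# Crude small-file write benchmark: same total volume, many files vs one.
set -e
work=$(mktemp -d)
mkdir "$work/many"

# 500 files of 4 KiB each.
start=$(date +%s%N)
i=0
while [ $i -lt 500 ]; do
    head -c 4096 /dev/zero > "$work/many/f$i"
    i=$((i + 1))
done
many_ms=$(( ($(date +%s%N) - start) / 1000000 ))

# One file of the same total size (2000 KiB).
start=$(date +%s%N)
head -c $((500 * 4096)) /dev/zero > "$work/one.bin"
one_ms=$(( ($(date +%s%N) - start) / 1000000 ))

echo "500 x 4 KiB files: ${many_ms} ms; one 2000 KiB file: ${one_ms} ms"
rm -rf "$work"
```

While a slow restore is running, tools like vmstat or iostat show whether the CPU is mostly waiting on I/O (high iowait) or busy decompressing (high user time).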
 
When will this feature finally arrive? Data volumes are continuously increasing, and there are now many good alternatives available. I'd like to bring this topic back into focus. Restorations take particularly long with Nextcloud instances.
 