
Question: Plesk backup slow to restore. "tar -z -f" uses only one core

solucionesuno

Regular Pleskian
I am restoring a backup and I can see that it uses the command tar -z -f.

It's using only one core, 100% out of the 800% (8 cores) available.

Is there any way to change it to use all cores on restore?
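
For reference, the single busy core is usually gzip itself: tar -z pipes the archive through gzip, which decompresses on one thread. GNU tar can be told to use pigz instead, although pigz's decompression is still essentially single-threaded (its extra threads only handle reading, writing and checksums), so the gain is limited. A rough manual sketch, assuming a hypothetical dump file name; this is not a Plesk setting:

    # let pigz handle the decompression and feed the result to tar
    pigz -dc /var/lib/psa/dumps/backup_example.tgz | tar -xf -
    # equivalent: tell GNU tar to call the external decompressor itself
    tar --use-compress-program=pigz -xf /var/lib/psa/dumps/backup_example.tgz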
 
We use pigz only for backups and tar for restore. It would take a lot of work for Plesk developers to use pigz for the restore too. Maybe we'll do it someday.
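
To illustrate the asymmetry: pigz splits the input into blocks and compresses them on all available cores by default, which is why the backup side can scale across cores while the restore side cannot. A minimal sketch with a hypothetical directory, not the exact commands Plesk runs:

    # compression parallelizes well; pigz uses every online core by default
    tar -I pigz -cf backup_example.tgz /var/www/vhosts/example.com
    # extracting the same archive still runs the inflate step on a single thread
    tar -I pigz -xf backup_example.tgz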
 
@IgorG I really can't believe your answer.

I think it's very important to be able to restore server backups as quickly as possible. Using only one core for this is really precarious.
 
I'd like to bring this topic up again. We ran into a similar issue yesterday. The problem does not seem to be the size of the restore data, but the number of files. If a subscription has a high number of files, in our case 550,000, tar -xf then takes "forever" to restore. On a 12-core machine we have been waiting almost 24 hours while tar has 100% of a CPU core available and is not hung, but doing something. Yet the restore is still not completed.

I think this needs some consideration for improvement in Plesk, because it means that a disaster recovery cannot be done in a timely manner. In our case we are talking about a single subscription only. What will happen if a full restore of the whole system needs to be done? Customers won't wait a week or longer in case of a disaster. Luckily this is an unlikely scenario, but it is conceivable that some day it will be needed. And in such a case a full backup is more or less useless, because it takes much too long to restore.
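
When a restore sits at 100% of one core for this long, it can help to confirm whether tar is really CPU-bound or mostly waiting on the disk. A few standard checks, with <pid> standing in for the tar process ID:

    pidstat -d -p <pid> 5       # growing kB_wr/s means it is still writing files out
    cat /proc/<pid>/io          # read_bytes / write_bytes should keep increasing
    ls -l /proc/<pid>/fd        # shows the file tar currently has open for writing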
 
man pigz:
"Decompression can't be parallelized..."
I do not see how the restoration of large numbers of files can be sped up significantly with filesystem-based backups. The only possibility is creating raw dumps of block devices, but that requires at least mounting each webspace on its own partition.
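
For completeness, the block-device approach mentioned above would typically look like an LVM snapshot streamed through dd and pigz; the volume names here are hypothetical, and this is not something the Plesk backup manager does for you:

    # snapshot the webspace volume so the dump is consistent
    lvcreate -s -n webspace1_snap -L 5G /dev/vg0/webspace1
    dd if=/dev/vg0/webspace1_snap bs=4M | pigz > webspace1.img.gz
    lvremove -y /dev/vg0/webspace1_snap
    # restoring writes the image back sequentially, avoiding per-file overhead entirely
    pigz -dc webspace1.img.gz | dd of=/dev/vg0/webspace1 bs=4M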
 
Maybe it is possible to create a tree structure when compressing. Currently it seems that all files of a subscription are written into a single tar. Maybe it would be better to create separate tar files for each document root directory of a subscription and then create one big tar out of these separate tars. This would solve the problem with large file numbers in subscriptions, as such subscriptions normally don't have all their files in a single document root directory but distributed across several domains.
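
A rough shell illustration of that idea, assuming hypothetical per-domain document roots under one subscription directory (Plesk's backup format does not actually work this way today):

    # pack each document root into its own tar
    cd /var/www/vhosts/example-subscription
    for d in */; do tar -cf "${d%/}.tar" "$d"; done
    # on restore, extract the per-directory tars in parallel, one tar per core
    printf '%s\n' *.tar | xargs -P 8 -n 1 tar -xf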
 
Two years later - it looks to me like this remains an issue; large subscriptions restore dreadfully slowly.
 
Decompressing should be a lot faster than compressing, so it should not be the limiting factor.
Is there a lot of iowait? Some SSDs are really bad at writing lots of small files. What filesystem is used, and does it use write cache?
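
A few quick checks for those questions, with device and path names as placeholders:

    iostat -x 5              # high %iowait or %util points at the disk rather than tar
    df -T /var/www/vhosts    # shows the filesystem type of the vhosts volume
    hdparm -W /dev/sda       # reports whether the drive's write cache is enabled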
 