
Plesk Backup Manager creates backups using only one CPU thread (No multithreading)

enerspace

Basic Pleskian

TITLE


Plesk Backup Manager creates backups using only one CPU thread (No multithreading)

PRODUCT, VERSION, OPERATING SYSTEM, ARCHITECTURE

Plesk Obsidian 18.0.72 Update #1 Web Pro Edition
AlmaLinux 8.10

PROBLEM DESCRIPTION

Currently, a backup that is several hundred GB in size can take several hours to be restored. The reason is the lack of multithreading support.

This must finally change in the year 2025.

There has already been a UserVoice post about this for over a year. But there really shouldn't have to be one in the first place:
https://plesk.uservoice.com/forums/184549-feature-suggestions/suggestions/49036487-add-backup-restore-sw-tar-multithreading

I therefore kindly ask that a multithreaded alternative finally be added.

STEPS TO REPRODUCE

Restore a backup

ACTUAL RESULT

One CPU Thread

EXPECTED RESULT

Multithreaded

ANY ADDITIONAL INFORMATION

(DID NOT ANSWER QUESTION)

YOUR EXPECTATIONS FROM PLESK SERVICE TEAM

Confirm bug
 
Thank you for the report. I can't qualify that as a product bug, but I will discuss the matter further with our team and get back to you with more details.
 
According to our team, there are two different processes involved in handling an archive: unpacking and decompression. Plesk uses:
  • tar for unpacking (extraction). There are no widely used tools that natively support true multithreaded unpacking of tar archives, because the tar format is inherently sequential.
  • zstd for content decompression. zstd supports multithreading, and Plesk makes use of it.
zstd is invoked by tar via tar options. With that said, could you please provide more details or your specific vision/suggestion? Thank you in advance.
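For context, a minimal sketch of the mechanism described above: tar's --use-compress-program option pipes the archive stream through an external filter, while tar itself still reads and creates entries sequentially. gzip stands in for zstd/pzstd here only so the sketch runs anywhere; the file and directory names are illustrative.

```shell
#!/bin/sh
# Demo of tar's --use-compress-program filter mechanism.
# gzip stands in for "zstd -T0" / pzstd; on extraction GNU tar
# re-invokes the same program with -d to decompress the stream,
# while tar itself still unpacks entries one by one.
set -e
work=$(mktemp -d)
mkdir -p "$work/src" "$work/dst"
echo hello > "$work/src/a.txt"

# Pack: the filter receives the raw tar stream on stdin/stdout.
tar --use-compress-program=gzip -cf "$work/a.tar.gz" -C "$work/src" .

# Unpack: tar runs "gzip -d" as the filter, then extracts sequentially.
tar --use-compress-program=gzip -xf "$work/a.tar.gz" -C "$work/dst"

restored=$(cat "$work/dst/a.txt")
echo "$restored"
rm -rf "$work"
```

Only the filter process (the compressor) can be parallelized this way; the tar process on either side of the pipe remains single-threaded.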
 
Hello,

thank you for your feedback. I have checked the restore process in detail and can confirm the following:

Currently, Plesk executes the restore with:

Bash:
/usr/lib64/plesk-9.0/sw-tar --use-compress-program=pzstd -f - -xv ...

While pzstd is used, it is called without the --threads option. As a result, decompression is still running single-threaded, which causes extremely long restore times for large .tzst archives.

To fully utilize modern multi-core CPUs, the call should be adjusted, for example:

Bash:
/usr/lib64/plesk-9.0/sw-tar --use-compress-program="pzstd --threads=$(nproc)" -f - -xv ...

Alternatively, it would be even better to make the number of threads configurable in the Plesk backend, so administrators can define how many CPU cores should be used for backup/restore operations.

This would significantly improve performance for large restores.
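A sketch of how such a setting could be wired in; PLESK_BACKUP_THREADS is a hypothetical name, not an existing Plesk option, and the pzstd flag follows its man page (-p / --processes):

```shell
#!/bin/sh
# Hypothetical: let an admin cap the threads used by the restore filter.
# PLESK_BACKUP_THREADS is an invented setting name; when unset, fall
# back to all available cores reported by nproc.
THREADS="${PLESK_BACKUP_THREADS:-$(nproc)}"

# The filter string tar would receive via --use-compress-program.
FILTER="pzstd -d -p $THREADS"
echo "$FILTER"
```

Defaulting to all cores keeps today's behavior, while the override lets administrators keep CPU headroom for production traffic during a restore.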

Thank you.




 
Thank you for your input. We forwarded it to our backup developers for further review. I will follow up with more details as soon as possible.
 
It looks strange. Where did you get that information?

The pzstd man page says the following:
Code:
   Parallel ZSTD options:
       -p, --processes
              #    : number of threads to use for (de)compression (default:<numcpus>)

This means Plesk uses as many threads as there are CPU cores available on the system. This is the optimal way.
 
Hello Plesk team,

thanks for your clarification regarding pzstd. You are right: pzstd defaults to using all CPU cores for (de)compression. The performance issue we observe during large restores is therefore not caused by the decompressor threads, but by the sequential nature of tar itself (header parsing + file creation happen in a single process) combined with heavy filesystem I/O on many small files.

Because of that, large .tzst restores remain slow even when pzstd runs with multiple threads.
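To make the tar-side bottleneck concrete, here is an illustrative sketch (the file count and temp paths are arbitrary demo values): even with a fast multithreaded decompressor, restoring many small files is bounded by the single tar process creating them one at a time.

```shell
#!/bin/sh
# Illustration: restore time on many small files is dominated by the
# single tar process creating each file in turn, not by decompression.
# 500 files and the mktemp paths are arbitrary demo values.
set -e
work=$(mktemp -d)
mkdir -p "$work/src" "$work/dst"
i=0
while [ "$i" -lt 500 ]; do
    echo "data $i" > "$work/src/f$i"
    i=$((i + 1))
done

tar -czf "$work/a.tgz" -C "$work/src" .

# One process parses headers and creates the 500 files sequentially;
# wrap this line in `time` to observe where the wall clock goes.
tar -xzf "$work/a.tgz" -C "$work/dst"

restored=$(ls "$work/dst" | wc -l | tr -d ' ')
echo "$restored"
rm -rf "$work"
```

Scaling the file count up (and switching the compressor to zstd/pzstd) shows that extraction time grows with the number of entries regardless of how many decompression threads are available.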

I have reported a problem, and I don't know myself how best to improve it. I can only offer some ideas.

To improve restore times, please consider these options:

A) dar (Disk ARchive)
  • Designed for backups, supports slices, resume, selective restore, robust integrity checks.
  • Leverages modern compression (e.g., zstd) and enables parallelized restore at the orchestrator level.
  • Example workflow (conceptual):
    • Create: dar -c <archive> -R <root> -s <size> -z<level>
    • Restore: dar -x <archive> -R <target>
  • Benefits: fast resume after interruption, parallel processing possible, good fit for very large trees with many small files.
B) Modern deduplicating engines (borg, restic)
  • Block-based repositories with multithreaded compression and encryption, resume by design, and fast selective restore.
  • Major gains for repeated backups and remote targets; strong integrity and pruning tools.
 