
Issue: Backups very slow

Thera

Regular Pleskian
Hi,

For the last 7 days the backups have been running extremely slowly (both the internal ones and those to the external repository). What is happening? Backups used to finish within a few minutes... now, after 6 hours, I have only 30% of the backup. We had this problem once before, at the beginning of our hosting.

Operating system
Debian 9.13
Product
Plesk Obsidian
Version 18.0.41 Update #1, last updated: Feb 22, 2022 22:14:34
 
What backup settings are you using? You can find your backup settings at Tools & Settings > Backup Manager > Backup Settings
 
Please install the free "Backup Telemetry" extension. With it you can dive into the backup log tree and see exactly which part of the backup is slow. For example, is it a specific subscription? Is it the FTP transfer process? You'll easily find out with it.
 
Hi, I think I am experiencing the exact same issue.
It seems to have started with the last Plesk version and still persists after the recent update.

Does this screenshot help?
 

Attachments

  • PLSK.jpg (screenshot)
In the "Operation Tree" please descend into the details. You'll find the part that takes much time for the backup in the tree.
 
Hey Peter,

like this? I am sure it is the large size of the SQL database. But as you can see, in the previous version it took only about 2 hours.
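If anyone wants to verify that from the shell first, here is a rough sketch using Plesk's MySQL shortcut (assuming "plesk db" passes its arguments through to the mysql client, which it normally does):

  # List databases by size to see which one dominates the dump time
  plesk db -e "SELECT table_schema AS db,
                      ROUND(SUM(data_length + index_length)/1024/1024) AS size_mb
               FROM information_schema.tables
               GROUP BY table_schema
               ORDER BY size_mb DESC;"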

(screenshot: plsk2.jpg)
 
You can descend further down the tree, e.g. in the "6h" section.

I also recommend using a real computer for these things. I can't really imagine how to work through such issues effectively on a smartphone.
 
The compressing step seems to be the issue.
(screenshot: backup.png)


FYI: the compression method is set to "fast" and the priority values are all set to the maximum (7 and 19).
 
In this case it most likely takes very long because the number of files that need to be packed is extremely high. Carefully examine the directories for temporary files such as "session" files or "cache" files. Some users have millions (truly millions) of them because they never erase outdated session or cache files. Each file may be tiny, but the sheer number of files dramatically increases the compression time.
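For anyone who wants to check this from the shell, a rough sketch (the vhosts path is the Plesk default; adjust to your layout):

  # Count the files per subscription to spot runaway session/cache directories
  for d in /var/www/vhosts/*/; do
      printf '%10d  %s\n' "$(find "$d" -type f 2>/dev/null | wc -l)" "$d"
  done | sort -rn | head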
 
Update:
You led me in the right direction. I checked the hints in the operation tree further and discovered that the issue seems to be here:

process.command_line: '/opt/psa/admin/bin/backup-archiver' '--pack' '--source=/var/www/vhosts/system/XXX.de/logs' '--destination=domains/XXX.de/backup_logs_2203070040.tzst' '--session-path=/opt/psa/PMM/sessions/2022-03-07-003224.146' '--warnings=/tmp/bwsAFohxR' '--compression-method=zstd' '--compression-level=fastest'


I just checked /var/www/vhosts/system/XXX.de/logs and realized that access_ssl_log and error_log are almost 100 GB in size. WOW!

--> I will simply delete these files now. Is there a way to limit the size of log files or regularly delete them to avoid log files in such sizes?


Regards
 

Attachments

  • plesk_22.png (screenshot)
Normal files. Just ordinary files, e.g. the session directory of a shop application or the cache directory of a Smarty application. This is not a database problem. Just look into the file structure. Sometimes it is hidden deep in the document root of a website, e.g. in a subdirectory named "var" or similar, where people normally don't look.
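To hunt such directories down, here is a rough sketch that ranks directories by how many files they directly contain (it walks the whole vhosts tree, so it can take a while on a large server):

  # Rank directories under the vhosts tree by their direct file count
  find /var/www/vhosts -xdev -type f 2>/dev/null \
      | sed 's|/[^/]*$||' \
      | sort | uniq -c | sort -rn | head -20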
 
@Thera How is your CPU doing during the tar compression process? How much CPU power do you have, how much is utilized? Is the CPU temperature fine or do you find "throttle" messages in /var/log/messages?
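A quick way to check from the shell, as a sketch (log locations as typical on a Debian system):

  # One-shot snapshot of load and per-process CPU usage during the backup
  top -b -n 1 | head -15
  # Any thermal throttling messages in the kernel logs?
  grep -i throttl /var/log/messages /var/log/kern.log 2>/dev/null | tail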
 