
Issue: Downloading backup fails

jojo

New Pleskian
Hi,

I'm using incremental backups on one of my domains; the backup currently sits at about 6 GB in size.

When I click on the green arrow to start the download, it reaches about 2.3 GB and then stops as if it had completed.

Opening the downloaded backup is obviously not possible because it is corrupted and incomplete.

How can I solve this issue? What settings do I need to change to prevent the download from ending prematurely?
 
Can anyone help? I cannot download any website backup because the connection always times out. How can I download the backups using FTP? I'm surprised there is no way to reliably download backups from Plesk...
 
I think that @Ales was very close to the solution. In this case, however, the reason for the failure is probably a server-side timeout. You need to increase the timeout values in the sw-cp-server settings so that long transactions can complete; otherwise the process will stop early. Try the solution from
Unable to download huge backup file in Plesk interface: 504 Gateway Time-out

You can set the timeouts to longer values than the article mentions, e.g.
Code:
fastcgi_read_timeout 6000;
fastcgi_send_timeout 6000;
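
For reference, here is a minimal sketch of how those directives might be applied, assuming sw-cp-server picks up extra nginx configuration from /etc/sw-cp-server/conf.d/ (the exact path and service name can differ per Plesk version and OS, so follow the linked article where it differs):
Bash:
# Write the longer timeouts into the panel web server configuration
# (file name and location are assumptions; adjust to match the KB article)
cat > /etc/sw-cp-server/conf.d/timeout.conf <<'EOF'
fastcgi_read_timeout 6000;
fastcgi_send_timeout 6000;
EOF

# Restart the panel web server so the new values take effect
service sw-cp-server restart

Then retry the download from the Backup Manager.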
 

Thanks, I tried that and increased both timeouts to 6000, but I'm still experiencing the same issue: the download got to about 1.3 GB out of 5.4 GB and then stopped.
HTTP is not really reliable for downloading large files; is there any way I can download the website backups using FTP? I'm using incremental backups.
 
Have you tried using a different browser? It should be possible to download large files through the browser when the timeouts are OK.
 
Hello, same problem here: downloading a 4 GB backup from the Plesk Backup Manager stops at ~900 MB to 1.2 GB. Even downloading the dump via SFTP from /var/lib/psa/dumps/domains/mydomain (backup_user-data_xxx.tgz) stops with the same problem. I'm a little worried, as you can understand. I tried different computers, but the problem is the same. Storing backups on Google Drive works.
The problem appeared suddenly in November; I didn't change the server configuration.
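
If both the panel download and a plain SFTP transfer keep stopping, a resumable copy over SSH may be worth a try. A minimal sketch, assuming root SSH access and the dump location mentioned above (host name, domain and destination path are placeholders):
Bash:
# --partial keeps incomplete files, so re-running the same command resumes the transfer
rsync -av --partial --progress \
  root@server.example.com:/var/lib/psa/dumps/domains/mydomain/ \
  /local/backups/mydomain/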
 
The solution needs to come from Plesk and not in the form of SSH commands. The simple fact for me is that I could download an 11 GB backup by clicking the green arrow in Onyx, and now, after updating to Obsidian, I can't: it stops at around 1.4 GB.

Again, this should not be a problem, period. It's a server, and Plesk is its interface. Dealing with backups is one of Plesk's core functions. A car that needs a mechanic to open the trunk is not much of a car.
 
Crazy that this still doesn't work. It's pointless to back up using Plesk if you can't download the backups afterwards.
 
Maybe an adjustment to the backup strategy can help? For example, backing up to an FTP storage space or to cloud storage?
 
I back up to Google Cloud Storage (S3-compatible), and I had to manually download the (incremental) backup file I wanted directly from there. Luckily I only needed a database dump, but if I had needed something that has to be rebuilt from the full + incremental files, I have no clue how I'd do it.
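
For anyone in the same situation, an S3-compatible client can usually fetch the file directly. A minimal sketch, assuming the AWS CLI with HMAC interoperability keys configured for the bucket (bucket and object names are placeholders):
Bash:
# List the backup objects, then copy the one you need
aws s3 ls s3://my-backup-bucket/ --endpoint-url=https://storage.googleapis.com
aws s3 cp s3://my-backup-bucket/backup_user-data_xxx.tgz . \
  --endpoint-url=https://storage.googleapis.com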

Oh and by the way, the command plesk sbin pmm-ras --get-dump-list doesn't return anything, while in the Plesk UI I can see all my full and incremental backups. Command output:

Bash:
# plesk sbin pmm-ras --get-dump-list
<?xml version="1.0" encoding="UTF-8"?>
<dump-list/>

Pretty broken stuff.
 
The Plesk backup archive format is proprietary. You can unpack it and descend into the directory tree to find the clients' files, but this structure is not meant for manual unpacking and restoring. It is made to work with the Plesk Backup Manager.

"get-dump-list" needs parameters and only lists locally available files. Available options: --type, --guid, --id, --name, --marker, --owner-guid.
 
The problem is that, as attested by people in this thread and others, the Backup Manager doesn't work with anything other than small files. So yeah, it's broken and unusable in those cases.

I copied the get-dump-list command from here, which, if I'm not mistaken, is your official support website, where it is shown without any parameters and was updated two days ago. Also, the command doesn't say anything about required parameters when executed.

I understand this is your job and everything, but maybe some self-criticism and effective internal reporting of issues would be better than trying to spin things here. Just saying, as a Plesk-loving customer of many years. And I'm not only talking about this specific thread.
 
If you believe you found a bug, please feel free to report it here: Reports

Not being able to download large files through a web browser is not a bug, though, because such limits exist in web server software and browsers, where they cannot be influenced by the server.
 
Servers can be configured (even for specific requests only) not to have any limit. Browsers don't have any limit either.

Even if those two things were true, there are other ways, like file streaming. There are plenty of examples of services that allow big file downloads: WeTransfer, MEGA, all the storage-as-a-service providers (Dropbox, Google Drive, OneDrive, etc.), and many more.
 