
Issue 18.0.30 update broke FTPS backups

KirkM

Regular Pleskian
Remote FTP backups had been working perfectly until the panel updated to 18.0.30. Now all backups, domain and full server alike, are throwing error messages:

FULL SERVER:
The following error occurred during the scheduled backup process:

Export error: Size of volume backup_2009170035.tar 20805223935 does not match expected one 20806575198. The remote backup may not be restored.; Unable to validate the remote backup. It may not be restored. Error: Failed to exec pmm-ras: Exit code: 119: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version

DOMAINS:
Export error: Size of volume backup_domain.com_2009170045.tar 1773085695 does not match expected one 1773713659. The remote backup may not be restored.
Unable to validate the remote backup. It may not be restored. Error: Failed to exec pmm-ras: Exit code: 119: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version

I have only a couple hundred MB used on a 2 TB remote backup drive, so it isn't out of space. It doesn't appear to be a connection issue with that drive either, as the panel does connect and write the backup files. *EDIT: I just noticed that the file size listed on the remote backup is .1 GB bigger than the file size shown in the Plesk panel for the same backup.* The backups are just no good. I am pretty sure 18.0.30 broke something, as it also created the error mentioned in the thread "Resolved - Checking the consistency of the Plesk database: Inconsistency in the table 'smb_roleServicePermissions'".

That issue was fixed using the solution listed in the thread. However, I can't seem to fix the backup issue caused by 18.0.30.
 
Actually, the error message that you are describing existed in 18.0.28 and earlier; it was fixed in 18.0.29.

I cannot confirm the error after updates of several hosts to 18.0.30. No issues here.

Please look into the psa.BackupsScheduled table and verify that your backup plan is only listed once there. If it is listed twice, the backup may actually run twice at the same time, leading to corruption of the files.
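A quick way to check this from the command line (a minimal sketch; plesk db opens the psa database, but the table's exact column layout can differ between Plesk versions):

Code:
# List all scheduled backup tasks stored in the psa database.
# Each backup plan should appear exactly once here.
plesk db "SELECT * FROM BackupsScheduled\G"

If a plan shows up twice, removing the duplicate schedule (or recreating it in the panel) should stop the double runs.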

Also please make sure that the backup space you are storing your backups in is only used for a single backup set. If other backup jobs, either on the same server or on a different server, store backups into the same location, the backups will become invalid, because they become a mixture of different systems.
 
Thank you for your reply, Peter. I had read about this on other threads and did check the psa.BackupsScheduled table, and there is only one of each scheduled backup listed. I did observe the full server backup the other day writing the backup and then starting a second one before the first was done. It only happened once, and I have been watching both the panel and the FTP storage in real time during backups and it hasn't happened again.

Also, it is ALL backups, full server or just domains, scheduled or manual. Every single one has that error. I have cleared all of the previous backups and tried again, and that didn't work. As I mentioned, I updated the panel, the next backup ran a couple of hours later and threw the error, and every backup since has done the same. I have had this setup for months and it worked perfectly until the update to 18.0.30.

Also please make sure that the backup space you are storing your backups in is only used for a single backup set. If other backup jobs, either on the same server or on a different server, store backups into the same location, the backups will become invalid, because they become a mixture of different systems.
I am not sure I understand exactly what you mean here. On my IONOS HiDrive, I have the server backup and all of the individual website backups going to their own separate subdirectories. Only one server is backing up to this remote FTP. Again, all was working perfectly for months until the update.

One other thing I noticed: although the Plesk panel displays all of the backups, both the older valid ones and the new invalid ones (.1 GB LARGER than the earlier valid backups), it is not honoring the 4-backup limit. I am assuming that is because of the missing archive metadata, so it doesn't recognize them as legitimate backups.
 
If all backups are in different directories, you're fine.

Could it be that you have checked the "Create a multivolume backup" checkbox in the scheduled backup settings? If so, what is the reason for that?
 
Could it be that you have checked the "Create a multivolume backup" checkbox in the scheduled backup settings? If so, what is the reason for that?

No, I never have that checked. All backups are full backups. The error occurs with both manual and scheduled backups - always full backups with no multivolume. I am going to try completely wiping the scheduled backups and the remote FTP server settings to see if something there went bad. I don't know why it would suddenly break when it worked fine before the panel update, but I'm running out of ideas.
 
It is definitely an FTPS issue regarding the archive metadata.

- Backup to server PSA dumps is fine and restores correctly
- Backup to FTPS on IONOS HiDrive shows an error after backup, and the file that is written is slightly larger than a valid dumps backup:
Export error: Size of volume backup_domain.com_2009170045.tar 1773085695 does not match expected one 1773713659. The remote backup may not be restored.
Unable to validate the remote backup. It may not be restored. Error: Failed to exec pmm-ras: Exit code: 119: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version
And attempts to restore the remote file shows this error:
Unable to import file as dump: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version
What I find strange is that the remote backup files are LARGER than a valid dump on the server. You would think they would be SMALLER if metadata were missing. It appears that something is being added or corrupted while sending to the remote FTP storage, and that is what is messing up the archive metadata.
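One way to poke at this is to pull the suspect volume back down with a plain FTPS client and check whether it is even a structurally valid tar, and how its size compares to the "expected" value in the error (a rough sketch; the FTPS host, path and user are placeholders):

Code:
# Download the suspect volume over explicit FTPS (--ssl-reqd).
curl --ssl-reqd --user <username> \
  "ftp://<hidrive-host>/<backup-path>/backup_domain.com_2009170045.tar" \
  -o /tmp/remote_copy.tar

# Check its actual size and whether tar can read it end to end.
stat -c %s /tmp/remote_copy.tar
tar -tf /tmp/remote_copy.tar > /dev/null && echo "tar structure OK"

If tar reads it cleanly but the size still differs from the "expected" value, the mismatch most likely happened during the FTPS transfer rather than while the archive was created.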
 
New test:
- Create a local backup to psa dumps
- Download it to a local computer
- Upload it to IONOS HiDrive
- Rename it to fit the format for remote FTP backups
- Restore from the remote FTP
This works.
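For reference, the upload leg of that test can also be done straight from the server over explicit FTPS with curl, which keeps Plesk's own FTPS client out of the loop entirely (host, path and user below are placeholders):

Code:
# Upload a known-good local dump to the FTPS storage with curl.
curl --ssl-reqd --ftp-create-dirs --user <username> \
  -T /var/lib/psa/dumps/backup_2009170035.tar \
  "ftp://<hidrive-host>/<backup-path>/"
# List the remote directory afterwards to compare the uploaded size
# against the original file.
curl --ssl-reqd --user <username> "ftp://<hidrive-host>/<backup-path>/"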
[UPDATE]
Backing up to the remote FTP storage with FTPS disabled and passive mode on works perfectly.

18.0.30 has broken something in the FTPS export. As I keep mentioning, I run multiple scheduled site backups during the day and a full server backup at night, and everything was fine for months. I don't have automatic panel updates enabled, so I manually triggered the update when I saw 18.0.30 was available. The next backup ran about an hour later and failed, along with every one since.
 
If FTP works but FTPS does not, it is very likely a TLS version or cipher issue. The FTP storage space might not accept a certain TLS version that your server is trying to establish. Which TLS versions does it support?
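If you want to check this from the Plesk server itself, one rough way is to probe the FTPS endpoint with openssl (the host is a placeholder, and the -tls1_2/-tls1_3 options require a reasonably recent OpenSSL):

Code:
# Test which TLS versions the explicit-FTPS endpoint will negotiate on port 21.
openssl s_client -connect <hidrive-host>:21 -starttls ftp -tls1_2 < /dev/null
openssl s_client -connect <hidrive-host>:21 -starttls ftp -tls1_3 < /dev/null
# A successful handshake prints the negotiated protocol and cipher;
# a refused version ends with a handshake failure.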
 
It supports TLS 1.2 and 1.3. It does NOT support TLS 1.1. As I said, it worked just fine over FTPS for months and broke the day of the 18.0.30 update, so unless that update changed Plesk to only use TLS 1.1 (which I find unlikely), the TLS version isn't the problem on the remote backup server. I have run the Plesk repair commands multiple times with no success. Is there a way to run a specific repair or re-install of the backup manager using the CLI?
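For reference, the generic repair commands available from the CLI look like the sketch below; whether any of them touches the backup manager components specifically is an open question:

Code:
# Report problems without fixing anything (dry run):
plesk repair all -n
# Attempt to fix everything that was reported:
plesk repair all
# Verify and repair the Plesk installation packages themselves:
plesk repair installation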
 
So I noticed that there is now Update 1 for 18.0.30. I first tested FTPS and it failed again. I then installed the panel update, ran the exact same backup again, and it worked. It looks like either the update repaired whatever had gone wrong with the previous 18.0.30 update on my particular server, OR they quietly fixed a bug without acknowledging there was a problem. I hope it is the former, as the latter would be very disappointing.
 
@KirkM Since when have your backups been failing? I am backing up to HiDrive as well (through Strato rather than IONOS, but I guess they use the same infrastructure) and my backups stopped working on September 24, although I was still running 18.0.29 until today. Interestingly, there were no details associated with the failed backups.

In /var/log/plesk/PMM/<backupdate>/ the file backup.log is empty.

This is the output of pmmcli.log (personal data removed). What I found interesting is that the <server_address> string had a double slash in that file:
ftps://<username>@ftp.hidrive.strato.com//users/<custom_pathname>

Code:
[2020-09-26 03:35:43.832|9010] DEBUG: LOG: custom log /var/log/plesk/PMM/backup-2020-09-26-03-35-43-783/backup.log
[2020-09-26 03:35:43.848|9010] DEBUG: LOG: logs dir requested: /var/log/plesk/PMM/backup-2020-09-26-03-35-43-783
[2020-09-26 03:35:43.856|9010] INFO: Executing asynchronously <subprocess[9012] 'nice --adjustment 15 ionice -c 2 -n 7 /usr/bin/sw-engine -c /opt/psa/admin/conf/php.ini /opt/psa/admin/sbin/backup_agent --dump -server -owner-guid <#id> -owner-type server -split 10485760000 -description-file /opt/psa/PMM/sessions/2020-09-26-033543.833/dump_description -no-gzip -exclude-pattern-file /opt/psa/PMM/sessions/2020-09-26-033543.833/exclude -session-path /opt/psa/PMM/sessions/2020-09-26-033543.833 -output-file <server_address> -ftp-passive-mode -ftp -exclude-logs'>
[2020-09-26 03:35:43.905|9010] DEBUG: Acquired session mutex: MainThread
[2020-09-26 03:35:43.905|9010] DEBUG: detecting running pmmcli daemon...
[2020-09-26 03:35:43.906|9010] DEBUG: starting pmmcli daemon...
[2020-09-26 03:35:43.926|9010] DEBUG: Executing asynchronously [9013] process. CmdLine is '/opt/psa/admin/sbin/pmmcli_daemon'
[2020-09-26 03:35:43.926|9010] DEBUG: Create type=Backup
[2020-09-26 03:35:44.619|9010] DEBUG: Released session mutex: MainThread
[2020-09-26 03:35:44.619|9010] DEBUG: Acquired session mutex: MainThread
[2020-09-26 03:35:44.619|9010] DEBUG: Update task id=1365, type=Backup
[2020-09-26 03:35:44.700|9010] DEBUG: Released session mutex: MainThread
[2020-09-26 03:35:44.701|9010] DEBUG: <__main__.MakeDumpAction object at 0x7f6eaa330090>: response
[2020-09-26 03:35:44.704|9010] INFO: Outgoing packet:
<?xml version="1.0" ?><response>
    <errcode>0</errcode>
    <data>
        <task-id>1365</task-id>
    </data>
</response>

I am just restarting my first backup after updating to 18.0.30. I will keep you posted.
 
That's interesting. I am not sure of the exact date I upgraded and the failures started, but it had to be a week or so before my first post on Sept. 17. My backup log also showed no errors; I can't find errors in any logs.

Does it work using FTP instead of FTPS?
Do you get this error on FTPS backup completion?
Code:
Error:
Export error: Size of volume backup_yourdomain.com_2009271632.tar 164061183 does not match expected one 165026878. The remote backup may not be restored.
Error:
Unable to validate the remote backup. It may not be restored. Error: Failed to exec pmm-ras: Exit code: 119: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version

Also, I noticed that when I back up a test site that has just a bare home page with a size of less than 1 MB, it works with FTPS. A test site using WordPress with a size of about 150 MB fails every time. Both work with FTPS disabled. It looks like something is getting messed up with the metadata. Here is a screenshot of the files in the HiDrive after two back-to-back attempts: the top one is the FTPS attempt that generated the error notice, and the one below is the successful plain FTP run. Notice the file size difference. It looks like the FTPS upload is getting truncated, since almost a MB is missing from the FTPS backup. I really have no idea.
[Attached screenshot: Screen Shot 2020-09-27 at 2.12.24 PM.png]
 
The problem persists on Plesk Obsidian 18.0.30 Update 2 on Ubuntu 18.04.5 LTS.

Error: Export error: Size of volume backup_2010070007.tar 10736090624 does not match expected one 10736968649. The remote backup may not be restored.
Error: Unable to validate the remote backup. It may not be restored.
Error: Failed to exec pmm-ras: Exit code: 119: Import error: Unable to find archive metadata. The archive is not valid Plesk backup or has been created in an unsupported Plesk version

Code:
-rw-rw----+ 1 ftp_backup users 10G Sep 30 00:21 backup_2009300007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  1 00:21 backup_2010010007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  2 00:21 backup_2010020007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  3 00:21 backup_2010030007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  4 00:21 backup_2010040007.tar
-rw-rw----+ 1 ftp_backup users 11G Okt  5 02:42 backup_2010050001.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  5 00:24 backup_2010050007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  6 00:21 backup_2010060007.tar
-rw-rw----+ 1 ftp_backup users 10G Okt  7 00:21 backup_2010070007.tar
- The backup files are there and they seem to be OK (tar -xvf extracts them).
- There is enough space on the disk and no quota is set.
- [pmm] disable reuse connections is set to 1.
- The FTP server is ProFTPD on Debian Buster; files do get transferred, so it can't be a TLSv1.3 issue.
- The size on disk matches what Plesk reports, so maybe the expected size is wrong? Or the metadata is actually missing. How would I check that? (One way I might try is sketched below.)
- The difference is about 850 kB, which could be some missing metadata.
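One thing I could try, after copying the suspect remote volume onto a machine that also has a matching dump in the local repository (or after making a one-off local backup of the same scope to compare against):

Code:
# Compare the member list of the suspect remote volume against a
# known-good local dump (default local repository: /var/lib/psa/dumps).
tar -tf /path/to/remote_copy/backup_2010070007.tar | sort > /tmp/remote.lst
tar -tf /var/lib/psa/dumps/backup_2010070007.tar   | sort > /tmp/local.lst
diff /tmp/local.lst /tmp/remote.lst
# A truncated or altered upload usually shows up as members missing
# near the end of the listing.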

And while we're at it: why aren't my backups compressed when "disable compression" is not checked in the settings? Shouldn't they be .tar.gz?

regards!
 
And while we're at it: why aren't my backups compressed when "disable compression" is not checked in the settings? Shouldn't they be .tar.gz?
Because the .tar already contains several .tar.gz archives (domainmail, user_data, ...), which are compressed. There is no need to compress again; that could even make things worse.
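You can see this by listing the contents of the outer archive (the exact member names vary by backup scope and Plesk version):

Code:
# Show the compressed members nested inside the outer backup tar.
tar -tf backup_2010070007.tar | grep -E '\.(tgz|tar\.gz)$'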
 
If you use Strato HiDrive with TLS 1.3, it will not work; use TLS 1.2. At least that is the problem I have, and only this solution works for me.

I already contacted Strato months ago, but they say everything is alright.
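A quick way to reproduce this outside of Plesk is to run the same explicit-FTPS transfer with curl and cap the TLS version (assuming curl 7.54 or newer for --tls-max; the username is a placeholder):

Code:
# Explicit FTPS with TLS 1.3 allowed (fails against HiDrive in my case):
curl -v --ssl-reqd --user <username> "ftp://ftp.hidrive.strato.com/" -o /dev/null
# The same transfer with the handshake capped at TLS 1.2:
curl -v --ssl-reqd --tls-max 1.2 --user <username> "ftp://ftp.hidrive.strato.com/" -o /dev/null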
 
Why are you not using SFTP instead of FTPS?
I think it's more secure and more powerful, and if you transfer many files at once (which is not the case here) it's even faster.
 
As I said before: I don't believe it's an FTPS/TLS 1.3 error. I had those before, and they made a connection impossible; here FTPS connects and transfers data, so I think the issue is somewhere else.

And it's a ProFTPD server at my home, not a Strato HiDrive.

FTPS worked and I just want it to work again.
 