Resolved: Error on remote FTP when doing backups

aromero

New Pleskian
Server operating system version
Ubuntu 20.04.2 LTS
Plesk version and microupdate number
Version 18.0.57 #5
I got this error that says:

Unable to create the remote backup: Transport error: Unable to resume an interrupted upload: Requested data is out of the cached data (current cache size: 10000 MB, cached data size: 10000 MB, required data size: 80880 MB): The connectivity problem: (55) Failed sending data to the peer: Last FTP request: APPE backup_2401081453.tar: Last FTP response: 125 Data connection already open; Transfer starting

This happens whenever the server tries to execute a backup task. The required data size is way too high; how can I fix this issue?
 
Maybe PPPM-14267.
@aromero Could you please try adding the variable "streamCacheSize" to the panel.ini file with the value "50"?

[pmm]
streamCacheSize = 50

Afterwards, please try the backup again. Does it resolve the issue?
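For reference, on a typical Linux installation panel.ini lives at /usr/local/psa/admin/conf/panel.ini (if it does not exist yet, it can be created there, for example by copying panel.ini.sample from the same directory). That is the default path, so double-check it on your server before editing.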
 
I just tried that, and I get this:

Unable to create the remote backup: Transport error: Unable to resume an interrupted upload: Requested data is out of the cached data (current cache size: 50 MB, cached data size: 50 MB, required data size: 60133 MB): The connectivity problem: (55) Failed sending data to the peer: Last FTP request: APPE backup_2401091139.tar: Last FTP response: 150 Opening BINARY mode data connection

I guess there's got to be a way to reduce the required data size, but I can't figure it out.
 
That would mean increasing the size from 50 to 90000 (according to your previous post). That would be 90 GB, though, so I am not sure whether it makes sense. We have an internal issue report on this, and it suggests increasing the streamCacheSize, but I am not sure what other issues a very high value could cause. I suggest contacting Plesk support staff with a ticket on the matter, mentioning PPPM-14267 and this forum thread (Issue - Error on remote FTP when doing backups) so that they can check it directly on your server and maybe fix it there.
 
Shouldn't the server respond to APPEnd with the size that already exists on the server? This sounds like the transfer is restarting from 0, which would require the whole stream from the start to be cached, meaning that if the interruption occurs late in the transfer, the start won't be in the rolling stream buffer anymore ...
 
As far as I know it's a product issue that can only be fixed by setting the streamCacheSize.
 
The reason is that the uploaded file does not exist in the filesystem. It is assembled on the fly from the .tar.gz/zstd archives in the domain subdirectory under /var/lib/psa/dumps/domains/. Those already compressed archives are assembled into a streamed tar archive, buffered by the stream cache.
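Just to illustrate the "assembled on the fly" part, here is a toy sketch (not Plesk's actual code, just the general idea of streaming existing archives into one tar without ever writing the combined backup_*.tar to disk):
Code:
import pathlib
import sys
import tarfile

# Illustrative only: stream already-compressed per-domain archives into one
# tar on the fly, so the combined file never has to exist on disk.
def stream_domain_dump(domain_dir, out_stream):
    with tarfile.open(fileobj=out_stream, mode="w|") as tar:  # "w|" = write to a non-seekable stream
        for archive in sorted(pathlib.Path(domain_dir).glob("*")):
            tar.add(str(archive), arcname=archive.name)

# e.g. pipe the assembled tar to stdout, which could in turn feed an FTP upload:
# stream_domain_dump("/var/lib/psa/dumps/domains/example.com", sys.stdout.buffer)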
If the transfer aborts, the uploader retries with APPE. The FTP server should respond with the size of the already uploaded fragment, so the client knows where to resume (per RFC 959):
Code:
110 Restart marker reply.
             In this case, the text is exact and not left to the
             particular implementation; it must read:
                  MARK yyyy = mmmm
             Where yyyy is User-process data stream marker, and mmmm
             server's equivalent marker (note the spaces between markers
             and "=").
If it doesn't, the transfer has to restart from nothing. But the uploader can only go back as far as what's in the cache. So if more was already uploaded than fits in the cache, the uploader cannot continue, and the whole tar assembly would have to be restarted. The same applies if the marker goes back further than the stream cache, but normally at most a few megabytes are lost in transit before the connection loss is detected, so that shouldn't ever happen with a 10000 MB stream cache size.
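To make that concrete, here is a toy model of such a rolling stream cache (class and variable names are mine, not Plesk's; the exception text just mirrors the format of the error message):
Code:
from collections import deque

class StreamCache:
    """Toy model of a rolling upload cache, not Plesk's actual implementation."""

    def __init__(self, cache_size):
        self.cache_size = cache_size   # how many of the most recent bytes we keep
        self.buffer = deque()          # rolling window over the generated stream
        self.total_sent = 0            # bytes handed to the FTP transport so far

    def send(self, chunk):
        self.buffer.extend(chunk)
        self.total_sent += len(chunk)
        while len(self.buffer) > self.cache_size:   # drop the oldest bytes
            self.buffer.popleft()

    def resume_from(self, remote_offset):
        """Resume at remote_offset (what the server already has); only works
        if the missing bytes are still inside the rolling window."""
        required = self.total_sent - remote_offset
        if required > len(self.buffer):
            raise RuntimeError(
                "Requested data is out of the cached data "
                f"(current cache size: {self.cache_size}, "
                f"cached data size: {len(self.buffer)}, "
                f"required data size: {required})"
            )
        data = bytes(self.buffer)
        return data[len(data) - required:]

cache = StreamCache(cache_size=50)
cache.send(b"x" * 200)    # 200 bytes streamed, only the last 50 are still cached
cache.resume_from(180)    # fine: only the last 20 bytes need to be replayed
cache.resume_from(0)      # RuntimeError: the first 150 bytes are gone for good
If the server treats every APPE as starting from byte 0 after ~60 GB have already been sent, the required size becomes the full 60 GB, and no realistic cache can satisfy that, which is exactly the "required data size: 60133 MB" in the error above.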

Now why does this happen?
  • There is a problem with connectivity which interrupts the upload. Is one of the machines on a DSL line that forcibly changes IPs every 24h?
  • The FTP server is configured as write-only. This is good security-wise, as the client cannot mess with existing uploaded backups, but it also means the client cannot read any (meta)data of, or modify, existing backup uploads, including appending to them, and always needs to restart from the beginning.
    Your FTP server might have a setting to allow appending to existing files, maybe restricted to recently aborted uploads. Otherwise you'd need to give the FTP user read and modify rights in addition to write, which, however, would allow an attacker who gained your credentials to read and delete all existing backups. A quick way to check what your server currently allows is sketched below.
Solving either problem will make the error go away.
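If you want to check which case applies, one quick way is to try an APPE against the same storage yourself, for example with Python's ftplib (host, user, password and the test file name below are placeholders; use FTP_TLS instead of FTP if your storage requires FTPS):
Code:
import io
from ftplib import FTP, error_perm

# Placeholders: use the same storage settings as the Plesk backup task
HOST, USER, PASSWORD = "ftp.example.com", "backupuser", "secret"

ftp = FTP(HOST)
ftp.login(USER, PASSWORD)

# Upload a small file, then try to append to it (APPE is what the backup
# manager uses when it resumes an interrupted upload).
ftp.storbinary("STOR appe_test.bin", io.BytesIO(b"first part "))
try:
    ftp.storbinary("APPE appe_test.bin", io.BytesIO(b"second part"))
    print("APPE accepted: the server allows appending to existing files")
except error_perm as exc:
    print("APPE rejected, the account is effectively write-only:", exc)

# With read rights, SIZE should now report the combined length.
try:
    print("reported size:", ftp.size("appe_test.bin"))
except error_perm:
    print("SIZE not permitted: the client cannot see how much was already uploaded")

try:
    ftp.delete("appe_test.bin")   # clean up, if deleting is allowed
except error_perm:
    pass
ftp.quit()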
 