
Issue: Path error in Amazon S3 extension to Backblaze B2

Denis Gomes Franco

Regular Pleskian
Hello. I need to change my backup configuration and have been trying to set up the Amazon S3 extension to store backups on Backblaze B2, now that they offer an S3-compatible API. The problem is the path field: it seems I can only use a root directory, e.g. 'backups', and not a subdirectory such as 'backups/secondserver'. The extension always throws a 'Path does not exist' error even though 'backups/secondserver' was actually created inside the bucket and can be browsed via Backblaze's web interface.

Any ideas? And no, I won't be using DigitalOcean Spaces or Amazon's S3 service as they are way, way more expensive.
 
Funny that you mention this - we found this out the hard way a few days back while testing DR and couldn't get the "Path" field to work either. I was going to submit a ticket but didn't get a chance to write up a full report.

Or maybe it's a missing config option. I'll look into it later today.
 
I don't think there is going to be a resolution for this.

Plesk is making a headObject call to the Bucket + Path, I presume to check for existence - on B2, this throws a 404:

[2021-07-19 15:25:50.043] 20067:60f5997d7c701 DEBUG [extension/s3-backup] headObject error: Error executing "HeadObject" on "https://s3.us-west-002.backblazeb2.com/publictestbucket070/Demo/"; AWS HTTP error: Client error: `HEAD https://s3.us-west-002.backblazeb2.com/publictestbucket070/Demo/` resulted in a `404 Not Found` response NotFound (client): 404 Not Found (Request-ID: 24b29227bb272d74) -

It throws a 404, so everything halts. I presume this is related to Backblaze's implementation of S3 - folders are weird because they technically don't exist as objects. From my understanding, the HeadObject call is really only meant for objects (files), and while some S3 implementations will return a success code for a folder key, that behavior is unpredictable (there's a quick sketch of both kinds of check after the CLI examples below).

Via aws-cli on B2:
Code:
aws s3api head-object  --bucket PublicTestBucket070 --endpoint-url=https://s3.us-west-002.backblazeb2.com/ --key Demo/ --debug
...
An error occurred (404) when calling the HeadObject operation: Not Found

Via aws-cli on our primary DR site (S3-compatible, via OpenStack):
Code:
aws s3api head-object  --bucket PublicTestBucket070 --endpoint-url=https://s3.us-central.s3domain.com/ --key Demo
...
DEBUG - https://s3.us-central.s3domain.com:443 "HEAD /PublicTestBucket070/Demo HTTP/1.1" 404 0
An error occurred (404) when calling the HeadObject operation: Not Found

aws s3api head-object  --bucket PublicTestBucket070 --endpoint-url=https://s3.us-central.s3domain.com/ --key Demo/
...
DEBUG - https://s3.us-central.s3domain.com:443 "HEAD /PublicTestBucket070/Demo/ HTTP/1.1" 200 0
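
For what it's worth, here is a rough sketch in Python with boto3 (not how the Plesk extension is actually implemented) of the difference between the two kinds of check: HeadObject on the folder key only succeeds if an object with that exact key exists, while a ListObjectsV2 call with the path as a prefix succeeds as long as anything is stored under it. The endpoint, bucket and prefix below just mirror the test values above.

Code:
import boto3
from botocore.exceptions import ClientError

# Mirrors the CLI tests above; adjust endpoint/bucket/prefix for your setup
s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
bucket, prefix = "PublicTestBucket070", "Demo/"

# Check 1: HeadObject on the "folder" key. On B2 this is the 404 seen above,
# because no object with the literal key "Demo/" exists.
try:
    s3.head_object(Bucket=bucket, Key=prefix)
    print("HeadObject: folder key exists as an object")
except ClientError as e:
    print("HeadObject failed:", e.response["Error"]["Code"])

# Check 2: list anything under the prefix. This works regardless of whether
# the provider materializes folders as zero-byte objects.
resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)
print("Something exists under the prefix:", resp.get("KeyCount", 0) > 0)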
 
Dang it! And we need folders, since storing all Plesk backups in the same location (be it the root or any single folder) would be a mess. By the way, I'm trying to set up individual backups for each subscription, as the last time I tried backing up the whole server the process just crashed.
 
You can just use a separate bucket for each server/site.

You could consider Wasabi - frankly, we've found B2 to be unreliable and inconsistent, though at $0.005/GB I don't expect anything else.
 
Don't wanna deal with multiple buckets, but thanks for the suggestion. I'll check out Wasabi again; the last time I tried it (a long time ago), you had to pay for 30 days of storage for each file. For example, if you uploaded a file today and deleted it tomorrow, you would still be charged as if the file had been stored for 30 days, which seemed like a bad fit since I would be rotating files frequently.
 
I couldn't find pricing for what they refer to as 'Timed Deleted Storage', so I'll assume it's billed the same as active storage. They even have a graph showing how Wasabi could be cheaper than AWS, but they still don't explain the price for this deleted storage anywhere.

And the minimum storage duration for that timed deleted storage is 90 days, so I'll be paying for each file for 90 days whether I keep it that long or not. I like to keep full backups for at least 7 days, and with backup file rotation I believe I would accrue a lot of charges.
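
Just to put rough numbers on it (my own back-of-the-envelope math with placeholder sizes and rates, not vendor quotes): with daily full backups kept for 7 days, a 90-day minimum storage duration means every backup is billed for 90 days, so the billed footprint ends up roughly 90/7 ≈ 13x the storage I actually keep.

Code:
# Back-of-the-envelope estimate; all figures below are assumptions
BACKUP_GB = 50              # hypothetical size of one daily full backup
RETENTION_DAYS = 7          # backups rotated after 7 days, as above
MIN_BILLED_DAYS = 90        # Wasabi's minimum storage duration
PRICE_PER_GB_MONTH = 0.006  # placeholder $/GB/month rate

active_gb = BACKUP_GB * RETENTION_DAYS   # what I actually keep at any moment
billed_gb = BACKUP_GB * MIN_BILLED_DAYS  # what the 90-day minimum bills for

print(f"Keeping {active_gb} GB, billed as if storing {billed_gb} GB")
print(f"~${billed_gb * PRICE_PER_GB_MONTH:.2f}/month, "
      f"about {MIN_BILLED_DAYS / RETENTION_DAYS:.0f}x the 7-day footprint")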
 
I opened a ticket with Backblaze and referenced your post, @john0001 - maybe they can shed some light on this. Funny thing is, I use Cyberduck with the S3 protocol (instead of the Backblaze B2 protocol) and it works just fine with folders.
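
Pure speculation on my part, but that might be because some S3 clients create a zero-byte placeholder object when you make a "folder", so a later HeadObject on the folder key returns 200 instead of 404. If so, manually creating such a placeholder might let the extension's path check pass - untested, and the bucket/key below are just examples.

Code:
import boto3

# Hypothetical bucket and path; adjust to your own setup
s3 = boto3.client("s3", endpoint_url="https://s3.us-west-002.backblazeb2.com")
bucket, folder_key = "my-backup-bucket", "backups/secondserver/"

# Create a zero-byte object whose key is the folder path ending in "/"
s3.put_object(Bucket=bucket, Key=folder_key, Body=b"")

# If the placeholder is accepted, HeadObject on the folder key should now
# return 200 instead of the 404 the extension chokes on
s3.head_object(Bucket=bucket, Key=folder_key)
print("Placeholder created; HeadObject on the folder key succeeded")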
 
Alright, I set up object storage with Linode - let's see if it works. It's 4x as expensive, but worth a try.

I was able to connect it via the S3 extension and it did not complain about directories, so to me it seems this is a problem with Backblaze's S3 implementation and not the extension.
 