Support

Akeeba Backup for Joomla!

#42603 Not backing up to Backblaze

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
6.0.2
PHP version
8.3.28
Akeeba Backup version
10.2.1

Latest post by nicholas on Wednesday, 14 January 2026 11:25 CST

timpennington

I recently switched to Rochen from HostGator, and have been trying to get my files to back up nightly to Backblaze, but have been running into problems.

I set up a new bucket in Backblaze and set up a cron job on Rochen, but it doesn't seem to be backing up to Backblaze.

When I do a manual backup of my site to my server, it completes OK but gives me the message "Bucket 20251208Rochen not found under this Backblaze B2 account". I have checked and confirmed the settings (attached).

Last evening I set up another cron job and received the email saying it ran, but the files did not back up to Backblaze (attached).

I have also attached the log file.

I have tried searching for a solution, but can't figure it out.

 

Thank you in advance for any guidance.

 

Tim

nicholas
Akeeba Staff
Manager

Hello Tim,

For starters, please read the B2 bucket naming rules Backblaze publishes at https://www.backblaze.com/docs/cloud-storage-buckets

Moreover, there are a few things they don't mention, but which we have found out:

  • Do not use uppercase or mixed-case names such as FOOBAR, Foobar, or FooBar. 
  • Enter your bucket name exactly as you created it, in all lowercase.
  • The bucket name MUST NOT start with numbers.
  • It usually takes several minutes between creating a bucket and being able to use it. We recommend waiting for 1-2 hours.
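The naming rules above can be sketched as a quick validator. This reflects only the conservative rules given in this post, not the official B2 documentation; the 6-50 character length limit is an assumption you should check against Backblaze's own docs:

```python
import re

def is_safe_b2_bucket_name(name: str) -> bool:
    """Return True if the bucket name follows the conservative rules above."""
    if not 6 <= len(name) <= 50:   # length limits are an assumption; check B2 docs
        return False
    if name != name.lower():       # no uppercase or mixed case
        return False
    if name[0].isdigit():          # must not start with a number
        return False
    # only lowercase letters, digits, and hyphens, starting with a letter
    return re.fullmatch(r"[a-z][a-z0-9-]*", name) is not None

print(is_safe_b2_bucket_name("20251208Rochen"))  # False: starts with digits, mixed case
print(is_safe_b2_bucket_name("rochen20251208"))  # True: letters first, all lowercase
```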

Even though none of these should be a problem, we have observed that, depending on which B2 server responds to the request, requests might sometimes fail if these additional rules are not followed. We have also observed that sometimes your B2 bucket is created on a storage pod with a misbehaving API, which means that all upload requests from a specific location will fail, while another server across the world making the exact same request to the exact same bucket with the exact same authentication will work. There's a reason B2 is inexpensive; it's not as robust as S3, Swift, etc. It's somewhere between a consumer and an enterprise grade storage solution; that would make it "prosumer"-grade, I guess?

Try creating a new bucket named rochen20251208 (all lowercase, numbers after the letters) instead. Wait for about an hour, and you should be able to use it fine.

And yes, I know that you may have other buckets with numbers before the letters and mixed-case letters which work. As I said, it's a crapshoot. The big problem is that there's no single API server for B2 requests. The main API server simply tells us the domain name of the storage pod where your bucket lives, then we send API requests to that storage pod directly. You'd think that all storage pods have the same API functionality, and the documentation implies that. That has NOT been our observation at all. I stopped recommending B2 because of these observations. It will either work perfectly, or it will be absolute hell to make it work. Nothing in between. Amazon S3 is a bit more complicated to set up (more configuration in a less user-friendly environment) but it's rock solid… and more than twice as expensive. So, yeah, you have a bit of a trade-off.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

timpennington

Thank you, Nicholas, for such a thorough and helpful reply.

 

I tried your advice about lowercase names and it did not work; I will look at using Amazon instead. I appreciate your insight and suggestions on this.

 

Thank you again.

 

Tim

nicholas
Akeeba Staff
Manager

If you have problems with Amazon S3 on this host as well, please do tell me.

Nicholas K. Dionysopoulos

Lead Developer and Director


timpennington

Thank you again, Nicholas.

My consultant had set me up with AWS a month or so ago when I was migrating my site to Rochen, but he had set it up to back up with "No Images"; it was indeed backing up to AWS.

I tried to set it up with a new user in AWS so it would do a full backup; I created a new profile in Akeeba for AWS, but when I tried running a few backups I kept getting an error:

-----

Failed to process file /home/goxfwozo/public_html/finishingandcoating.com/administrator/components/com_akeebabackup/backup/site-finishingandcoating.com-20260111-154432est-Hkfz3TMho_r6rfPn.jpa

Error received from the post-processing engine:

Akeeba\S3\Connector::putObject(): [500] AccessDenied:User: arn:aws:iam::492017760423:user/finishingandcoatingbackup_1 is not authorized to perform: s3:PutObject on resource: "arn:aws:s3:::finishingandcoating.com/site-finishingandcoating.com-20260111-154432est-Hkfz3TMho_r6rfPn.jpa" because no identity-based policy allows the s3:PutObject action Debug info: SimpleXMLElement Object ( [Code] => AccessDenied [Message] => User: arn:aws:iam::492017760423:user/finishingandcoatingbackup_1 is not authorized to perform: s3:PutObject on resource: "arn:aws:s3:::finishingandcoating.com/site-finishingandcoating.com-20260111-154432est-Hkfz3TMho_r6rfPn.jpa" because no identity-based policy allows the s3:PutObject action [RequestId] => PMA42ZVV1K6V6E9Z [HostId] => jegrIPIvd57nnfNrOzc4v2lvygLjd7pDbStUzjlwYrZ2Za6kOQVHCb44OUBCvbNKhO3Dflgqh8AiQwJ64rc4D5VKt8J7+PXt )

Post-processing interrupted -- no more files will be transferred

----

I have attached the log file. I must admit that AWS is a little bewildering to me and I am not used to it at all, but I know the "No Images" profile was backing up great; unfortunately I deleted those profiles (DOH!) just to clean things up, so I can't simply update them to do a full backup. Hopefully it will be easy to get my settings right.

 

Thank you!

timpennington

Here is the log; my apologies, I didn't compress it earlier and it was too large.

nicholas
Akeeba Staff
Manager

Can you try using the same IAM user (Access and Secret Key) with CyberDuck to upload a file into the same "directory" (path) in the same S3 bucket from your computer?

If that doesn't work: Amazon is right, the user is not allowed access to the bucket, or at least the bucket path you are trying to use. Create and configure a new IAM user and use that new user's Access and Secret Key in Akeeba Backup.

If using CyberDuck worked: the problem lies with the host, which I consider extremely unlikely, as I have been using Rochen for nearly 20 years and can most definitely upload to S3 without any issues.
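For reference, an identity-based policy along these lines would grant a new IAM user what the error above says it lacks. The bucket name is taken from the error message in this thread, and this is a minimal sketch, not the definitive policy; Akeeba's own documentation lists the exact set of S3 permissions it needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AkeebaBackupUpload",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::finishingandcoating.com",
        "arn:aws:s3:::finishingandcoating.com/*"
      ]
    }
  ]
}
```

Note that s3:ListBucket and s3:GetBucketLocation apply to the bucket ARN itself, while the object actions (PutObject, GetObject, DeleteObject) apply to the `/*` resource; listing both resources in one statement covers both cases.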

Nicholas K. Dionysopoulos

Lead Developer and Director


timpennington

Thank you!

 

I created a new user and set permissions, then did a backup and it worked great. I set a cron job on Rochen, so now hopefully it will do it daily. Thank you.
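For reference, a nightly crontab entry for this looks something like the line below. The schedule, PHP binary path, site path, and profile ID are all assumptions for illustration; Akeeba's documentation on automating backups gives the exact command for your setup:

```
# Run a backup every night at 02:00 using backup profile 1 (paths are examples)
0 2 * * * /usr/local/bin/php /home/USER/public_html/cli/joomla.php akeeba:backup:take --profile=1
```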

 

Q: It backed up as one large file. I know you recommend breaking it up, and I saw the note "If you disable multipart uploads remember to set a split archive size of 2-30Mb or you risk backup failure due to timeouts!" I was looking to see where that was "disabled" and could not find that specific language in the Akeeba settings; can you tell me what specific setting that is?

 

Thank you!

nicholas
Akeeba Staff
Manager

Yes, of course. Under the Post-Processing Engine, after you select Upload to Amazon S3, you will see an option called "Disable multipart uploads". Leave it set to No; this is the default and recommended setting.

Nicholas K. Dionysopoulos

Lead Developer and Director


timpennington

Thank you!

nicholas
Akeeba Staff
Manager

You're welcome!

Nicholas K. Dionysopoulos

Lead Developer and Director


timpennington

Quick follow-up question, Nicholas: I have two profiles set up in my Akeeba Backup (Default and AWS), and "Disable multipart uploads" is set to "No" in both, but the last two backups to AWS have been one large file of around 1.5 GB. Is there another setting I need so they are uploaded in multiple parts?

 

Thank you

nicholas
Akeeba Staff
Manager

You have the correct setting, do not change it.

When "Disable multipart uploads" is set to No the big file is uploaded to Amazon S3 in 5 MiB chunks. This allows you to upload files up to ~24 GiB without timing out. Therefore, you can have a single part archive which uploads fine.

If you set this to Yes, each part file must be uploaded in one go. This limits you to much smaller part files, up to 20-50 MiB. That would require you to split your backup into ~150 files, which becomes really hard to manage.
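The arithmetic behind that trade-off can be sketched quickly; the 1.5 GiB archive size is taken from this thread, and the 5 MiB chunk size from the explanation above:

```python
import math

CHUNK_MIB = 5  # multipart chunk size used when "Disable multipart uploads" is No

def multipart_chunks(file_size_mib: float, chunk_mib: int = CHUNK_MIB) -> int:
    """Number of chunks needed to upload a file of the given size (in MiB)."""
    return math.ceil(file_size_mib / chunk_mib)

# A single 1.5 GiB (1536 MiB) archive goes up as ~308 small 5 MiB requests,
# so no individual HTTP request is large enough to hit a timeout.
print(multipart_chunks(1536))  # 308
```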

Nicholas K. Dionysopoulos

Lead Developer and Director


timpennington

Thank you for that explanation. I feel better now!

 

Much appreciated

 

tp

nicholas
Akeeba Staff
Manager

You're welcome!

Nicholas K. Dionysopoulos

Lead Developer and Director


Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!