Support

Akeeba Backup for Joomla!

#38974 post-processing > S3 protocol for PlanetHoster (their new "N0C Storage")

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
n/a
PHP version
n/a
Akeeba Backup version
n/a

Latest post by nicholas on Monday, 29 May 2023 05:29 CDT

woluweb

Hi Nicholas & team!

The host that I use for almost all my projects (namely PlanetHoster) has just introduced their new "N0C Storage", which works with S3 (not Amazon S3 itself, but the S3 protocol).

More information about that here: https://kb.n0c.com/en/knowledge-base/n0c-filemanager/#plugins-for-s3-protocole

So I would like to use the post-processing in Akeeba Backup to send my backups to that N0C Storage.

Is this possible with the current options in the dropdown for post-processing? Or do you need to add a new option in that dropdown?

Txs very much :)

nicholas
Akeeba Staff
Manager

There is no documentation or information that I can see beyond that very sketchy page, which says very little.

If you can connect to it with any other S3 client (e.g. CyberDuck), you can use it for backups. Check out the documentation for Upload to Amazon S3, specifically the custom endpoint section. Most likely they are using API v2 (not v4), since they do not mention a region, which is always required for the S3v4 protocol.
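
As a rough illustration (this is not Akeeba Backup's own code; the endpoint, bucket, and credential values below are made-up placeholders), an independent connectivity check against an S3-compatible endpoint could look like the following sketch using Python's boto3:

```python
# Sketch of an independent connectivity check against an S3-compatible
# endpoint using boto3. All endpoint, bucket, and credential values are
# placeholders; substitute whatever your host gives you.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",  # placeholder custom endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    config=Config(signature_version="s3"),       # "s3" is the legacy v2 signing scheme
)

# If this lists objects without an error, the endpoint, credentials, and
# signature version should also be usable for uploading backups.
response = s3.list_objects_v2(Bucket="my-backup-bucket", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

If the legacy signing scheme is rejected, switching signature_version to "s3v4" and adding a region_name corresponds roughly to picking the "v4" option instead of "v2" in the backup profile configuration.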

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Txs Nicholas for your answer.

Right now I am packing for JoomlaDays Netherlands #jd23nl, but after the weekend I'll investigate further (and post the results of my investigations/tests here).

woluweb

Just a quick bit of feedback already: I made a test, and it does indeed work when I specify the right endpoint.

They are busy writing a procedure (the product has just been announced; I don't think it is in production yet, or it may be in beta). I'll keep you posted next week, after JoomlaDays Netherlands.

nicholas
Akeeba Staff
Manager

(the product has just been announced; I don't think it is in production yet, or it may be in beta).

Ah, this explains why the documentation is in its current state.

I'll keep you posted next week, after JoomlaDays Netherlands.

Awesome! Thank you :)

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Hi Nicholas,

I am now back from #jd23nl and could test the upload of backups again.

I have learned that only websites hosted on N0C servers can upload to N0C Storage.
So I tested again with a website hosted on an N0C server.

But apparently the post-processing does not work yet.
I am contacting you because I see that the error message mentions "\Connector\S3v4\", while in my Profile Configuration I selected "v2 (legacy mode)" and not "v4 (preferred by Amazon)".

Would there be a bug there?

Txs!

Error received from the post-processing engine:

Upload cannot proceed. Amazon S3 returned an error.

Akeeba\Engine\Postproc\Connector\S3v4\Connector::uploadMultipart(): [403] Unexpected HTTP status 403 Debug info: SimpleXMLElement Object ( [Code] => SignatureDoesNotMatch [RequestId] => __________ [HostId] => ________________ )

Post-processing interrupted -- no more files will be transferred

 

nicholas
Akeeba Staff
Manager

Is the endpoint correct? The RequestId and HostId are sent by Amazon proper, not by any S3-compatible implementation I have seen before.
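
For what it's worth, SignatureDoesNotMatch from an S3-compatible service usually comes down to the wrong signing scheme, a mistyped secret key, or a request that ends up at a different endpoint than intended. A quick way to narrow it down outside Akeeba Backup is to try both signing schemes against the same endpoint; the sketch below uses Python's boto3 with placeholder values only:

```python
# Hypothetical debugging sketch: try both S3 signing schemes against the same
# endpoint and see which one the service accepts. All values are placeholders.
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

ENDPOINT = "https://storage.example.com"   # placeholder custom endpoint
KEY, SECRET = "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"

for sig in ("s3", "s3v4"):                 # legacy v2 signing, then v4
    client = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id=KEY,
        aws_secret_access_key=SECRET,
        region_name="us-east-1",           # v4 signing always needs a region
        config=Config(signature_version=sig),
    )
    try:
        client.head_bucket(Bucket="my-backup-bucket")
        print(f"{sig}: accepted")
    except ClientError as exc:
        print(f"{sig}: {exc.response['Error'].get('Code', 'unknown error')}")
```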

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Hi,

Here is the detailed explanation of how to configure Akeeba Backup to store backups to N0C Storage:

https://planethoster.live/threads/n0c-storage-lancement-questions-reponses.6466/#post-24492

And since the screenshots only appear as thumbnails when you are not logged in on planethoster.live, here is the main screenshot with the right configuration:

[Screenshot: Akeeba Backup post-processing configuration for N0C Storage]

 

nicholas
Akeeba Staff
Manager

Try changing the bucket access.

Remove the trailing slash from the directory. Remember that the directory is relative to your bucket's root.
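
The reason is that in S3 terms the "directory" is just a key prefix under the bucket root; a leading or trailing slash becomes part of the object key itself, and some implementations choke on the resulting empty path segment. A tiny illustration (not Akeeba Backup's actual code, just the concept):

```python
# Illustration of how a "directory" setting typically maps to an S3 object key.
# Not Akeeba Backup's actual code; it only demonstrates why stray slashes matter.
def build_object_key(directory: str, filename: str) -> str:
    prefix = directory.strip("/")  # drop leading/trailing slashes from the prefix
    return f"{prefix}/{filename}" if prefix else filename

print(build_object_key("backups/site1/", "site-example.jpa"))  # backups/site1/site-example.jpa
print(build_object_key("backups/site1",  "site-example.jpa"))  # same key, no stray slash
```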

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Txs for the suggestions Nicholas.

Here are the results of my tests:

- The trailing slash is indeed not necessary (I removed it and everything works fine).

- But for "bucket access":

  - "Path Access (legacy)" does work;

  - but "Virtual Hosting (recommended)" does not work and throws the following error:

Akeeba\Engine\Postproc\Connector\S3v4\Connector::putObject(): [7] Failed to connect to bhhfvesw.ht2-storage.n0c.com port 5443 after 45324 ms: Couldn't connect to server Debug info:

 

I am asking them whether they intend to make "Virtual Hosting" work.

nicholas
Akeeba Staff
Manager

Most third-party services with an S3-compatible API do not support the Virtual Hosting method. This method creates a subdomain using your bucket name. That has its challenges regarding TLS certificates, which makes it perfectly reasonable that it's not widely supported.
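
To make the difference concrete (all hostnames below are made up): with path-style access the bucket name goes into the URL path, while virtual hosting moves it into the hostname, which then needs DNS and a matching (usually wildcard) TLS certificate per bucket. With a generic client such as boto3, the two styles would be selected roughly like this:

```python
# Sketch of the two bucket addressing styles, with made-up names:
#   Path-style:      https://storage.example.com/my-backup-bucket/backups/site.jpa
#   Virtual-hosted:  https://my-backup-bucket.storage.example.com/backups/site.jpa
# The virtual-hosted form requires the provider to serve a certificate valid for
# *.storage.example.com, which many S3-compatible services do not offer.
import boto3
from botocore.client import Config

common = dict(
    endpoint_url="https://storage.example.com",  # placeholder custom endpoint
    region_name="us-east-1",
)

s3_path = boto3.client(
    "s3", **common, config=Config(s3={"addressing_style": "path"})      # bucket in the path
)
s3_virtual = boto3.client(
    "s3", **common, config=Config(s3={"addressing_style": "virtual"})   # bucket in the hostname
)
```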

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Txs for the feedback :)

So I won't be frustrated if only "legacy" is supported :D

But anyway, I did ask the question because maybe they intend to set it up, and then it is good to know. I prefer to wait a couple of weeks and have the "final best setup" *before* implementing this on dozens of websites...

nicholas
Akeeba Staff
Manager

So I won't be frustrated if only "legacy" is supported :D

Nope! It's perfectly normal for third-party services. It's only "legacy" for Amazon S3 proper.

But anyway, I did ask the question because maybe they intend to set it up, and then it is good to know. I prefer to wait a couple of weeks and have the "final best setup" *before* implementing this on dozens of websites...

That's fair!

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Just to confirm: the Virtual Hosting method is not planned (not at the moment, anyway).

So, with the screenshot above (knowing that we can remove the trailing slash from the directory), we have the final configuration.

BTW, would you consider adding "PlanetHoster N0C Storage" to your dropdown of pre-configured cloud hosting for post-processing?

nicholas
Akeeba Staff
Manager

I am not going to remove either access method. I know for a fact that a lot of third-party S3-compatible services will only work with path-style access, for the reasons I explained and which your host all but confirmed. So no worries there.

BTW, would you consider adding "PlanetHoster N0C Storage" to your dropdown of pre-configured cloud hosting for post-processing?

No. I do not add third-party S3-compatible services as their own post-processing engines.

It's not just adding a dropdown value. It means creating a separate post-processing class, configuration file, and language files for each service. This is a lot of busywork, and it becomes a proper nightmare when I need to refactor anything, as I no longer have one S3 post-processing class to refactor; I have multiple. The more of these there are, the more likely it is that I will make a mistake unless I can explicitly test with each one.

This means I would need to buy access to every supported service, regardless of how many people use it. Since the vast majority of S3-compatible services don't have a free tier and are only offered tied to some other expensive service, I would have to spend a lot of money just to maintain the code.

Since I would be directly supporting a service, I would now be responsible for any changes in that service, even though I don't get paid by the service provider, nor am I consulted before any changes take place. I would have to do a lot more work keeping up with many more services and go through the headache of communicating with my clients even when the communication from the service itself is as clear as mud. This adds a lot of work and responsibility, which costs money and can lose me clients, without bringing in any income.

This is even worse considering that if I add any single random service, I have to add every other random service nobody has heard of, even if it's used by one person and they ask me to. There are hundreds of them out there. It's trivial for any half-competent host to install and configure OpenStack and its S3-compatible API. Having to support all these hundreds of random installations (or each one of my clients' bespoke ownCloud S3-compatible installations) in Akeeba Backup becomes a VERY expensive proposition, to the point that I would need to triple or quadruple the price of my software. That, in itself, would lose me clients.

Finally, once I start adding specific S3-compatible services, people mistakenly assume that if a service is not listed, it is not supported. This loses me both potential and current clients.

To sum it up, I get a lot more work to do, I have to spend a lot more money, and I get to lose clients as a result. Or I can do no extra work, spend no money, and not lose clients. I think it's very clear which is the only reasonable course of action here. I am not lazy; I did my risk analysis rather than just saying "yes" to everything :)

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

woluweb

Txs for this (very) detailed answer Nicholas.

I totally understand your point (I was not even aware of all the consequences :D).

Have a nice day & week,

Marc

nicholas
Akeeba Staff
Manager

You're welcome! Have a great day :)

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus time zone (EET / EEST). Support is provided by the same developers who write the software, all of whom live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!