The short answer is no, you cannot do that.
The long answer is that the difference between mysqldump and the way the backup engine takes a backup comes down to many, many different things.
mysqldump is a compiled, native C programme which keeps a continuous, open connection to the database, reading data while the SQL dump is being written out. PHP is single threaded: it can either receive data from the database (and wait while this takes place) or construct the SQL code. It cannot do both at the same time.
mysqldump can only dump an entire table as-is, without any kind of post-processing, unlike the backup engine. This makes it impossible to take a backup without including more than you need, which can cause problems during restoration. That was one of the original reasons I had to write my own backup engine instead of using mysqldump; the other was the differences, at the time, between MySQL 3.23 and 4.0, which required a lot of pre- and post-processing of the database dump.
Moreover, mysqldump needs the table to be locked throughout the backup. The backup engine does not. This allows you to take a backup without effectively taking the site offline. However, to achieve that, it needs to query the table in chunks. Smaller chunks mean more queries, which means longer backup times. Also remember that the more records there are in a table, the slower querying becomes the further you get from the start of the table (there is seek time before MySQL can start returning data).
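To give you an idea of what querying in chunks looks like in practice, here is a bare-bones sketch. It is not Akeeba Backup's actual code; the connection details, table name and batch size are made up for the sake of the example.

<?php
// Minimal sketch of chunked table dumping. NOT Akeeba Backup's actual code;
// the connection details, table name and batch size are made up.
$db        = new PDO('mysql:host=localhost;dbname=example', 'user', 'password');
$batchSize = 1000;  // "Number of rows per batch"
$offset    = 0;

do {
    // Each chunk is a separate query. The further $offset gets from the start
    // of the table, the longer MySQL needs before it starts returning rows.
    $query = sprintf('SELECT * FROM `jos_content` LIMIT %d, %d', $offset, $batchSize);
    $rows  = $db->query($query)->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        // ... convert $row into an INSERT statement and append it to the dump ...
    }

    $offset += $batchSize;
} while (count($rows) === $batchSize);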
The backup engine also knows to break the backup step before PHP or the web server would time out the request, splitting the long backup into multiple steps. If you were to run mysqldump, which is all or nothing, it would fail on any non-trivial site – like yours. You'd never have enough time to complete the database backup before timing out or hitting a memory limit. Not to mention that exec(), which you would need to run a CLI command from PHP, is disabled on a lot of servers anyway. That was the third major reason I had to write my own backup engine.
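Here is, very roughly, what that step-breaking logic boils down to. Again, this is a simplified sketch with made-up numbers, not the actual engine code.

<?php
// Sketch of "stop the step before PHP or the web server kills the request".
// NOT the actual Akeeba Backup engine; the numbers and the fake work are made up.
$maxExecTime = 5.0;               // seconds we allow ourselves in this step
$startTime   = microtime(true);
$offset      = 0;                 // would normally be loaded from saved state
$totalRows   = 100000;            // pretend the table has this many rows
$batchSize   = 1000;

while ($offset < $totalRows) {
    // ... dump rows $offset to $offset + $batchSize here ...
    usleep(50000);                // stand-in for the real work
    $offset += $batchSize;

    if ((microtime(true) - $startTime) >= $maxExecTime) {
        // Out of safe time: persist $offset (the real engine saves its state
        // between steps) and end this request. The next step resumes from the
        // saved position instead of being killed halfway through the dump.
        break;
    }
}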
Further to that, the backup engine also does a lot more than just dumping a database to a big SQL file. It will take care of differences between database servers and their different versions, split the SQL file into chunks, keep track of which tables are in which files, add everything to a backup archive, upload the backup archive as needed etc. If you were to use mysqldump you would have none of these features. Your restoration would only be possible as a complete restoration, on the exact same server and location you backed up from, and only if your server limits allow you to perform the database restoration in one page load. The chances of that happening on any non-trivial site on anything but the most expensive dedicated server are exactly nil.
As you can see, there are many reasons why Akeeba Backup does not use mysqldump. I mean, it should've been obvious. Back in 2006 the 2-3 backup solutions that existed for Joomla did use mysqldump and didn't work on anything that wasn't a tiny, trivial site on a very beefy and overpowered VPS. I didn't write Akeeba Backup because there was no way to run a backup. I wrote it because all the ways to run a backup suffered from the same problems which made site transfers risky and all too often a losing gamble resulting in a lot of stress.
You can of course increase the database batch size ("Number of rows per batch") in Akeeba Backup's configuration to reduce the number of queries made. As long as you have enough PHP memory, you can go from 100 rows per batch to 1,000,000. Don't worry about overdoing it: before backing up each table, Akeeba Backup checks the average row size against the available PHP memory. If you aim too high, Akeeba Backup will reduce the batch size on its own. This works relatively well in most cases. It only has a problem if a table has a few rows that are way over the average row size (think of a few 10 MiB rows in a table where the average row size is 10 KiB).
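Very roughly, that check works like the sketch below. This is not the actual implementation; the ten-times safety factor and the simplistic memory_limit parsing are simplifications I am making for the example.

<?php
// Rough sketch of clamping the batch size to the available PHP memory.
// NOT Akeeba Backup's actual code; the 10x factor is an assumption and the
// memory_limit parsing only handles values like "256M".
function clampBatchSize(int $requestedBatchSize, int $avgRowSizeBytes): int
{
    $limitBytes = (int) ini_get('memory_limit') * 1048576; // e.g. "256M" -> 256 MiB
    $freeBytes  = max(0, $limitBytes - memory_get_usage(true));

    // Assume each row needs roughly 10x its raw size once it's a PHP array
    // and part of the SQL text being built.
    $rowsThatFit = (int) floor($freeBytes / ($avgRowSizeBytes * 10));

    return max(1, min($requestedBatchSize, $rowsThatFit));
}

// Asking for 1,000,000 rows per batch with a 10 KiB average row size gets
// clamped down to whatever actually fits in the remaining memory.
echo clampBatchSize(1000000, 10240), PHP_EOL;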
Even then the backup will still take a while, but the speed difference should be within 50% of what mysqldump piped to gzip would achieve on the same machine. Do note that I am measuring that speed difference running both mysqldump and Akeeba Backup from the CLI, using a Minimum execution time of 0, a Maximum execution time of 60, an Execution time bias of 75, and all settings from Disable step break before large files up to and including Set a large memory limit set to Yes. If you are taking a backup over the web it will always be much, much slower, because you have to make a lot of requests, each one adding an overhead of anywhere between 0.3 and 5 seconds. Multiply that by the few hundred requests it takes to complete the backup (for example, 400 requests with 2 seconds of overhead each is over 13 minutes lost to overhead alone) and you can see where all the time went.
Nicholas K. Dionysopoulos
Lead Developer and Director
🇬🇷 Greek: native • 🇬🇧 English: excellent • 🇫🇷 French: basic • 🕐 My time zone is Europe/Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!