Support

Akeeba Backup for Joomla!

#42646 Separator for jpa name

Posted in ‘Akeeba Backup for Joomla! 4 & 5’
This is a public ticket

Everybody will be able to see its contents. Do not include usernames, passwords or any other sensitive information.

Environment Information

Joomla! version
n/a
PHP version
8.4
Akeeba Backup version
n/a

Latest post by RobertG on Tuesday, 20 January 2026 09:13 CST

RobertG

EXTREMELY IMPORTANT: Please attach a ZIP file containing your Akeeba Backup log file in order for us to help you with any backup or restoration issue. If the file is over 10MiB, please upload it on your server and post a link to it

Hi,

I'm using a PHP script to make a copy of the backup files to a remote server (Swiss Backup), in addition to the copy that Akeeba Backup Pro makes to an external server (PlanetHoster N0C Storage). I then want to delete the older files on this second server.

I need to retrieve the date of the JPA files on the Swiss Backup server in order to delete the old ones. On that server, the file date and time correspond to when the copy was made, not to when the backup was created.

By default, the archive name is defined as site-[HOST]-[DATE]-[TIME_TZ]-[RANDOM]. I split on "-" to get [DATE] (the third field from the end), but I discovered that sometimes a hyphen appears inside the random field, and sometimes the name doesn't use that field at all.

What separator could I use instead of the hyphen, one that never appears in the random field, so that splitting the name gives me a correct array and the creation date of the backup?

Thank you for your advice.

Regards,
Robert

 

nicholas
Akeeba Staff
Manager

You are thinking about this the wrong way.

Your backup archives have a name similar to this: site-www.example.com-20260120-123456eest-random.jpa.

The prefix is always the same: site-www.example.com-.

The date and time are always in the following format: YYYYMMDD-HHmmssT- where YYYY is a four digit year, MM is a two digit month, DD is a two digit day of the month, HH is a two digit hour of the day in 24-hour format, mm is the two digit minutes, and ss is the two digit seconds. The timezone (T) is of variable length, but it is always followed by a dash.

Finally, the random suffix always has the exact same length: 16 characters.

Armed with that information you can remove the fixed-length prefix and suffix, which leaves you with a date in the YYYYMMDD-HHmmssT format. This can easily be converted to a date and time.
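
As an illustration, here is a minimal PHP sketch of that approach, assuming the random suffix is present and the site prefix is known; the sample file name is illustrative only, and the resulting time is in the backup profile's timezone rather than UTC:

<?php
// Minimal sketch: derive the backup creation time from an archive name by
// stripping the fixed prefix and the 16-character random suffix, as described
// above. The sample name and prefix are illustrative.

$prefix   = 'site-www.example.com-';
$filename = 'site-www.example.com-20260120-123456eest-0123456789abcdef.jpa';

$base  = pathinfo($filename, PATHINFO_FILENAME);  // drop any directory and the extension
$stamp = substr($base, strlen($prefix));          // drop the fixed prefix
$stamp = substr($stamp, 0, -17);                  // drop "-" plus the 16 random characters

// $stamp is now "20260120-123456eest"; keep only the digits of each part,
// since the trailing timezone letters are of variable length.
[$date, $time] = explode('-', $stamp, 2);
$created = DateTime::createFromFormat('Ymd His', $date . ' ' . substr($time, 0, 6));

echo $created->format('Y-m-d H:i:s'), PHP_EOL; // 2026-01-20 12:34:56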

If you are managing the files manually, e.g. over SFTP, you can list the files ordered by filename descending; this will place them in reverse chronological order (newest first, oldest last) thanks to what I explained above.
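
A minimal PHP sketch of that idea, assuming the remote directory listing has already been fetched (e.g. over SFTP) into an array of names; the file names and the retention count are illustrative:

<?php
// Minimal sketch of "sort by file name, newest first". The listing itself is
// assumed to have been fetched elsewhere; names and $keep are illustrative.

$keep  = 2; // how many of the newest archives to keep in this tiny example
$files = [
    'site-www.example.com-20260118-020001eet-aaaaaaaaaaaaaaaa.jpa',
    'site-www.example.com-20260120-020002eet-bbbbbbbbbbbbbbbb.jpa',
    'site-www.example.com-20260119-020003eet-cccccccccccccccc.jpa',
];

// Keep only this site's archives, then sort descending by name. Thanks to the
// fixed prefix and the YYYYMMDD-HHmmss stamp this is also reverse
// chronological order: newest first, oldest last.
$archives = array_values(array_filter($files, fn ($f) => str_starts_with($f, 'site-www.example.com-')));
rsort($archives, SORT_STRING);

// Everything after the first $keep entries is older and could be deleted.
print_r(array_slice($archives, $keep)); // the 20260118 archive in this example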

If you want to automatically manage the removal of old, remotely stored backup archive files, you can of course use the Remote Quotas setting in Akeeba Backup.

If you prefer to run a shell script, you can adapt something like the following (it lists the files in reverse chronological order and removes those older than 7 days; note that the www.example.com domain is hardcoded into the script):

#!/bin/bash

# Directory containing the files (change this to your directory)
DIR="/path/to/your/directory"

# Current date and time (for comparison)
current_date=$(date +%s)

# Function to extract the timestamp from the filename and convert to seconds since epoch
get_timestamp() {
    local filename=$1
    # Extract the YYYYMMDD-HHmmssT part (e.g., 20260120-123456eest)
    local timestamp_part=$(echo "$filename" | grep -oP 'site-www\.example\.com-\K[^-]+-[^-]+')
    # Convert to seconds since epoch; the digits are YYYY MM DD at offsets 0, 4, 6
    # and HH mm ss at offsets 9, 11, 13 (the dash sits at offset 8).
    date -d "${timestamp_part:0:4}-${timestamp_part:4:2}-${timestamp_part:6:2} ${timestamp_part:9:2}:${timestamp_part:11:2}:${timestamp_part:13:2}" +%s 2>/dev/null || echo "0"
}

# Find all matching files, sort by filename (newest first), and list them
find "$DIR" -type f -name 'site-www.example.com-*-*.j*' | \
    grep -E '^.*-[0-9]{8}-[0-9]{6}[a-zA-Z]+-[0-9a-zA-Z.-]{16}\.(jpa|j01|j02|j[0-9]{2})$' | \
    sort -r | \
    while read file; do
        echo "$file"
    done

# Find files older than 7 days (based on filename timestamp) and delete them
find "$DIR" -type f -name 'site-www.example.com-*-*.j*' | \
    grep -E '^.*-[0-9]{8}-[0-9]{6}[a-zA-Z]+-[0-9a-zA-Z.-]{16}\.(jpa|j01|j02|j[0-9]{2})$' | \
    while read file; do
        timestamp=$(get_timestamp "$file")
        # Calculate days difference (7 days = 604800 seconds)
        days_diff=$(( (current_date - timestamp) / 86400 ))
        if [ "$days_diff" -gt 7 ]; then
            echo "Deleting: $file"
            rm "$file"
        fi
    done

Finally, I would like to kindly note that it's easier for me to reply to tickets if you tell me what you want to do, not how you are trying to do it. Reading your ticket, I am unsure whether your problem is that you are not aware of remote quotas, that you did not think about sorting by name descending, that you don't know how to write a shell or PHP script to remove old files, or something else entirely which I cannot divine just by knowing that you're somehow confused about file naming.

Nicholas K. Dionysopoulos

Lead Developer and Director

🇬🇷Greek: native 🇬🇧English: excellent 🇫🇷French: basic • 🕐 My time zone is Europe / Athens
Please keep in mind my timezone and cultural differences when reading my replies. Thank you!

RobertG

Thank you, Nicholas,

As I said, I make a copy of the files on an external server, where the file creation date is replaced by the date of the copy. I'm using an SFTP connection.

At the end of the script, I delete the old files, and I can't use ftp_mdtm to find out their dates.

If the random suffix always has the same length, that helps. But on some sites the random suffix is absent, so I need to handle those cases too.
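
One way to handle both cases is to anchor on the date and time pattern itself rather than splitting on hyphens. A minimal PHP sketch, assuming the archive names follow the format described earlier; the helper name backupTimestamp and the sample file names are illustrative only:

<?php
// Minimal sketch for names where the [RANDOM] part may be absent, or may
// itself contain hyphens: anchor on the date/time pattern instead of
// splitting on "-". Helper name and sample names are illustrative.

function backupTimestamp(string $filename): ?DateTime
{
    // Match "-YYYYMMDD-HHmmss" followed by the variable-length timezone letters,
    // then either the next "-" (random suffix present) or the "." of the extension.
    if (!preg_match('/-(\d{8})-(\d{6})[A-Za-z]*[-.]/', basename($filename), $m)) {
        return null;
    }

    return DateTime::createFromFormat('Ymd His', $m[1] . ' ' . $m[2]) ?: null;
}

// Works with or without the random suffix:
foreach ([
    'site-www.example.com-20260120-123456eest-0123456789abcdef.jpa',
    'site-www.example.com-20260120-123456eest.jpa',
] as $name) {
    $created = backupTimestamp($name);
    echo $name, ' => ', $created ? $created->format('Y-m-d H:i:s') : 'no timestamp', PHP_EOL;
}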

Regards,

Robert

Support Information

Working hours: We are open Monday to Friday, 9am to 7pm Cyprus timezone (EET / EEST). Support is provided by the same developers writing the software, all of which live in Europe. You can still file tickets outside of our working hours, but we cannot respond to them until we're back at the office.

Support policy: We would like to kindly inform you that when using our support you have already agreed to the Support Policy which is part of our Terms of Service. Thank you for your understanding and for helping us help you!