
Server-side backup to Amazon S3

zeroday

Basic Pleskian
Ok, shoot me, I am a scripting newbie, but I wanted to share a script I created and use to back up my server-side backups to Amazon S3 and remove them from the server afterwards.

#!/bin/sh

###################################
###                             ###
###      BACKUP FOR DOMAIN      ###
###                             ###
###################################

DOMAIN=put-here-a-client-name

# $ENVIRONMENTAL VALUE: use basedir for the backups (note my master reseller is called 'beheer')
DATADIR=/var/lib/psa/dumps/resellers/beheer/clients

# $ENVIRONMENTAL VALUE: save logfile
LOGS="/var/lib/psa/backupscript/log`date +_%d%m`.log"

# $ENVIRONMENTAL VALUE: value to xx days, so keep earlier days, can be used daily and modified after testing
REMOVE=$(python s3cmd ls s3://bucketname | grep `date --date='14 days ago' +%d%m%Y` | cut -f 4 -d '/')

# first backup the new stuff
python s3cmd put -r --progress -v $DATADIR/$DOMAIN/domains/ s3://bucketname/"$DOMAIN"_`date +%d%m%Y`/ >> $LOGS

echo $REMOVE date will be deleted

# delete the object older than (xx) days defined
python s3cmd del s3://bucketname/$REMOVE/*

rm -rf $DATADIR/$DOMAIN/domains/*

In this script I first define the environment variables, so you only need to enter the correct path and 'client' name.
Then the content on your server is put into the bucket on S3, the old backup is removed from S3, and the local backup is deleted from your server to save space.

Note: I probably need to use a 'sync' option after the put to verify that all data was uploaded correctly (a kind of validation).
There is no if/then/else statement, so the backup content on the server is ALWAYS removed, even when the upload fails.
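Something like the guard below would address that; it is only a minimal sketch using the same placeholder bucketname and variables as above, relying on s3cmd's exit status to decide whether the local copy gets removed:

# only remove the local copy when the upload reports success
if python s3cmd put -r --progress -v $DATADIR/$DOMAIN/domains/ s3://bucketname/"$DOMAIN"_`date +%d%m%Y`/ >> $LOGS
then
    rm -rf $DATADIR/$DOMAIN/domains/*
else
    # keep the local backup and note the failure in the log
    echo "upload failed, keeping local backup" >> $LOGS
fi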

For a first script it's a nice try. I know the bucket name could also be a variable, but I did not go for that (yet).
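If it were a variable, it would just be one extra line at the top, and every hard-coded s3://bucketname would become $BUCKET (sketch only, bucketname is still a placeholder):

BUCKET=s3://bucketname
# for example the delete line from above would then read:
python s3cmd del $BUCKET/$REMOVE/*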

1. my server has 250 GB of disk space
2. my server is connected to a 10 MB/s line
3. my server has around 50 GB of content right now, where 3 domains have 2/3 of all data content
4. upload of around 5GB takes around 90 minutes to Amazon

that is why, for my situation (private server), I created 3 clients for the 3 biggest domains
and created some other clients for the other 20 domains (I have around 25 domains on the server right now).

so sending 20 GB will take me some 5 hours (roughly) .. that is why I split the data up and do not do a total server backup.
The data is put into several client_date folders on Amazon, which can be deleted after x days (note: if the backup misses the date, it will not delete the content of the older folder; see the sketch below).
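Something like the loop below could cover that gap; it is only a sketch (same placeholder bucketname, and the 14-20 day window is arbitrary) that sweeps a range of old dates instead of one exact day:

# sweep a window of old dates so a missed run does not leave a folder behind forever
for AGE in 14 15 16 17 18 19 20
do
    for OLD in $(python s3cmd ls s3://bucketname | grep `date --date="$AGE days ago" +%d%m%Y` | cut -f 4 -d '/')
    do
        python s3cmd del s3://bucketname/"$OLD"/*
    done
done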

any updates to the script are appreciated ;-)
 
I've modified the script, as keeping the server-side backups and this transfer part separate was not a good choice.


#!/bin/sh
CLIENT="put-here-the-client-name"            # client whose domains get backed up
EXCLUDE=""                                   # domains to skip, can be empty, multiple domains separated with ,
OBJECT="put-here-one-domain-of-the-client"   # need 1 domain from the client to check if the backup is ok
BASEDIR=/var/lib/psa/backupscript
BASEDIROBJECT=/var/lib/psa/dumps/.discovered/*_
# beheer = name of reseller in my setup
DATADIR=/var/lib/psa/dumps/resellers/beheer/clients
LOGS="/var/lib/psa/backupscript/log`date +_%d%m`.log"
BACKUPFILE=backup*.xml

# First backup the domains of client and split the files and gzip it
/usr/local/psa/bin/pleskbackup --clients-name $CLIENT --exclude-domain=$EXCLUDE -s


cd /$BASEDIR
file="$BASEDIROBJECT$OBJECT"

### bash check if file exists

if [ -f $file ];

### first backup the new stuff

#change [bucketname] into the bucketname of Amazon S3
then python s3cmd put -r $DATADIR/$CLIENT/domains/ s3://[bucketname]/"$CLIENT"_`date +%d%m%Y`/
rm -rf $BASEDIROBJECT$OBJECT
touch "$BASEDIR"/logs/"$CLIENT"_backup-transferred`date +_%d%m%Y`.txt

else echo "nothing to backup"

fi

echo removing files in $DATADIR/$CLIENT
rm -rf $DATADIR/$CLIENT/*
rm -rf $DATADIR/$CLIENT/.??*

this script successfully backs up 7 clients with around 20 domains and around 18 GB to Amazon daily

I do not use the web-based backup function anymore, as some backups were delayed big time and messed up the crontab schedule for the Amazon transfer .. so putting it all in one script was easier.
;'-)
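For reference, the per-client scripts are started from cron roughly like this; the times and script names below are just examples, not my actual crontab:

# stagger the client scripts so the uploads do not overlap
30 1 * * * /var/lib/psa/backupscript/backup_client1.sh
30 3 * * * /var/lib/psa/backupscript/backup_client2.sh
30 5 * * * /var/lib/psa/backupscript/backup_client3.sh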
 