Ok, shoot me, I'm a scripting newbie, but I wanted to share a script I created and use to back up my server-side backups to Amazon S3 and remove them from the server afterwards.
#!/bin/sh
###################################
###                             ###
###      BACKUP FOR DOMAIN      ###
###                             ###
###################################
DOMAIN=put-here-a-client-name
# $ENVIRONMENTAL VALUE: use basedir for the backups (note my master reseller is called 'beheer')
DATADIR=/var/lib/psa/dumps/resellers/beheer/clients
# $ENVIRONMENTAL VALUE: save logfile
LOGS="/var/lib/psa/backupscript/log`date +_%d%m`.log"
# $ENVIRONMENTAL VALUE: set to xx days, so backups older than that get removed; run daily and modify after testing
REMOVE=$(python s3cmd ls s3://bucketname | grep `date --date='14 days ago' +%d%m%Y` | cut -f 4 -d '/')
# first backup the new stuff
python s3cmd put -r --progress -v $DATADIR/$DOMAIN/domains/ s3://bucketname/"$DOMAIN"_`date +%d%m%Y`/ >> $LOGS
echo "$REMOVE will be deleted"
# delete the object older than (xx) days defined
python s3cmd del s3://bucketname/$REMOVE/*
rm -rf $DATADIR/$DOMAIN/domains/*
In this script I first define the environment variables, so you only need to enter the correct path and 'client' name.
Then the content on your server is put into the bucket on S3, the old backup is removed from S3, and the actual backup is deleted from your server to save space.
Note: I probably need to run a 'sync' after the put as a kind of validation, to make sure all data was uploaded correctly; see the sketch below.
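Something like this could work as that check (just a sketch, untested, assuming the same bucket and date layout as in the script above): s3cmd sync compares the local files against what is already in the bucket and re-uploads anything missing or different, so running it right after the put acts as a rough verification pass.
# sketch: re-check the upload with sync (assumes same bucket/date layout as above)
python s3cmd sync $DATADIR/$DOMAIN/domains/ s3://bucketname/"$DOMAIN"_`date +%d%m%Y`/ >> $LOGS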
There is no if/then/else statement, so the backup content on the server is ALWAYS removed, even when the upload fails ..
For a first script it's a nice try .. I know the bucket name could also be a variable, but I did not choose that (yet); a possible fix for both points is sketched below.
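One way to handle both (just a sketch, not tested on my server): make the bucket a variable and wrap the put in an if, so the local dump is only wiped when s3cmd exits cleanly.
BUCKET=s3://bucketname
BACKUPDIR=$DATADIR/$DOMAIN/domains
# only wipe the local dump when the upload exits with status 0
if python s3cmd put -r --progress -v "$BACKUPDIR/" "$BUCKET/${DOMAIN}_`date +%d%m%Y`/" >> $LOGS 2>&1; then
    rm -rf "$BACKUPDIR"/*
else
    echo "upload failed, keeping local backup" >> $LOGS
fi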
1. my server has 250 GB of disk
2. my server is connected to a 10 MB/s line
3. my server holds around 50 GB of content right now, and 3 domains account for 2/3 of all of it
4. an upload of around 5 GB to Amazon takes around 90 minutes
That is why, for my situation (a private server), I created 3 clients for the 3 biggest domains
and created some other clients for the other 20 domains (I have around 25 domains on the server right now),
so sending 20 GB would take me some 5 hours (roughly) .. that is why I split things up and do not do a total server backup.
Everything is put into several client_date folders on Amazon, which can be deleted after x days (note: if the backup run misses the date, it will not delete the content of the older folder; see the sketch below) ..
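A way around that missed-date problem (again just a sketch, untested): instead of grepping for one exact date, loop over every DOMAIN_ddmmyyyy folder in the bucket and delete any folder whose date is older than the cutoff. This assumes GNU date and that your s3cmd version supports del --recursive; if it does not, the wildcard form from the script above can be used instead.
CUTOFF=`date --date='14 days ago' +%s`
python s3cmd ls s3://bucketname/ | cut -f 4 -d '/' | while read PREFIX; do
    STAMP=${PREFIX##*_}                      # ddmmyyyy part of DOMAIN_ddmmyyyy
    DAY=`echo $STAMP | cut -c1-2`
    MONTH=`echo $STAMP | cut -c3-4`
    YEAR=`echo $STAMP | cut -c5-8`
    # skip anything that does not parse as a date
    EPOCH=`date --date="$YEAR-$MONTH-$DAY" +%s 2>/dev/null` || continue
    if [ "$EPOCH" -lt "$CUTOFF" ]; then
        python s3cmd del --recursive "s3://bucketname/$PREFIX/"
    fi
done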
Any updates to the script are appreciated ;-)