
How to rsync backup a Plesk VPS to Amazon S3?

theant

New Pleskian
Hello,
I'd like to know if there is a tutorial for setting up a regular (daily) rsync backup of a Plesk VPS to Amazon S3.
Thanks
 
I have used JungleDisk to back up to Amazon S3. They were purchased by RackSpace, I think, and don't charge for transfer if you use their storage instead; storage costs are the same, though. If you are unable to find a solution, let me know and I can do a little more research. I've created scripts for backing up to Google Drive and Dropbox, and I'm sure I could adapt one of those for Amazon S3 if needed.
 
As long as you don't mind the small monthly charge, I've found JungleDisk to be rather good. The key thing is that it does de-duplication, and in a really smart way. For example, if you upload a complete server backup as a .tar.gz file one week, then do it again the next week, a great deal of those two files will be the same. JungleDisk notices this and only uploads the chunks that are different. Your S3 storage use then becomes much, much less than it might otherwise be, AND your backup times are vastly reduced.

It does hammer your disk when it is updating its internal databases, though, so you will see a bit of a load spike while it runs.

The code has not been updated for years, though, and it doesn't seem particularly high on RackSpace's priority list. Don't let that put you off - it has worked fine for me, so far!
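The chunk-level de-duplication described above can be sketched roughly like this. This is a toy illustration of the general idea, not JungleDisk's actual algorithm: real tools use content-defined chunking, and the local store path and 1 MiB chunk size here are arbitrary stand-ins.

```shell
#!/bin/sh
# Toy sketch of chunk-level de-duplication. The archive is split into
# fixed-size pieces; only pieces whose hash has never been seen before
# would need uploading.

STORE=/tmp/chunk-store     # stands in for the remote chunk store on S3
mkdir -p "$STORE"

dedup_upload() {
    archive="$1"
    workdir=$(mktemp -d)

    # Split the archive into 1 MiB pieces
    split -b 1048576 "$archive" "$workdir/chunk."

    new=0; seen=0
    for c in "$workdir"/chunk.*; do
        h=$(sha256sum "$c" | cut -d' ' -f1)
        if [ -f "$STORE/$h" ]; then
            seen=$((seen + 1))       # chunk already stored: nothing to upload
        else
            cp "$c" "$STORE/$h"      # new chunk: this is what gets uploaded
            new=$((new + 1))
        fi
    done

    rm -rf "$workdir"
    echo "new=$new seen=$seen"
}
```

Run it twice on the same archive and the second run reports every chunk as already seen, which is the effect that keeps repeated weekly backups cheap.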
 

Does it still work?
 
Yes. I did run into an odd problem recently, but it had nothing to do with JungleDisk. After restoring a backup FROM JungleDisk onto the hardware node (to /vz/backup), I found that vzarestore would not let me restore that backup, and that PVA could not even see the newly restored backup. This was caused by a fault, or oddity, in vzarestore. I was able to resolve it by specifying the backup location rather than just giving it the backup ID.

Another thing to keep in mind when restoring a backup from JungleDisk: by default it uses / for the restore and for storing its various backup databases. On my system / doesn't have much space -- most of the disk is partitioned for /vz -- so restoring very large backups failed. Again the solution was simple: I changed the default location used by JungleDisk to /vz/jungledisk by editing the JungleDisk configuration file.
 
Thanks. I found something else: s3cmd. My problem now is how to restore the entire Linux server if I back up the whole server with it.
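A daily s3cmd upload can be sketched along these lines. This is a minimal sketch, not a tested production script: it assumes s3cmd is installed and configured (`s3cmd --configure`), the bucket name is hypothetical and would need creating first with `s3cmd mb`, and `/var/lib/psa/dumps` is only the usual Plesk dump location on my installs -- adjust for yours.

```shell
#!/bin/sh
# Sketch: archive a backup directory and push it to S3 with s3cmd.
# Assumes s3cmd is configured and the (hypothetical) bucket already exists:
#   s3cmd mb s3://my-plesk-backups

BUCKET=s3://my-plesk-backups

backup_to_s3() {
    src="$1"                               # directory to back up, e.g. /var/lib/psa/dumps
    stamp=$(date +%Y-%m-%d)
    archive="/tmp/plesk-backup-$stamp.tar.gz"

    # Pack the backup directory into a single compressed archive
    tar -czf "$archive" -C "$src" .

    # Upload it (guarded so the sketch can be dry-run where s3cmd isn't installed)
    if command -v s3cmd >/dev/null 2>&1; then
        s3cmd put "$archive" "$BUCKET/"
    fi

    echo "$archive"    # caller can rm -f it after a successful upload
}
```

For the daily schedule, a cron entry such as `0 3 * * * /root/backup-to-s3.sh` would run it each night. Note this only gets the files back: pulling an archive down with `s3cmd get` restores data, but rebuilding an entire server from it still means reinstalling the OS and Plesk first, which is why people pair it with Plesk's own backup/restore or an image-level backup.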
 