
Issue: Complete server backup takes a really long time

user8374

New Pleskian
Hello,

I migrated my whole server to a new one...

Changes:
+ new server has a better/newer CPU with more cores (4 -> 8)
+ new server has more RAM (double) (32 GB DDR2 -> 64 GB DDR4)
+ new server has a better connection (100 Mbit -> 1 Gbit)
+ new server has better/bigger hard drives (1 TB RAID 0 -> 4 TB RAID 0 / 250 GB SSD)
+ new server has Ubuntu (Debian 7 -> Ubuntu 18.04)
+ new server has the newest Plesk Onyx

Problem:

After migrating everything to the new server, I set up a backup plan that runs every night, like on the old server. One backup is saved locally and one is uploaded to an S3 space. On the old server the backup started at 1:30 am and was completed and uploaded between 8 and 9 am. On the new server the backup takes ages: the local backup only finishes after 14-16 hours, and after that it does not seem to be uploaded to the S3 space at all. In the Backup Manager the circle keeps spinning, but nothing is uploaded, and according to the logs the upload is never even started. I already tried reducing the backup size by 50%, but it does not help.

What is going on? The new server is a lot faster, but the backup takes over 5 times longer.

Can someone help?
 
Hello

Well, RAID 0 is very brave of you; I would have picked RAID 1, RAID 6, or RAID 10.

If one disk dies in RAID 0, all your data is gone.

RAID 1 is mirroring, so it writes the same info to all disks.

What RPM are the 4 TB drives?
 
Are you using the same S3 bucket? Or at least the same Availability Zone?
 

Sorry, about the RAID: the new server has RAID 1; the old one had only RAID 0 because I needed the space, and back when it was bought, 1 TB was the biggest hard drive available.

Both the old and the new server have 7200 RPM hard drives. The only difference is that the old one has a SAS interface and the new one SATA. 15000 RPM would be better, I know, but that is not the problem.

I checked with hdparm and dd. The new server has cached reads of around 500 MB/s, the old one around 300 MB/s. Both have around 160 MB/s direct read and 100-120 MB/s write speed.
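
For reference, roughly how I got those numbers (the device and file paths are just examples, not necessarily the real ones):

Code:
# cached reads vs. buffered reads from the disk itself
hdparm -T /dev/sda
hdparm -t /dev/sda

# sequential write speed: write a 1 GB test file and flush it before reporting
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest.bin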


The S3 bucket does not seem to be the problem; the upload of the file takes around 2 hours. According to the logs, it is the creation of the tar archive that takes so long. After 2 days the backup is complete, so there are no errors. When I tar the domain folder manually, it is a lot faster ...
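
The manual test was something like this (paths are just examples, assuming the default Plesk vhosts location):

Code:
# archive one domain folder by hand and time it, to compare with the backup task
time tar -czf /tmp/example.com-manual.tar.gz -C /var/www/vhosts example.com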

I changed a few settings today:

+ the Backup Manager no longer checks whether enough free disk space is available before the backup
+ running the backup with low priority is disabled

Maybe it is faster now. These two options were the only ones I found that could explain why it is so slow.



What I really don't understand are these log entries:

Code:
[2019-04-09 02:45:31.905|28139] INFO: Exclude file 'magento-mirror-magento-1.9/skin/frontend/rwd/default/scss/.sass-cache/ffb6964d1c72515030ae7081fc679e8d12a0fa57/_var.scssc' according exclude pattern '^[^/]*cache[^/]*/|^[^/]*cache[^/]*$|/[^/]*cache[^/]*/|/[^/]*cache[^/]*$'

[2019-04-09 02:45:31.905|28139] INFO: Exclude file 'magento-mirror-magento-1.9/skin/frontend/rwd/default/scss/.sass-cache/ffb6964d1c72515030ae7081fc679e8d12a0fa57/scaffold-forms.scssc' according exclude pattern '^[^/]*cache[^/]*/|^[^/]*cache[^/]*$|/[^/]*cache[^/]*/|/[^/]*cache[^/]*$'

[2019-04-09 02:45:32.295|28139] INFO: pmm-ras finished. Exit code: 0

[2019-04-09 06:03:42.120|23793] INFO: The utility succesfully executed.

[2019-04-09 06:03:42.170|23793] INFO: Export file domains/.../backup_1904090141.tar

[2019-04-09 06:03:42.170|23793] INFO: Create directory /var/lib/psa/tmp/pmm-de-tmp-repo-uOmxnU/domains/..

According to the log, nothing happens for more than 3 hours (between 02:45 and 06:03).
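
To figure out where that time goes, the backup process can be watched live during that window, for example (the PID is just a placeholder; iotop may need to be installed first):

Code:
# find the backup worker processes while the nightly task is running
ps aux | grep -E 'pmm-ras|tar' | grep -v grep

# check whether the box is busy with disk I/O or CPU at that moment
iotop -oPa
top

# optionally attach to one worker PID to see which syscalls it is waiting in
strace -f -p <PID> -e trace=read,write,openat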
 