
Issue Website Growing In Disk Space 2GB Every Day!

Lee8

New Pleskian
Hi All,

I have a VPS currently running Plesk Onyx 17.5.3 on a Linux server with CentOS 7.3.1611, and have around 40 sites hosted there.

In the last 7 days or so, one site in particular burst past its resource limit (4 GB for storage) and I received a notification email.

Since then every day I have been getting the same email notification and the site is growing at a rate of around 2GB per night.

All sites are WordPress, and we run daily updates to ensure that they are all up to date with their core, themes and plugins.

My first thought was that the growth may have been caused by some code in an update, so I rolled the site back around 15 days, to before the first email, but it is still growing.

There is no obvious trace of a hack in either the front end or the file structure of the site, and we run multi-layer security to avoid this as much as possible.

This problem affects only this single site, which now exceeds 16 GB (against a 4 GB limit) and is growing daily.

Has anyone encountered this issue and know what may be causing it?

Is there a way to identify where the size growth is coming from?

Thanks in advance

Lee
 
Could it be a backup routine that runs at night? Sometimes customers use backup plugins in WordPress that gzip the whole WordPress site and create an archive directory under the site's home directory. You can find where the big files are by running
# du
from your FTP home directory. If it is not a backup in the customer's home directory, it could be a backup created by the Plesk backup manager that is stored locally. Some users create full backups and do not realize that backups consume disk space. Then you should check the log files, too. Maybe errors are occurring, or bad bots are downloading thousands of files each night? That can cause log files to grow rapidly.
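Once you have SSH access, combining du with sort makes the space hog stand out immediately. A minimal sketch; the throwaway sample directory below is just a stand-in so the pipeline can be tried safely, and on the server you would run the same pipeline from the domain's home directory instead:

```shell
# Build a throwaway sample tree; on a real server, cd into the
# domain's home directory and run the du pipeline from there.
sample=$(mktemp -d)
mkdir -p "$sample/backups" "$sample/httpdocs"
head -c 1048576 /dev/zero > "$sample/backups/site-backup.tar.gz"  # ~1 MB "backup archive"
head -c 4096    /dev/zero > "$sample/httpdocs/index.php"          # small file

# One human-readable total per directory, largest first.
du -sh "$sample"/*/ | sort -rh
```

The first line of the output points straight at the directory eating the space.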
 
Hi Peter,

Apologies, I should have made that clear: we run daily backups, but via the Dropbox extension, so as far as I am aware no backups are stored locally, either by plugins or by Plesk.

We manage all of the sites centrally so there are no plugins installed by the clients.

Looking at our security software we can see a ton of 404 errors, which I presume is a DDoS on that site, so would that cause the logs to grow? I have checked the log size but am unsure what it actually is, as our SFTP software doesn't specify the units; it just shows 8 digits.

I am unsure what you are suggesting with # du. Is that a command to run over SSH, or is there a file with that name?

Do you know whether the zipped-up error and access logs can be deleted?

Thanks for your answers so far, much appreciated.

Lee


 
# du
is the "disk usage" Linux command. It walks the directory tree starting from where you call it and displays the disk space occupied by each file and directory, plus subtotals and a grand total. When you use it (on the Linux console, via SSH), omit the "#"; that only symbolizes the prompt.

The zipped error log archives can be deleted without any impact.

On the console you can see the exact log file sizes by
# cd logs
# ls -la
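To see how much space the rotated (gzipped) log archives take in total, find combined with du -ch works well. A sketch against a sample directory; on the server you would point find at the domain's real logs directory, and the file names below are only illustrative (Plesk's rotation naming can differ):

```shell
# Sample logs directory standing in for the domain's real logs directory.
logs=$(mktemp -d)
head -c 2048 /dev/zero > "$logs/access_log"
head -c 1024 /dev/zero > "$logs/error_log.1.gz"
head -c 1024 /dev/zero > "$logs/error_log.2.gz"

# Per-file sizes plus a grand total for all compressed archives.
find "$logs" -name '*.gz' -exec du -ch {} +

# Once you are sure they are no longer needed, the archives can be removed:
# find "$logs" -name '*.gz' -delete
```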

The Dropbox extension stores temporary files on your local disk before uploading them to Dropbox. I remember several other forum cases where the extension was causing trouble, for example this one, which could be similar to your case: https://talk.plesk.com/threads/pppm-5475-dropbox-backup-plugin.340630/
 
When examining disk usage on a "per domain" level use:
# du /var/www/vhosts/exampledomainname.com
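The same idea extends to comparing all domains at once. A sketch with a simulated layout; on a real server the directory would be /var/www/vhosts, and the domain names below are placeholders:

```shell
# Simulated vhosts layout; on a real server this is /var/www/vhosts.
vhosts=$(mktemp -d)
mkdir -p "$vhosts/big.example.com" "$vhosts/small.example.com"
head -c 524288 /dev/zero > "$vhosts/big.example.com/dump.sql"    # ~512 KB
head -c 1024   /dev/zero > "$vhosts/small.example.com/index.html"

# One total per domain, largest first, so the problem site tops the list.
du -sh "$vhosts"/*/ | sort -rh
```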
 
Hi Peter

Thank you for your explanation. I have run the du command, and it seems my particular issue might be with the object cache of a particular plugin. Each file line only has a marker of between 4 and 12 (KB or MB, whichever it is), but there are literally thousands of lines and it is still printing them. So I am going to start by disabling that part of the cache plugin to see if that stops the growth, and I will also delete the existing object cache files to see if I am back down to a proper size tomorrow.
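For a directory full of thousands of small cache files like that, it is easier to count them and take one total than to scroll through du's per-file output. A sketch; the sample directory stands in for the plugin's cache, whose real path depends on the plugin (often somewhere under wp-content/cache, but that is an assumption):

```shell
# Sample directory standing in for the plugin's object cache;
# the real path depends on the caching plugin in use.
cache=$(mktemp -d)
i=1
while [ "$i" -le 500 ]; do
    head -c 8192 /dev/zero > "$cache/obj_$i.php"
    i=$((i + 1))
done

# How many files, and how much space do they use in total?
find "$cache" -type f | wc -l
du -sh "$cache"
```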

I will also look into your other suggestions in the meantime to educate myself a bit further.

Thanks again

Lee


 