
Issue: Plesk backup issue - taking too long to list files of a particular website

VirtualHorror

New Pleskian
Server operating system version
CentOS 7.9
Plesk version and microupdate number
18.0.49
Hello,

I seem to be having a peculiar issue with the Plesk Backup Manager. I am not certain exactly when it started, but about 2 months ago I noticed that the backups were taking much longer than usual to complete (about 14-18 hours). I ignored it at the time since the backups were still completing successfully, even if they took long.
Yesterday, while looking at the Backup Manager, I noticed there was a gap of 1 day between full backups, even though they are scheduled to run at 00:45 every day and retain 7 days of backups, as you can see in the screenshot.
[Screenshot: 1.jpg]

Scenario:
I opened the backup telemetry to check what was happening and was surprised to see that the longest step (almost 3 hours) was listing the files of one particular website. This isn't normal: it didn't take this long before the issue started, and absolutely no configuration changes have been made to that website, which I manage personally. None of the other websites on the server take anywhere near as long to list files.
[Screenshots: 2.jpg, 5.png, 4.jpg]

I noticed that even after it had checked the free space and listed the files, archiving the files for that website once again took a very long time (10-11 hours; I am not sure of the exact figure, but I am presuming so based on the total backup time mentioned earlier). Please note: in the screenshot below, the archive creation process was still running and was not completed within the 2h 33m shown.
[Screenshot: 3.jpg]

I went on to investigate that particular website and found that it has a fairly low file count as well as low disk space usage (~500 MB), as you can see in the screenshot below, BUT the disk space manager was not able to list the files/disk usage of the
Code:
whmcs-php-temp
folder, while the disk space of all other folders was listed. I thought this was unusual. (Yes, this website runs WHMCS.)
I tried to access the folder through the Plesk File Manager and tried to list its files through SSH and FTP, to no avail.
I then remembered that this folder was created to store the temporary PHP session files of WHMCS.
[Screenshots: 7.jpg, 9.jpg]

I then stopped the backup, thinking the whmcs-php-temp folder was the issue, as it contained a lot of files and took a long time to load.
I then created a new backup job excluding that particular folder, but it didn't help: it is still running now...

SERVER SPECIFICATIONS:
CPU Model: Intel Xeon E5-2690 v3 @ 2.60GHz
Cores: 8
RAM: 8 GB
Swap RAM: 6 GB
Disk: 1 TB RAID 10 HDD
Network speed: 2 Gbps

I have confirmed that during backups the I/O, CPU usage, RAM, and storage are within normal limits (~30%), so I am confident this has nothing to do with server resources or hardware.
I have also confirmed that the speed between our Plesk server and the backup server is about ~1.2 Gbps most of the time.

Plesk Scheduled Backup configuration:
Scheduled backup task time: 00:45 everyday
Backup retention: 7 full backups
Backup type: Full backup
Backup content: Configuration, Mail messages, User files and databases
Store in: FTP storage
Maximum number of simultaneously running scheduled backup processes: 3

Run all backup processes with low priority
Priority: 7
IO Priority: 5
Compression level: Fast
Start the backup only if your server has the specified amount of free disk space (in megabytes) : 2048
 

I've only seen similar symptoms when there is a huge number of files, e.g. >200,000, in some folder or subfolder. You write that the file count is "fairly low", but how did you count that, and what do you consider "low"?

Another potential issue could be that the max number of concurrent backup jobs is set to "1" in the backup configuration of the server. In that case, set it to at least "2".
 
I've only seen similar symptoms
So it's not only me experiencing this issue?
Are there any frequently suggested solutions for this type of issue?

You write that the file count is "fairly low", but how did you count that, and what do you consider "low"?
I have manually gone into every single directory of that website, except the whmcs-php-temp folder (which I am not able to load through any method), to check the number of files.
Although I have not counted the exact number of files, I can confirm that it is lower than 70,000.
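If it helps, the rough approach I would use to count files over SSH while skipping that unreadable folder (the domain path below is only a placeholder) is something like:
Code:
# Count regular files in the vhost, pruning the whmcs-php-temp directory (placeholder path)
find /var/www/vhosts/example.com -path '*/whmcs-php-temp' -prune -o -type f -print | wc -l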

Another potential issue could be that the max number of concurrent backup jobs is set to "1" in the backup configuration of the server. In that case, set it to at least "2".
It is currently set to a maximum of 3 simultaneously running scheduled backup processes.
 
I have manually gone into every single directory of that website, except the whmcs-php-temp folder (which I am not able to load through any method), to check the number of files.
Although I have not counted the exact number of files, I can confirm that it is lower than 70,000.
Are you sure about that? 70k files can be listed in a matter of 1-2 minutes on even the slowest systems/hard disks I know, and on faster servers this takes a second or less.
To me it sounds like you have millions of files (possibly all zero bytes or only a few bytes in size) in at least one subdirectory of that site.

I recommend installing the "ncdu" utility, then running "ncdu /path/to/this/website" and letting it run as long as it takes (it may also take several hours, like the backup does).
After that you can browse through the directory tree of this website, and it will show you the used disk space as well as the number of files in every directory.
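On CentOS 7 the package is typically available from the EPEL repository; a minimal sketch (the domain path is a placeholder):
Code:
# Install ncdu from EPEL (assumes EPEL can be enabled on this CentOS 7 server)
yum install -y epel-release
yum install -y ncdu

# Scan the vhost; replace example.com with the real domain directory
ncdu /var/www/vhosts/example.com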
 
Are you sure about that? 70k files can be listed in a matter of 1-2 minutes on even the slowest systems/hard disks I know, and on faster servers this takes a second or less.
Except for the whmcs-php-temp directory, I am sure that the rest of the directories combined have fewer than 70k files.

I just installed and ran the ncdu utility you suggested (it's still running) and found that the actual website files (excluding the whmcs-php-temp folder) come to approximately 600 MB and about ~30,000 files, if I am reading it correctly.

As you suggested, I seem to have millions of files (possibly all zero bytes or only a few bytes in size) in the whmcs-php-temp subdirectory of that site. But I don't understand why the backup was taking so long even when I had put that folder in the exclude list for both the scheduled backup and the manual backup, which I started after stopping the scheduled one since it was taking too long.

The current progress of the ncdu utility is:
Total items: 1,141,800
Size: 4.9 GB
Note: After roughly the first 30,000 files, everything I could see being counted was coming from the whmcs-php-temp directory.

I'll post an update once it's complete
 
[Screenshot: Screenshot_20230106_145154_Termius.jpg]

Just checked on the status of the ncdu utility and it has completed.

As I expected, most of the files were in the whmcs-php-temp folder, which is consuming almost 20 GB.
What I don't understand is why the backups are still taking so long even though I have already excluded the whmcs-php-temp folder.

cc: @Peter-debik @ChristophRo
 
I see two possibilities:

a) your exclusion of this directory does not work, i.e. it is not configured properly (having played with excluded directories for the Plesk backup myself in the past, I can tell that this is a big PITA to get right... and the documentation does not really help)

b) while the backup may exclude this directory when saving the files and folder structure, it may still enumerate the whole directory tree beforehand for whatever reason (statistics or something like that)

My take is that you should clean up this directory anyway.
I guess there are millions of old PHP session files in it that do not get cleaned up automatically (so maybe a cron job is in order, to prevent such a thing from happening again in the future).
 
a) your exclusion of this directory does not work, i.e. it is not configured properly (having played with excluded directories for the Plesk backup myself in the past, I can tell that this is a big PITA to get right... and the documentation does not really help)
What exactly I am excluding in the 'Exclude specific files from the backup' field is:
Code:
/var/www/vhosts/[REDACTED]/whmcs-php-temp

b) while the backup may exclude this directory when saving the files and folder structure, it may still enumerate the whole directory tree beforehand for whatever reason (statistics or something like that)
Makes sense
Should I change the folder for WHMCS PHP sessions to the default one suggested by Plesk instead of a custom folder and see if that helps? (Maybe the default folder clears itself automatically)
I am curious as to how other users running WHMCS are managing the PHP sessions situation on Plesk

I guess there are millions of old PHP session files in it that do not get cleaned up automatically (so maybe a cron job is in order, to prevent such a thing from happening again in the future).
Could you please suggest a cron job command as well as a recommended interval at which to run it?
Also, in case someone is on the website and PHP is using their session data, won't the cron job deleting those files disrupt the user's experience?
 
You could simply delete session files older than n days with /usr/bin/find <path to your directory>/* -mtime +n -exec rm {} \; where n is the number of days to keep.
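For example, with the custom session directory from this thread and n = 1 (the vhost path is a placeholder; passing the directory itself and adding -type f avoids shell glob limits when there are very many files):
Code:
# Delete regular files older than 1 day inside the custom WHMCS temp folder (placeholder path)
/usr/bin/find /var/www/vhosts/example.com/whmcs-php-temp -type f -mtime +1 -exec rm {} \;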
 
You could simply delete session files older than n days with /usr/bin/find <path to your directory>/* -mtime +n -exec rm {} \; where n is the number of days to keep.
Will certainly try this.
Wouldn't storing even 1 full day of PHP session files also be quite a lot? I was reading about resolutions for this issue on Stack Overflow, and a few people were suggesting 24 minutes; I'm not really sure if that is correct, though.

Also, shouldn't Plesk be automatically cleaning the PHP session files at certain intervals set by the Plesk developers in the sessions clean-up script?
I had already set the session.save_path in the PHP settings for that website and thought it was getting cleaned automatically.
Reference: https://support.plesk.com/hc/en-us/...-file-folder-in-Plesk-PHP-is-not-auto-cleaned
 
You can go into crontab as root, like

# crontab -e

and hit i for insert, then paste this:

22 2 * * * find /var/lib/php/session -type f -mtime +3 -delete

this will delete ALL sessions older than 3 days, automatically at 2:22 AM. To save and quit, hit ESC and then type :wq
 
You can go into crontab as root, like

# crontab -e

and hit i for insert, then paste this:

22 2 * * * find /var/lib/php/session -type f -mtime +3 -delete

this will delete ALL sessions older than 3 days, automatically at 2:22 AM. To save and quit, hit ESC and then type :wq
Thank you for that

Wouldn't 3 days or even 1 day be quite a long time for PHP sessions to be stored?
I had cleared the whmcs-php-temp folder about an hour ago and am already seeing over 500 files...

Is there no way to create something like the built-in plesk-php-cleanuper script, which frequently checks whether the files in a folder are still being used by PHP and removes them if not?
 
Wouldn't storing even 1 full day of PHP session files also be quite a lot? I was reading about resolutions for this issue on Stack Overflow, and a few people were suggesting 24 minutes; I'm not really sure if that is correct, though.
A shorter time will do, but it does not hurt to choose 1 or 2 days either.

Also, shouldn't Plesk be automatically cleaning the PHP session files at certain intervals set by the Plesk developers in the sessions clean-up script?
Plesk does that (or PHP does that), but are you sure that these are true PHP session files or are they rather "session" files generated by an application for its own purpose? Many applications do that and don't do the housekeeping afterwards.

I had already set the session.save_path in the PHP settings for that website and thought it was getting cleaned automatically.
Reference: https://support.plesk.com/hc/en-us/...-file-folder-in-Plesk-PHP-is-not-auto-cleaned
That's probably because these are not "real" PHP session files?
 
A shorter time will do, but it does not hurt to choose 1 or 2 days either.
My primary concern is that after I cleared all the files in the folder, within the span of an hour there were already about 500-800 new files. Over the day this might grow more and more, causing the backups to slow down again.

Plesk does that (or PHP does that), but are you sure that these are true PHP session files or are they rather "session" files generated by an application for its own purpose? Many applications do that and don't do the housekeeping afterwards.
I'm not certain whether they are true PHP session files or application "session" files, but here is an example of a file name inside that folder: "sess_r1ar7lv9ikiscm0pap2mshi62m". The application in question is WHMCS, but I was not able to find any information on whether the files are generated by WHMCS itself or by PHP while serving the WHMCS website.

That's probably because these are not "real" PHP session files?
That could be possible.

I don't really have a preference for using a custom PHP temporary files folder, but I'm not able to use the default session folder, /var/lib/php/session, since WHMCS says: "The PHP session save path /var/lib/php/session is not writable. Please investigate the session.save_path PHP setting or contact your system administrator."
I did read the Plesk Knowledge Base article for this same error (Domain on Plesk with WHMCS shows error: The PHP session save path /var/lib/php/session is not writable - Support Cases from Plesk Knowledge Base), but it suggests creating a custom folder, which is what I have currently done, and when I do that I run into this backup issue because those files are never deleted... I don't really know what to do anymore.
 
Cleaning up PHP sessions is a task that PHP can't do properly itself, so in most cases your OS automatically creates a cron job for that when you install PHP on your server.
For example, on Debian/Ubuntu it's /etc/cron.d/php that cleans up old sessions every 30 minutes (it automatically checks for non-active session files and deletes them if they are older than the session lifetime configured in php.ini).
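For illustration, the shipped cron entry looks roughly like this (exact paths and schedule vary by distribution and PHP version, so treat it as a sketch rather than the file on any particular system):
Code:
# /etc/cron.d/php (illustrative); runs the bundled sessionclean helper twice per hour
09,39 * * * * root [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi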

But of course, this only works for the default session directories, so if you store them elsewhere, you need to take care of that yourself.
We often need this too, and in our case we simply delete all sess_* files from the PHP session directories when they are older than 1 day.
So far we have never had any problems with that, and the worst that could happen, I imagine, is that a user gets automatically logged out of some site after 24 hours of idling...
 
Thank you for that

Wouldn't 3 days or even 1 day be quite a long time for PHP sessions to be stored?
I had cleared the whmcs-php-temp folder about an hour ago and am already seeing over 500 files...

Is there no way to create something like the built-in plesk-php-cleanuper script, which frequently checks whether the files in a folder are still being used by PHP and removes them if not?
Sure, you can make it shorter; maybe +1 day is better for you? Just change the command accordingly.
 
Cleaning up PHP sessions is a task that PHP can't do properly itself, so in most cases your OS automatically creates a cron job for that when you install PHP on your server.
For example, on Debian/Ubuntu it's /etc/cron.d/php that cleans up old sessions every 30 minutes (it automatically checks for non-active session files and deletes them if they are older than the session lifetime configured in php.ini).

But of course, this only works for the default session directories, so if you store them elsewhere, you need to take care of that yourself.
We often need this too, and in our case we simply delete all sess_* files from the PHP session directories when they are older than 1 day.
So far we have never had any problems with that, and the worst that could happen, I imagine, is that a user gets automatically logged out of some site after 24 hours of idling...
Makes sense.

How do you deal with websites that store session files in a directory other than the default one? If you have a scheduled backup set up, wouldn't it also take quite a long time, similar to what I'm experiencing?

Also, I found that Plesk has its own script to do this; what if I modify the script as below and run it via cron every 10 minutes?
Code:
#!/bin/sh
### Copyright 1999-2021. Plesk International GmbH. All rights reserved.
#  This purges session files older than X, where X is defined in seconds
#  as the largest value of session.gc_maxlifetime from all your php.ini
#  files, or 24 minutes if not defined.  See ${maxlifetime}
# Look for and purge old sessions every hour
pgrep -f ".*$0$" | grep -qv $$ && exit 0
renice 19 -p $$ >/dev/null 2>&1
[ -x /usr/lib64/plesk-9.0/maxlifetime ] && [ -d /var/www/vhosts/[REDACTED]/whmcs-php-temp ] && /usr/lib64/plesk-9.0/php_session_cleaner /var/lib/php/session $(/usr/lib64/plesk-9.0/maxlifetime)
Could this possibly work?
Note: I have modified the directory in the script; everything else is the same.
 
Sure, you can make it shorter; maybe +1 day is better for you? Just change the command accordingly.
The main issue is that I wouldn't have had a problem if the files were stored in the default PHP session folder; 1 day would be fine there. But since they are stored inside a website directory, all content of the website is backed up, including this folder, which makes the backup take a lot of time. Even if I set it to 1 day, I'm presuming there would be about 15,000 files and about 1 GB of data just in this folder, so it would indirectly affect how long the whole server backup takes to complete.

What do you think about this code?
Also, I found that Plesk has its own script to do this; what if I modify the script as below and run it via cron every 10 minutes?
Code:
#!/bin/sh
### Copyright 1999-2021. Plesk International GmbH. All rights reserved.
#  This purges session files older than X, where X is defined in seconds
#  as the largest value of session.gc_maxlifetime from all your php.ini
#  files, or 24 minutes if not defined.  See ${maxlifetime}
# Look for and purge old sessions every hour
pgrep -f ".*$0$" | grep -qv $$ && exit 0
renice 19 -p $$ >/dev/null 2>&1
[ -x /usr/lib64/plesk-9.0/maxlifetime ] && [ -d /var/www/vhosts/[REDACTED]/whmcs-php-temp ] && /usr/lib64/plesk-9.0/php_session_cleaner /var/lib/php/session $(/usr/lib64/plesk-9.0/maxlifetime)
Could this possibly work?
Note: I have modified the directory in the script; everything else is the same.
 
You need to adjust this a bit more; this should work:
Code:
[ -x /usr/lib64/plesk-9.0/maxlifetime ] && [ -d /var/www/vhosts/[REDACTED]/whmcs-php-temp ] && /usr/lib64/plesk-9.0/php_session_cleaner /var/www/vhosts/[REDACTED]/whmcs-php-temp $(/usr/lib64/plesk-9.0/maxlifetime)
There is no need for, and no real benefit in, letting it run more than once every hour.

If you have lots of sites on your server that store session files in custom directories, you can also use a cronjob like this:
Code:
/usr/bin/find -O3 /var/www/vhosts -type f -name 'sess_*' -mmin +1440 -delete &> /dev/null
We use that on several systems but only let it run once a day, because it may take a couple of minutes to crawl through the whole vhosts directory tree.
Yeah, yeah, I know, this command is not 100% safe, but in reality, who or what else would create and use files with names that start with "sess_"?
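For instance, scheduled once a day from root's crontab it could look like this (the time of day is arbitrary; -mmin +1440 is in minutes, so adjust it to change the retention window):
Code:
# Run nightly at 03:15; deletes sess_* files under /var/www/vhosts not modified for more than 24 hours
15 3 * * * /usr/bin/find -O3 /var/www/vhosts -type f -name 'sess_*' -mmin +1440 -delete > /dev/null 2>&1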
 