
Webstats run server out of memory due to 'sort' command on access_log.webstat

HostaHost

Regular Pleskian
We're finding that high-traffic servers fail to rotate their log files if they have webstats enabled. During the daily log rotation and webstats run, Plesk 11.5 executes this command against each domain's access_log.webstat file:

Code:
sh -c /bin/sort -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n -k 4.14,4.15n -k 4.17,4.18n -k 4.20,4.21n /var/www/vhosts/system/domain/logs/access_log.webstat | /usr/bin/webalizer -c /var/www/vhosts/system/domain/conf/webalizer.conf -n domain -D /usr/local/psa/var/webalizer/webalizer.cache -N 15 -F clf -

If the file is huge, i.e. larger than the system's memory, sort proceeds to consume all of the memory on the server and ultimately fails when the Linux OOM killer terminates it. Plesk's 'statistics' and 'web_statistic_executor' commands then never complete, log rotation never occurs, and the file just keeps growing each day.
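For reference, GNU coreutils sort has a -S/--buffer-size option that caps its in-memory buffer (spilling the rest to temporary files) and a -T option that controls where those temp files go. If Plesk's invocation could be changed, something like this should keep the nightly run bounded; the 1G cap and /var/tmp are just example values:

Code:
/bin/sort -S 1G -T /var/tmp -t ' ' -k 4.9,4.12n -k 4.5,4.7M -k 4.2,4.3n -k 4.14,4.15n -k 4.17,4.18n -k 4.20,4.21n /var/www/vhosts/system/domain/logs/access_log.webstat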

We have sites that log 20+ gigs of log data per day and can't rotate their logs or complete their webstats because of this issue.

Can whatever Plesk program is calling sort be edited? Then we could add a -S flag to tell sort how much memory to use instead of letting it crash the server each night.
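One blunt, unsupported workaround would be to move the real binary aside and put a capping wrapper in its place, since Plesk calls /bin/sort by absolute path. This is an untested sketch: it affects every caller of sort on the system, and a coreutils package update would silently overwrite it.

Code:
mv /bin/sort /bin/sort.real
cat > /bin/sort <<'EOF'
#!/bin/sh
# cap sort's memory so the nightly webstats run can't trigger the OOM killer
exec /bin/sort.real -S 1G -T /var/tmp "$@"
EOF
chmod 755 /bin/sort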
 
Watching every site on every server to see which ones get enough traffic to produce a large log file is not a scalable solution. I can't believe Parallels thought it a good idea to pipe a web log, typically a massive file, into a program that has to hold it in memory to do its work.
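In the meantime, a quick way to spot the at-risk domains before the nightly run; the 1G threshold and the standard Plesk vhosts path are assumptions, adjust to taste:

Code:
# list any webstats logs over 1 GB
find /var/www/vhosts/system/*/logs -name access_log.webstat -size +1G -exec ls -lh {} \;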
 
I really love servers blowing up every time a site gets higher traffic than normal and the daily stats execute. Is it Parallels' official stance that a sort process consuming 11 gigs of memory is your users' problem, to be dealt with by running stats hourly via hand-rolled cron jobs (see the sketch after the top output below), since the GUI provides no way to accomplish this?

Code:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM  TIME+  COMMAND
31951 root  30  10 11.0g  10g  684 R 100.0 88.3  25:08.12 sort
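For anyone else stuck here, the hourly cron workaround I'm referring to looks roughly like this. I believe Plesk's statistics utility accepts --calculate-one with --domain-name, but verify that on your version; example.com is obviously a placeholder:

Code:
# /etc/cron.d/plesk-stats-hourly (sketch)
0 * * * * root /usr/local/psa/admin/sbin/statistics --calculate-one --domain-name=example.com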
 