
Issue: High memory usage - memory maxing out | Need help ASAP

Vik P

New Pleskian
Hello,
I am new to the forum, so forgive me for any mistakes in posting this thread.

We have a dedicated server with the following configuration:
Processor: 1x 3.2 GHz E3-1230
Operating System: CentOS 6.2 (64-bit)
RAM: 8 GB

We host around 10 WordPress websites on this server with moderate traffic.

RAM usage is usually around 2 GB on average.
But since yesterday, probably after the update to version 17.8.11 Update #78, memory maxes out within minutes of a restart or reboot.

We are running PHP as FastCGI (mod_fcgid) for the different websites, and these php-cgi processes seem to take around 200-500 MB of memory each. MySQL is also using 3.5 GB of memory.
A snapshot of the running processes is below:

top - 09:26:32 up 4:59, 4 users, load average: 4.25, 4.55, 4.36
Tasks: 302 total, 2 running, 300 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.8%us, 1.1%sy, 0.0%ni, 72.8%id, 23.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8038216k total, 7339036k used, 699180k free, 2211808k buffers
Swap: 4097016k total, 111748k used, 3985268k free, 2056584k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
888 lss_dev 20 0 265m 83m 8656 S 16.3 1.1 0:02.21 php-cgi
889 sixsigma 20 0 467m 102m 74m S 10.0 1.3 0:02.41 php-cgi
892 sixsigma 20 0 264m 81m 8568 R 1.3 1.0 0:01.46 php-cgi
890 sixsigma 20 0 491m 130m 91m S 1.0 1.7 0:04.52 php-cgi
5696 mysql 18 -2 3622m 524m 6480 S 1.0 6.7 8:52.16 mysqld
877 root 20 0 15160 1420 944 R 0.7 0.0 0:00.22 top
85 root 20 0 0 0 0 S 0.3 0.0 0:49.91 kblockd/3
17388 root 30 10 145m 50m 3840 D 0.3 0.6 1:13.92 statistics_coll
21174 nginx 20 0 65136 18m 2032 S 0.3 0.2 0:43.04 nginx
27684 apache 20 0 355m 27m 3048 S 0.3 0.4 0:01.53 httpd
28335 apache 20 0 355m 27m 2968 S 0.3 0.4 0:01.39 httpd
31769 sixsigma 20 0 15160 1440 948 S 0.3 0.0 0:03.27 top
1 root 20 0 19352 1396 1172 S 0.0 0.0 0:00.84 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root RT 0 0 0 0 S 0.0 0.0 0:00.04 migration/0
4 root 20 0 0 0 0 S 0.0 0.0 0:00.33 ksoftirqd/0
5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 stopper/0
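
For anyone who wants to see where the resident memory actually goes rather than eyeballing top, a generic ps/awk sketch like the following should sum RSS per command name (nothing Plesk-specific):

# Total resident memory (RSS) per command name, largest consumers first
ps -eo rss,comm --no-headers | awk '{sum[$2]+=$1} END {for (c in sum) printf "%8.1f MB  %s\n", sum[c]/1024, c}' | sort -rn

# How much memory is really used vs. just held in buffers/cache
free -m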

Once memory starts maxing out, the websites return 503 errors, almost to the point of going down entirely.

I am not a system admin but a developer. I have been working on this for the last 14 hours with no luck.
Help would be appreciated.

I look forward to hearing back from you guys.

Thanks in advance.
 
Hard to say; a lot of things can cause this. Looking at the top output alone, it is not necessarily alarming: Linux deliberately uses almost all memory (for buffers and cache) to improve overall performance.
Which PHP versions do you use? Have you considered switching from FastCGI to PHP-FPM? And what about free disk space?
See, for example, the following knowledge base entries:
High CPU and memory usage by Apache or PHP processes in Plesk
How to check which domain PHP process consumes CPU resources?
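
Since each subscription's FastCGI PHP runs under its own system user, you can also get a rough per-domain picture by summing php-cgi memory per user. A quick sketch (generic ps/awk; the Plesk CLI line assumes the php_handler utility is available on your version):

# Resident memory of php-cgi processes summed per system user (one user per subscription)
ps -eo user,rss,comm --no-headers | awk '$3 == "php-cgi" {sum[$1]+=$2} END {for (u in sum) printf "%8.1f MB  %s\n", sum[u]/1024, u}' | sort -rn

# List the PHP handlers registered in Plesk, to see which FPM handler IDs exist
plesk bin php_handler --list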
 
I'm having a similar problem. On a daily basis I get reports of swap usage reaching its maximum, and then within a fairly short time it goes back to normal. Like @Vik P, I use the server to host a similar number of WordPress sites.

My actual RAM is 2 GB, and I had a swap file (set up by Plesk) of about the same size. I decided to increase the swap size, and since this is an EC2 instance I added a 4 GB EBS volume and turned it into swap. I assumed that would resolve the issue, as usage previously never exceeded about 85%. But the same thing is happening: it seems like no matter how large the swap is, the server hits its limit.
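
For reference, turning the extra EBS volume into swap was roughly the following; /dev/xvdf is just the device name it got on my instance, so adjust for yours:

# Format the new volume as swap and enable it immediately
mkswap /dev/xvdf
swapon /dev/xvdf

# Make it persistent across reboots
echo '/dev/xvdf none swap sw 0 0' >> /etc/fstab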

I ran this same configuration on Onyx before and never had an issue, and I've been running Obsidian for quite some time now. I'm wondering if it's actually an Obsidian Advanced Monitoring issue rather than a server issue (i.e., incorrect reporting).
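
One way to rule out a reporting problem would be to compare what Advanced Monitoring shows with what the kernel itself reports while the alert is firing, for example:

# Swap in/out activity sampled every 5 seconds for a minute (watch the si/so columns)
vmstat 5 12

# Swap devices and current usage as the kernel sees them
swapon -s
free -m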

Ideas?
 