
Issue: Plesk Eating Vultr Server CPU after SSL Installation

Rule of thumb is two workers per CPU core ... if you have enough RAM, that is.
As those workers are spawned on demand, you won't notice problems until you get enough traffic that the master process starts new workers that immediately abort because the kernel says it's out of memory (you have no safety margin because there's no swap).
Oh, we did set a swap of 4 GB. Does that help, and to what extent? Thanks!
 
Rule of thumb is two workers per CPU core ... if you have enough RAM, that is. Adjust one or the other until max_children x memory_limit < total RAM.
As those workers are spawned on demand, you won't notice problems until you get enough traffic that the master process starts new workers that immediately abort because the kernel says it's out of memory (you have no safety margin because there's no swap).
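As a rough sketch of that rule: a 4 GB server whose PHP workers peak at around 80 MB each could use pool values along these lines (the numbers here are hypothetical, not measurements from this server):

  pm = ondemand
  pm.max_children = 10           ; 10 x ~80 MB = ~800 MB, leaving headroom for MySQL, NGINX, etc.
  pm.process_idle_timeout = 10s  ; reap idle on-demand workers after this
  pm.max_requests = 500          ; recycle workers periodically to limit memory creep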
Here's the updated system info.
 

Attachments

  • Screen-Shot-2021-07-08-at-10.46.24-AM.jpg
It helps insofar as the site will get slow but not outright fail, and you can take countermeasures if Buffer and Cached drop while swap is almost full. (Swap being full is fine as long as you still have a lot of buffer/cache; that's working as intended.)
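A quick way to keep an eye on that is the free command; a sketch of what healthy vs. worrying output looks like (the numbers below are made up):

  free -m
  #               total   used   free   shared  buff/cache  available
  # Mem:           3936   2680    140      120        1116         930
  # Swap:          4095   1800   2295
  #
  # Fine: swap partly used while buff/cache and available stay large.
  # Worrying: buff/cache shrinking towards zero while swap keeps filling.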
 
If you've got 1-2 sites, CPU x 2 might be fine, but keep in mind it's a per-site configuration, so 4 on each of two sites actually gives you 8 in total. Not to mention a buffer should be reserved for other processes.
 
10 is likely too high.

The issue with a large memory limit + post size stems from things like $_POST being held in memory, so if someone submitted 500 MB of data, it would be allowed with a 700 MB post_max_size and memory_limit, and now you've got a PHP worker using 500 MB of memory. Repeat that, and your server will hit the dreaded OOM killer.
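A minimal sketch of keeping those limits in proportion (hypothetical php.ini / per-domain PHP settings, not a tuned recommendation for this server):

  memory_limit = 256M        ; per-request ceiling; keep max_children x this below total RAM
  post_max_size = 48M        ; cap what one request may submit, well below memory_limit
  upload_max_filesize = 32M  ; must be <= post_max_size to be effective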
 
If you've got 1-2 sites, CPU x 2 might be fine, but keep in mind it's a per-site configuration, so 4 on each of two sites actually gives you 8 in total. Not to mention a buffer should be reserved for other processes.
So 1-2 workers per site config, then. Considering I add page rules in Cloudflare to cache all static files correctly, how many sites could I host with my current setup? I'm really amazed that so many people say you can host up to 30 sites on one $5 VPS, but I have to upgrade to an $18 VPS to run 2 sites LOL!
 
There is no set rule. I'd reference my previous post on another thread:

it will certainly depend on what you're hosting, how those sites are set up, and whether you've got other bottlenecks in your stack, like IO/DB. If everything else is completely optimized, or it's pure PHP, then you want children = threads for maximum performance. Or maybe children = threads - 1 if you have other services running. That said, most applications, say WP, aren't like that. They make many DB calls, read/write files, etc. In these cases, you'll want more children than threads, as threads can be "idle" while waiting on the DB, thereby not using CPU time. In this case, the worker would not be able to process new requests, so you'd want other workers to be able to pick up the slack.

Static files don't use PHP processes, so that's irrelevant - NGINX/HTTPD handle that load and it's mostly minimal.

To understand how many PHP processes you need, you'd have to understand why you have them; each PHP "worker" is a single thread executing whatever PHP code that a client requests. That worker executes only that 1 client's request at a time and will not do anything else. While a PHP thread/worker is running, it'll use CPU time for actual execution - however, it'll also spend a significant amount of time usually waiting for IO, DB, HTTP calls, etc. Therefore, a worker taking 500ms != using 500ms of CPU, which means 1 CPU can process more than 2 workers / reqs / second.

Finding the optimal worker configuration involves figuring out what % of time the CPU is actually running and what % of time it's waiting/idling. From there, you usually want the lowest figure give or take a few as a buffer.
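One way to get at that split is PHP-FPM's built-in status page (the endpoint name below is arbitrary); sampled under load, it reports counters such as "active processes", "idle processes" and "max children reached":

  ; in the pool configuration
  pm.status_path = /fpm-status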

Take this script for example:

<?php sleep(30); ?>

If someone hits a page with that code, the worker would idle for 30 seconds. During that time, the "worker" can't process new requests, while the CPU can. So if you have one worker and one visitor, the next visitor that comes will need to be queued, in which case you'd want more workers.

On the other hand, if you've got something that is CPU intensive (math, string manipulation) and CPU is nearly 100% of the execution time, you want workers = CPU cores, give or take.

Say you have a PHP script that takes 1 second to run (and uses only the CPU) and only one core. User A hits the page, and Worker A uses 85% of the CPU to run the request (saving some for other system processes). Now User B comes along while Worker A is running and requests the page again, spawning Worker B. The amount of CPU available hasn't changed - you now have 85% of the CPU that needs to be shared between two workers, which of course slows down execution. This would be fine, except you'd need to factor in context switches (they do make a difference, despite taking nano- to microseconds), which really means you have less than 85% to share between two processes, thereby lowering performance.

tl;dr: benchmark, measure, adjust
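As a small sketch of that loop: hit a representative page with ApacheBench while watching CPU and the FPM status counters, then adjust the worker count and repeat (the URL and numbers are placeholders):

  ab -n 500 -c 10 https://example.com/some-typical-page/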

how many sites could I host with my current setup

I don't have an answer for you besides to benchmark and tune as needed to get a decent sense. It's also largely dependent on what "kind" of sites they are.
 
Ok, thanks a lot!

I will dive deeper into the settings and put it to the test.

All the best!
 