There is no set rule. I'd reference my
previous post on another thread:
Static files don't use PHP processes, so they're irrelevant here - NGINX/Apache httpd handles that load, and it's usually minimal.
To understand how many PHP processes you need, you have to understand why you have them: each PHP "worker" is a single thread executing whatever PHP code a client requests. That worker handles only that one client's request at a time and does nothing else. While a PHP worker is running, it uses CPU time for actual execution - but it usually also spends a significant amount of time waiting on IO, the DB, HTTP calls, etc. Therefore, a worker taking 500ms != using 500ms of CPU, which means one CPU core can drive more than one worker - i.e., more than the naive 2 requests/second you'd get from a single worker at 500ms per request.
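To make that "500ms of wall time != 500ms of CPU" point concrete, here's a rough back-of-envelope sketch - the numbers are made up for illustration, not measurements:

```python
# Illustrative request profile: 500 ms wall-clock, of which only
# 100 ms is actual CPU execution; the rest is IO/DB/HTTP waiting.
wall_time_ms = 500
cpu_time_ms = 100

# One core has 1000 ms of CPU per second, so it can execute the
# CPU portion of this many requests per second:
reqs_per_core_per_sec = 1000 / cpu_time_ms      # 10.0

# While a request runs, the CPU is busy only 100/500 of the time,
# so one core can keep this many workers usefully occupied at once:
workers_per_core = wall_time_ms / cpu_time_ms   # 5.0

print(reqs_per_core_per_sec, workers_per_core)
```

So with that (hypothetical) profile, a single core sustains about 10 req/s across roughly 5 concurrent workers - far more than the 2 req/s a single worker could deliver on its own.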
Finding the optimal worker configuration involves figuring out what percentage of a request's time the CPU is actually executing versus waiting/idling. From there, you size the pool around that ratio - roughly cores divided by the CPU-busy fraction - give or take a few extra workers as a buffer.
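If you're on PHP-FPM, those knobs live in the pool config. A sketch with placeholder values - these are starting points to tune from measurement, not recommendations:

```ini
; Illustrative php-fpm pool settings (e.g. /etc/php-fpm.d/www.conf)
pm = dynamic
pm.max_children = 20     ; hard cap on workers - derive from the CPU-vs-wait ratio above
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
pm.max_requests = 500    ; recycle workers periodically to contain memory leaks
```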
Take this script for example:
<?php sleep(30); ?>
If someone hits a page with that code, the worker idles for 30 seconds. During that time the worker can't process new requests, but the CPU can. So with one worker and one visitor, the next visitor to arrive gets queued until the worker frees up - in which case you'd want more workers.
On the other hand, if you've got something CPU-intensive (math, string manipulation) where the CPU accounts for nearly 100% of the execution time, you want roughly workers = number of CPU cores, give or take.
Say you have a PHP script that takes 1 second to run (using only the CPU) and only one core. User A hits the page, and Worker A uses 85% of the CPU to run the request (leaving some for other system processes). Now User B comes along while Worker A is still running and requests the page again, spawning Worker B. The amount of CPU available hasn't changed - you now have 85% of the CPU shared between two workers, which of course slows down execution. That would be fine, except you also have to factor in context switches (they do make a difference, despite taking nano-to-microseconds), which means you really have less than 85% to share between two processes, lowering performance further.
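The same back-of-envelope arithmetic for that scenario (illustrative; the 85% split is assumed and scheduler/context-switch overhead is ignored):

```python
# One core, a purely CPU-bound script needing 1 second of CPU time,
# with 85% of the core available to PHP (assumed overhead split).
cpu_needed_s = 1.0
available = 0.85

# One worker gets the full 85%:
one_worker_latency = cpu_needed_s / available        # ~1.18 s

# Two concurrent workers split the same 85%, so each runs
# at ~42.5% of a core and both requests finish slower:
two_worker_latency = cpu_needed_s / (available / 2)  # ~2.35 s each

print(round(one_worker_latency, 2), round(two_worker_latency, 2))
```

Note both users end up waiting longer than if the second request had simply queued behind the first - which is why oversubscribing CPU-bound workloads buys you nothing.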
tl;dr: benchmark, measure, adjust
I don't have an answer for you beyond benchmarking and tuning as needed to get a decent sense. It also depends largely on what "kind" of sites you're running (mostly IO-bound vs. mostly CPU-bound).