
Issue: Getting 504 when pm.max_children is reached, I have tried different values

kalombo

Basic Pleskian
Hi,

We are getting a 504 whenever pm.max_children is reached; for the website to become available again, we have to restart the PHP service.
The reason for this error is the Google crawler: when the pm.max_children limit set in PHP is reached, the site returns 504 or becomes unavailable. We have tried different values but still get the same issue.

My question is: how can we fix this? Or maybe auto-restart PHP whenever the site returns 504, or limit the Google crawler's CPU consumption?

Attached are our PHP settings and Google crawler stats.
 

Attachments

  • 8888778877.JPG (31.9 KB)
  • 894545.JPG (64.3 KB)
Change "dynamic" to "on demand" and reduce the max script runtime (not shown in your screenshot) to a value like 10 or 20 seconds.
 
Thanks for your reply. I have changed it to 10s, as you can see in the attached screenshot.

But now I am having a new issue: 1600390#0: *8154 upstream timed out (110: Connection timed out) while reading response header from upstream

See the attachment. Can you please advise?
 

Attachments

  • 03241444.JPG (29.5 KB)
  • 011102.JPG (55.5 KB)
  • 010.JPG (56.2 KB)
Great, you're almost at your goal. The reason for the issues here is that you have a script that runs way too long. Normally, website scripts run in a fraction of a second. When you are doing uploads it might take longer, but for regular operations anything beyond a few seconds is too long. This is the cause of your previous symptoms: scripts start, start, start, but they never stop, or they stop so late that all the slots you allowed for PHP are filled up with them and no more scripts can be started. Hence the website becomes unresponsive and results in a 504 gateway timeout, as Apache cannot deliver new responses.

The root cause of such behavior is very often an infinite redirect loop or include. Another reason can be that the website is waiting on an external resource (e.g. it tries to load something with file_get_contents from an external source), but the source is not responding. What kind of website is this? If it's WordPress or Joomla, you could try deactivating the plugins and checking the website again, then activate them one by one to find the culprit.
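If the site does wait on external resources, two directives worth knowing (a sketch; the values are assumptions) are default_socket_timeout, which caps how long stream functions such as file_get_contents() wait for a remote host, and the FPM pool option request_terminate_timeout, which kills a worker after a wall-clock limit regardless of where it is stuck:

    ; php.ini / domain PHP settings (sketch, values are assumptions)
    default_socket_timeout = 10      ; file_get_contents() and friends give up after 10s

    ; PHP-FPM pool setting (wall-clock limit, unlike max_execution_time)
    request_terminate_timeout = 30s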
 
Thanks a lot. I have struggled with this for a while. Your advice seems to resolve the issue.


I am using Magento 2
 
Just for clarification, these issues happen only with Googlebot. It sends too many requests. As you know, I can't block Googlebot. Is there maybe a way to limit its requests?
 
Just for clarification, these issues happen only with Googlebot. It sends too many requests. As you know, I can't block Googlebot. Is there maybe a way to limit its requests?
There is no feature in Plesk to limit requests from search crawlers like Google. There used to be a feature in Google Search Console to set the crawl rate limit for your domains, but it was removed earlier this year. Google does have a page detailing a few other options for site owners to reduce the crawl rate: Reduce Googlebot Crawl Rate | Google Search Central | Documentation | Google for Developers

There is also a great blog post on the Plesk blog explaining what you can do to block bad bots (Google, of course, is not a bad bot). Blocking bad bots can reduce your server load.
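If you want to throttle crawler traffic at the web server level, one option (a sketch, not a Plesk feature; the zone name and rate are assumptions, and limit_req_zone has to live in the http context, e.g. in a file under /etc/nginx/conf.d/) is nginx request rate limiting keyed on the bot user agent:

    # /etc/nginx/conf.d/bot_limit.conf (sketch, names and rates are assumptions)
    map $http_user_agent $bot_limit_key {
        default                        "";                    # empty key = no limit for normal visitors
        ~*(googlebot|bingbot|yandex)   $binary_remote_addr;   # limit known crawlers per client IP
    }
    limit_req_zone $bot_limit_key zone=bots:10m rate=2r/s;

    # then, in the domain's additional nginx directives:
    #   limit_req zone=bots burst=10 nodelay;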
 
It seems that the max execution time did not resolve the issue.
I have set pm.max_children to 30, with on demand.

As you can see in the screenshot of the top command, the children limit is reached and the site returns 504 again. Is there something I have missed?

But in the logs I can't see any issue; everything looks like it is working properly.
 
I have changed pm from ondemand to dynamic. It seems to work, but I can only fully confirm after some time, 4-5 days.
 
attachment
It looks like the php-fpm processes are running for an unusually long time. This, together with the "upstream timed out" error you posted earlier, makes me think that something on the website is not performing well, resulting in scripts that run for a very long time.

I have little experience with Magento, so I can't give you any suggestions on where to look. But I would not be comfortable running that site on my servers.

Another thing you can fiddle with to improve load is to only use nginx (disable proxy mode). This will probably require adding some custom nginx directives, as the .htaccess file won't work any more. I am sure you'll find some examples by googling for nginx directives for Magento. Perhaps there are even examples on this forum.
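To illustrate what that means in practice (a hypothetical sketch, not Magento's official configuration): rules that Apache reads from per-directory .htaccess files have to be re-expressed as nginx location blocks, for example in the domain's Additional nginx directives:

    # Apache enforces this via .htaccess ("Require all denied" / "Deny from all");
    # with proxy mode off those files are ignored, so block the paths in nginx instead.
    # Directory names are examples only; adjust them to your site's layout.
    location ~* ^/(app|var|generated)/ {
        deny all;
    }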
 
I have just found these directives and wonder if they are the ones for Magento 2?

 
same information here

 
I have just found these directives and wonder if they are the ones for Magento 2?

The thread is about Magento 2, so I guess the directives are too, but I am not sure. I'd recommend creating a test domain and cloning your site to it to test the directives.
 
I have tried everything I could, but the issue still persists: once the max children number is reached, the site returns 504 until I restart the PHP service to make it available again.

I was wondering whether my server is not able to handle the quantity of Google requests shown in the screenshot I sent you before?

As a temporary solution, is there a command I can set up as a Plesk cron job to restart the PHP service every 1-3 hours? Restarting PHP makes the site available again.
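For the record, such a scheduled restart would look roughly like the line below (a sketch only; the service name depends on the PHP version and handler in use, here assumed to be Plesk's PHP 8.2 FPM, and a scheduled restart only masks the underlying problem):

    # Plesk > Tools & Settings > Scheduled Tasks, or the root crontab:
    # restart the (assumed) Plesk PHP 8.2 FPM service every 2 hours
    0 */2 * * * systemctl restart plesk-php82-fpm.service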
 
You can close this issue, I have just found a solution.

The problem was an incorrect pm.max_children value.

To find the correct pm.max_children, we should proceed as follows:

Run this command: ps -ylC php-fpm --sort=rss

The RSS column shows the non-swapped physical memory usage of the PHP-FPM processes, in kilobytes.

On average, each PHP-FPM process took ~290 MB of RAM on my machine.

An appropriate value for pm.max_children can be calculated as:

pm.max_children = total RAM dedicated to the web server / max child process size (in my case 290 MB)

The server has 269257 MB of RAM, so: take 75% of this RAM and calculate the right pm.max_children from that.

Once you have this number, you can also derive:

pm.max_children = 678
pm.start_servers = 25% of pm.max_children = 170
pm.min_spare_servers = 25% of pm.max_children = 170
pm.max_spare_servers = 75% of pm.max_children = 509


Here is the source of my solution
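As a sanity check of that arithmetic, here is a small shell sketch (assuming GNU ps, free and awk are available, and using the same 75% headroom factor as above) that measures the average php-fpm RSS and prints the resulting pm.max_children suggestion:

    # average RSS of running php-fpm processes, in MB (requires php-fpm to be running)
    avg_mb=$(ps -C php-fpm -o rss= | awk '{sum+=$1; n++} END {if (n) printf "%d", sum/n/1024}')

    # total RAM in MB, and 75% of it as the budget for PHP-FPM
    total_mb=$(free -m | awk '/^Mem:/ {print $2}')
    budget_mb=$((total_mb * 75 / 100))

    echo "avg worker: ${avg_mb} MB, budget: ${budget_mb} MB, suggested pm.max_children: $((budget_mb / avg_mb))"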
 
Gee, that's not a "solution", that's a workaround at best. What are you gonna do once the number of requests against the website increases? ...
 
Don't know. Any suggestions? Maybe increase server capacity.

I am getting 9.64 million scan requests in the last 90 days from Googlebot, plus Yandex, plus Bing. I don't know if my server is not able to deal with such an amount of requests.

We have changed the website theme; before the change, everything used to work well without any 504 errors, with the same amount of requests.
 