
Issue: Plesk starts dropping HTTP connections after a certain number of requests

Marchiuz

New Pleskian
Server operating system version: AlmaLinux 8.4 x86_64
Plesk version and microupdate number: Plesk Obsidian 18.0.41.1
I have two domains set up in Plesk, let's say usersite.com and api.usersite.com. usersite.com is powered by Nuxt.js, a front-end framework that runs on Node.js. It makes API calls to api.usersite.com, which is a Laravel application powered by Octane (so it uses Swoole instead of PHP-FPM). Both projects run inside Docker containers.
Now to the problem: when there is slightly higher traffic to usersite.com (200 users per minute), the API site starts to drop connections, immediately resulting in 504 errors. Perhaps someone could point me in the right direction as to why this might be happening? At first I thought it was a PHP-FPM issue, which is why I reworked the project and dockerized it so it is served by the Octane server instead of PHP-FPM. That doesn't seem to have helped, however. Worth mentioning that nginx is used as a reverse proxy, so perhaps it's something to do with nginx limiting the request count?
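A rough sketch of the limits that are usually checked first in this situation; the paths and commands assume stock nginx on AlmaLinux and are not confirmed for this particular server:

# How many simultaneous connections each nginx worker may hold, and how many workers run:
grep -E "worker_(connections|processes)" /etc/nginx/nginx.conf

# Socket summary: a TIME-WAIT pile-up on the proxy hop can exhaust local ports,
# since every proxied request to api.usersite.com originates from the same IP.
ss -s
sysctl net.ipv4.ip_local_port_range net.core.somaxconn
sysctl net.netfilter.nf_conntrack_max 2>/dev/null   # only present when conntrack is loaded

# The global nginx error log usually names the exact limit that was hit:
tail -n 50 /var/log/nginx/error.log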
 
Update:
I think it's nginx's fault. Since API requests are proxied, they all come from the same IP, the server's IP. Once it starts refusing connections, I noticed that I am still able to fire requests and receive proper responses using Postman.
 
I don't think so... nginx out of the box is able to serve hundreds (or maybe thousands) of simultaneous requests; your bottleneck must be somewhere else. What do the nginx logs say? (proxy_access_log, proxy_access_ssl_log, and proxy_error_log)
 
Where may I find these logs?
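On a Plesk server the per-domain proxy logs normally live under /var/www/vhosts/system/<domain>/logs/ (default paths assumed; substitute the real domain name):

ls /var/www/vhosts/system/api.usersite.com/logs/
# access_log  error_log  proxy_access_log  proxy_access_ssl_log  proxy_error_log ...

# Follow the proxy error log while reproducing the failing requests:
tail -f /var/www/vhosts/system/api.usersite.com/logs/proxy_error_log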
 
Since the front-end project runs in Docker, I can see its logs in the Docker view. Here's an image showing some of the errors.
[screenshot attachment: 1664986499236.png]

Content of proxy_error_log:
[screenshot attachment: 1665045933017.png]

It's weird that the latest errors shown in this file are from 09-27, but we're getting the errors every day.
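One possible explanation, only a guess: Plesk rotates the per-domain logs, so the newest entries may already sit in a rotated file, or the 504s may be recorded under the frontend domain rather than the API domain. A quick way to check both:

# Newest files first; rotated copies show up next to proxy_error_log:
ls -lt /var/www/vhosts/system/api.usersite.com/logs/ | head

# Search current and rotated proxy error logs of all domains for upstream timeouts
# (zgrep reads plain and gzipped files alike):
zgrep "upstream timed out" /var/www/vhosts/system/*/logs/proxy_error_log* | tail -n 20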

@mow My thoughts as well. How could I debug this?
 
Update:
I did fix the "same IP address from proxy" issue, so now proxied requests actually carry the client's IP. But that has not solved my issue :/
 
That only fixes the logging. The actual connections still come from the proxy.
 
Oh, then how can I properly fix it? I assume using nginx as a proxy is quite common practice nowadays; how do people deal with this issue?
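The usual pattern, sketched below rather than copied from Plesk's generated config, is to keep proxying but forward the original client address in headers, then tell the backend to trust the proxy so that logging and any per-IP throttling use the forwarded address. The TCP connections themselves will always originate from the proxy host; 127.0.0.1:8000 is an assumed Octane port, adjust to the real one:

location / {
    proxy_pass http://127.0.0.1:8000;              # assumed Octane container port
    proxy_http_version 1.1;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

On the Laravel side the proxy's address also has to be listed in the TrustProxies middleware; otherwise $request->ip(), and any rate limiter keyed on it, will still see the nginx address.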
 