
Issue: recv() failed (104: Connection reset by peer) while reading response header from upstream

kaboemm

New Pleskian
Hello,

My business partner and I are experiencing the same issue, which started at around the same time for both of us.

We run large PHP crawl scripts that sometimes take around three hours to complete. We had never experienced issues with this, as we use all the correct timeout and buffer settings (nginx and PHP). When we run our scripts now, they randomly return a 502 with the following log entry:
4214#0: *17080 recv() failed (104: Connection reset by peer) while reading response header from upstream

I have Googled myself tired and have tried every setting I could find on the internet, but none of it works. It also happens randomly: sometimes after 2 hours, sometimes after 15 minutes. The funny part is that sometimes it does work, and most of the time that is when it runs scheduled at night via a cron job.

Info:
I run Plesk on my own VPS with CentOS Linux 7.8.2003 (Core) and Plesk Obsidian 18.0.29 Update #3, with 4 vCPUs, 8 GB RAM and around 150 GB of SSD storage. I'm running PHP 7.4.10 with nginx only (no Apache proxy).

PHP:
FPM application served by nginx

memory_limit 2048M
max_execution_time 21600
max_input_time 21600
post_max_size 128M
upload_max_filesize 64M
max_input_vars 10000
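
For what it's worth, here is a small sketch of how I double-check the values that PHP-FPM actually applies to the pool running the crawler (plain PHP using ini_get(); the list of keys simply mirrors the settings above):

<?php
// Print the values this PHP-FPM pool is actually using,
// to confirm the settings listed above are really applied.
$keys = ['memory_limit', 'max_execution_time', 'max_input_time',
         'post_max_size', 'upload_max_filesize', 'max_input_vars'];
foreach ($keys as $key) {
    echo $key . ' = ' . ini_get($key) . PHP_EOL;
}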

Nginx directives:
client_header_timeout 21600s;
client_body_timeout 21600s;
keepalive_timeout 21600s;
keepalive_requests 10000;
proxy_connect_timeout 21600s;
proxy_send_timeout 21600s;
send_timeout 21600s;
fastcgi_connect_timeout 21600s;



I have also tried setting worker_processes to 4 in nginx.conf, and I have tried adding
fastcgi_buffers 32 32k;
fastcgi_buffer_size 64k;
to /etc/sw-cp-server/config, without any luck.
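
For reference, this is roughly how such FastCGI directives would look if added per domain through Plesk's "Additional nginx directives" field instead (just a sketch combining the values above; fastcgi_read_timeout is not among the settings I listed, I only show it because it belongs to the same fastcgi_* family of standard nginx directives):

# Hypothetical per-domain "Additional nginx directives" snippet
fastcgi_buffers 32 32k;
fastcgi_buffer_size 64k;
fastcgi_connect_timeout 21600s;
fastcgi_read_timeout 21600s;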

I have also tried running it with Apache as a proxy. No luck.

If anyone could help me solve this problem, I would be really happy, because it is hurting my business. This problem also isn't easy to reproduce or test, as it can sometimes take hours before it 502s.

If you need any more info, please ask.

Thanks,

Regards, Kaboemm
 
I don't think there is any reliable way to run a script that takes an hour or even longer to complete through a web server front end. My suggestion is to change the script so that it runs in the background and reports its state to somewhere like a file or a database entry (e.g. "40% completed"), and also checks every n seconds whether a flag exists that tells it to stop prematurely. Then you can simply create an auto-reloading web page that loads this data from that source and maybe has a "stop script" button that sets the flag telling the script to stop. A rough sketch of the pattern follows below.
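
Something like this, as a minimal sketch in plain PHP (the file paths, the item count and the crawl loop itself are made up for illustration, and it checks the stop flag after every item rather than every n seconds; run it from the CLI, e.g. via cron or nohup, not through nginx):

<?php
// worker.php - hypothetical long-running crawl job started from cron/CLI,
// so nginx and PHP-FPM request timeouts no longer apply to it.

$progressFile = '/var/www/vhosts/example.com/crawler/progress.txt'; // made-up path
$stopFile     = '/var/www/vhosts/example.com/crawler/stop.flag';    // made-up path

$totalItems = 1000; // placeholder for the real amount of work

for ($i = 1; $i <= $totalItems; $i++) {
    // ... do one unit of crawl work here ...

    // Report progress so a status page can display it.
    file_put_contents($progressFile, round($i / $totalItems * 100) . '% completed');

    // Stop early if the front end has created the stop flag.
    if (file_exists($stopFile)) {
        file_put_contents($progressFile, 'stopped at ' . round($i / $totalItems * 100) . '%');
        break;
    }
}

The auto-reloading status page then only reads progress.txt on each refresh, and the "stop script" button only creates stop.flag, so no web request has to stay open for hours anymore.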
 