
Question: 504 Gateway Time-out

quattro123

Basic Pleskian
Hey, suddenly and without any changes, something seems to be killing all my websites hosted with Plesk.
Pages are not loading, and if they do, they load extremely slowly.

Most of the time, after waiting a long while, I get a "504 Gateway Time-out" error back.
After restarting the server, everything works for a couple of minutes, then it gets stuck again.

Plesk itself seems to be reachable. I can log in and do things.

Where would be the best place to start figuring out what's going on? Any log files?


Regards
 
Hi, thanks,

I checked that and repaired it. Everything seemed to be okay with it; the output over SSH was "Error messages: 0; Warnings: 0; Errors resolved: 0".
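
(In case it helps others reading along: that check/repair is done with the standard Plesk repair utility - a sketch, exact flags may vary by version:)

Code:
plesk repair all -n   # dry run: only reports problems, fixes nothing
plesk repair all      # then run the actual repair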


I checked my proxy_error_log and saw hundreds of errors like this one (domain XXXed out by me):

2022/02/28 10:51:20 [error] 977#0: *52769 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 79.140.114.132, server: xxx.de, request: "POST /chat/loadheaderdata HTTP/2.0", upstream: "https://XXX:7081/chat/loadheaderdata", host: "www.xxx.de", referrer: "XXX DE - Telefonsex by BEATE UHSE"
 
Perhaps it is related to the default timeout limit of 60 seconds for proxying requests from nginx to Apache.
Go to Domains > example.com > Apache & nginx Settings.
Add the following lines to the "Additional nginx directives" field to increase the timeout limit to 180 seconds (3 minutes):
Code:
proxy_connect_timeout 180s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
Apply the changes.
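
If you have shell access, you can optionally verify the resulting configuration afterwards with the standard nginx tooling (Plesk already reloads nginx itself when you save):

Code:
nginx -t && systemctl reload nginx   # test the full config, reload only on success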
 
Also check the number of concurrent PHP-FPM instances and children for that domain (PHP settings). It is conceivable that the maximum number of children is already busy, so that no additional ones can be spawned until the timeout expires. You can see the actual instances by running
# ps aux | grep php-fpm
on the Linux console. You can try increasing the number of children (pm.max_children) and requests (pm.max_requests), e.g. to 50 children and 10,000 requests, and see whether this allows more responses from Apache (without overloading your system).
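
As a concrete sketch, the two directives with the example values from above would look like this in the domain's PHP additional configuration directives (the numbers are illustrative, not a recommendation):

Code:
; illustrative values only - tune to your workload and available RAM
pm.max_children = 50
pm.max_requests = 10000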

Increasing the timeout values is another good approach. However, if a chat needs more than 60 seconds to respond, there is probably an overload situation or another reason why the chat script cannot process a request; normally it should not take that long to run.
 
Hey, thanks for all the input! I will check the points above.

Is there a way to check whether there is an overload situation or not?
I am running a large forum and we imported a lot of topics (almost 500,000). Maybe this is somehow killing the SQL database or something else? How could I check that?
 
14:42:32 up 2:56, 1 user, load average: 11,46, 15,12, 18,34

But that is because I am frequently restarting the server due to the problem above.


CPU: 12 vCores
RAM: 24 GB
 
Have you checked with top or htop where the load is really coming from? A high load causes the web server (like all other components) to slow down considerably.
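
Two quick checks from the shell (plesk db is the standard Plesk database shell; the query itself is plain MySQL):

Code:
# list the top CPU consumers
ps aux --sort=-%cpu | head -n 15
# see which queries the database server is currently busy with
plesk db "SHOW FULL PROCESSLIST"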
 
Another thing I noticed is the nginx error log.
Somehow I believe there are some strange things going on here:



Code:
2022/02/28 14:56:09 [alert] 21148#0: *69482 open socket #117 left in connection 180
2022/02/28 14:56:09 [alert] 21148#0: *69234 open socket #111 left in connection 181
2022/02/28 14:56:09 [alert] 21148#0: *69589 open socket #103 left in connection 182
2022/02/28 14:56:09 [alert] 21148#0: *69549 open socket #175 left in connection 183
2022/02/28 14:56:09 [alert] 21148#0: *69539 open socket #50 left in connection 184
2022/02/28 14:56:09 [alert] 21148#0: *69452 open socket #71 left in connection 187
2022/02/28 14:56:09 [alert] 21148#0: *69451 open socket #45 left in connection 189
2022/02/28 14:56:09 [alert] 21148#0: *69498 open socket #125 left in connection 190
2022/02/28 14:56:09 [alert] 21148#0: *69579 open socket #88 left in connection 193
2022/02/28 14:56:09 [alert] 21148#0: *69494 open socket #78 left in connection 194
2022/02/28 14:56:09 [alert] 21148#0: *69554 open socket #90 left in connection 195
2022/02/28 14:56:09 [alert] 21148#0: *69536 open socket #160 left in connection 196
2022/02/28 14:56:09 [alert] 21148#0: *69593 open socket #132 left in connection 197
2022/02/28 14:56:09 [alert] 21148#0: *69340 open socket #194 left in connection 198
2022/02/28 14:56:09 [alert] 21148#0: *69380 open socket #118 left in connection 199
2022/02/28 14:56:09 [alert] 21148#0: *69279 open socket #115 left in connection 200
2022/02/28 14:56:09 [alert] 21148#0: *69429 open socket #155 left in connection 201
2022/02/28 14:56:09 [alert] 21148#0: *69585 open socket #44 left in connection 203
2022/02/28 14:56:09 [alert] 21148#0: *69581 open socket #97 left in connection 205
2022/02/28 14:56:09 [alert] 21148#0: *69413 open socket #221 left in connection 207
2022/02/28 14:56:09 [alert] 21148#0: *69519 open socket #126 left in connection 208
2022/02/28 14:56:09 [alert] 21148#0: *69506 open socket #95 left in connection 209
2022/02/28 14:56:09 [alert] 21148#0: *69540 open socket #104 left in connection 210
2022/02/28 14:56:09 [alert] 21148#0: *69325 open socket #17 left in connection 211
2022/02/28 14:56:09 [alert] 21148#0: *69578 open socket #87 left in connection 212
2022/02/28 14:56:09 [alert] 21148#0: *69533 open socket #154 left in connection 213
2022/02/28 14:56:09 [alert] 21148#0: *69503 open socket #43 left in connection 215
2022/02/28 14:56:09 [alert] 21148#0: *69440 open socket #238 left in connection 216
2022/02/28 14:56:09 [alert] 21148#0: *69505 open socket #80 left in connection 218
2022/02/28 14:56:09 [alert] 21148#0: *69591 open socket #128 left in connection 219
2022/02/28 14:56:09 [alert] 21148#0: *69574 open socket #79 left in connection 222
2022/02/28 14:56:09 [alert] 21148#0: *69530 open socket #150 left in connection 223
2022/02/28 14:56:09 [alert] 21148#0: *69416 open socket #224 left in connection 224
2022/02/28 14:56:09 [alert] 21148#0: *69542 open socket #139 left in connection 225
2022/02/28 14:56:09 [alert] 21148#0: *69526 open socket #64 left in connection 226
2022/02/28 14:56:09 [alert] 21148#0: *69258 open socket #57 left in connection 227
2022/02/28 14:56:09 [alert] 21148#0: *69544 open socket #165 left in connection 228
2022/02/28 14:56:09 [alert] 21148#0: *69511 open socket #69 left in connection 229
2022/02/28 14:56:09 [alert] 21148#0: *69273 open socket #93 left in connection 234
2022/02/28 14:56:09 [alert] 21148#0: *69278 open socket #15 left in connection 239
2022/02/28 14:56:09 [alert] 21148#0: *69576 open socket #82 left in connection 247
2022/02/28 14:56:09 [alert] 21148#0: *69572 open socket #73 left in connection 251
2022/02/28 14:56:09 [alert] 21148#0: aborting
2022/02/28 14:57:21 [error] 23725#0: *2 upstream prematurely closed connection while reading response header from upstream, client: 185.110.35.2, server: _, request: "GET /modules/repair-kit/index.php/api/process-list HTTP/2.0", upstream: "http://127.0.0.1:8880/modules/repair-kit/index.php/api/process-list", host: "server.XXX.de", referrer: "https://server.XXX.de/smb/web/view"
2022/02/28 14:57:21 [error] 23725#0: *2 upstream prematurely closed connection while reading response header from upstream, client: 185.110.35.2, server: _, request: "GET /smb/task/task-progress HTTP/2.0", upstream: "http://127.0.0.1:8880/smb/task/task-progress", host: "server.XXX.de", referrer: "https://server.XXX.de/smb/web/view"
2022/02/28 17:54:44 [error] 6262#0: *118319 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 89.22.101.191, server: , request: "GET / HTTP/1.1", upstream: "http://89.22.101.191:7080/", host: "89.22.101.191"
2022/02/28 17:57:03 [error] 6262#0: *119072 connect() failed (111: Connection refused) while connecting to upstream, client: 185.110.35.2, server: _, request: "GET /admin/task/task-progress HTTP/2.0", upstream: "http://127.0.0.1:8880/admin/task/task-progress", host: "server.XXX.de", referrer: "https://server.XXX.de/admin/server-protection/settings"
2022/02/28 17:57:23 [emerg] 259#0: bind() to 89.22.101.191:443 failed (99: Cannot assign requested address)
2022/02/28 17:57:25 [error] 655#0: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 185.110.35.2, server: _, request: "GET /admin/task/task-progress HTTP/2.0", upstream: "http://127.0.0.1:8880/admin/task/task-progress", host: "server.XXX.de", referrer: "https://server.XXX.de/admin/server-protection/settings"
2022/02/28 17:57:25 [error] 655#0: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 185.110.35.2, server: _, request: "GET /admin/task/task-progress HTTP/2.0", upstream: "http://127.0.0.1:8880/admin/task/task-progress", host: "server.XXX.de", referrer: "https://server.XXX.de/admin/server-protection/settings"
2022/02/28 17:57:26 [error] 655#0: *12 connect() failed (111: Connection refused) while connecting to upstream, client: 185.110.35.2, server: _, request: "GET /ws HTTP/1.1", upstream: "http://127.0.0.1:8880/ws", host: "server.XXX.de"
2022/02/28 17:57:26 [error] 655#0: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 185.110.35.2, server: _, request: "GET /admin/task/task-progress HTTP/2.0", upstream: "http://127.0.0.1:8880/admin/task/task-progress", host: "server.XXX.de", referrer: "https://server.XXX.de/admin/server-protection/settings"
2022/02/28 19:31:41 [error] 12581#0: *63874 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 89.22.101.191, server: , request: "GET / HTTP/1.1", upstream: "http://89.22.101.191:7080/", host: "89.22.101.191"
 
Hi, I guess I found the issue. My system goes down when Plesk starts the automatic backup scheduled for 2 am.
For some reason it stops and fails, and Plesk starts it again at 06:32.

I have no idea why this happens. But anyway, I need to stop the backup process manually (right now), otherwise the system stays blocked.

 
Check the backup log, especially the migration.log file in the backup log directory under /var/log/plesk/PMM, for the reason why the backup stops. I also recommend reviewing the compression level setting: a high compression level may lead to a very high CPU load that can cause other services to time out. If the system's load is already high and you start a backup that uses compression, the system will certainly stall. Further, I suggest lowering the backup process priority and the backup disk I/O priority, both of which can be configured in the general backup settings. It is better to let the backup take longer than to cause other processes to hang.
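
To locate the log of the failed run, something like this should work (the exact session directory names under /var/log/plesk/PMM vary, so this is just a sketch):

Code:
# find migration.log files written in the last 24 hours
find /var/log/plesk/PMM -name migration.log -mtime -1 -ls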
 
Hi Peter,
thanks again for the feedback!

All the settings you mentioned are okay (I had already set the priorities and compression level before).

One question: when I define a scheduled backup process on the /backup/schedule/ page, is this the only backup process defined then?
Or do I create another, parallel process when I change the backup time there?

Like:

1. I open the page and change the time to 1 am, click OK.
2. I open it again, change the time to 3 am and click OK.

--> There will be only one scheduled backup task, for 3 am - and not two, for 1 am and 3 am - right?

Attached is the migration.log; I had to shorten it a lot in between.


Code:
[
302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_git_2202132302_2202142302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_grafana_2202132302_2202142302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_sectigo_2202201729_2202260702.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_notifier_2202062302_2202082302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_onboarding_2202132302_2202142302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_monitoring_2202201729_2202220717.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dropbox-backup_2201232302_2201292302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_social-login_2202132302_2202182302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_log-browser_2202201729_2202220717.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_monitoring_2202132302_2202172302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_xovi_2201302302_2202012302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_dropbox-backup_2201232302_2201242302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_sectigo_2202062302_2202092302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_onboarding_2201232302_2201252302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_baqend_2202201729_2202240732.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_rest-api_2202270726.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_sectigo_2202270726.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_composer_2201302302_2201312302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_onboarding_2202201729_2202240732.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_sslit_2202201729_2202240732.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_catalog_2202132302_2202162302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_keyXX81RqcL_2202062302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_galileo_2201302302_2202022302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_notifier_2201302302_2202022302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_baqend_2201302302_2202012302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_social-login_2201302302_2202042302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_repair-kit_2202062302_2202092302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_social-login_2202201729_2202240732.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check mysql.daily.dump.4.gz
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_dist_baqend_2201302302_2202042302.tzst
[2022-03-01 11:34:55.734|19144] INFO: Check backup_ext_revisium-antivirus_2201232302_2201242302.tzst
[2022-03-01 11:34:55.734|19145] INFO: pmm-ras started : /opt/psa/admin/bin/pmm-ras --get-ftp-dump-list --use-ftp-passive-mode --lightweight-mode --dump-storage=ftp://[email protected]//html/bf_backup/ --type=server --guid=5eee3933-5cc8-4b3d-a67d-4661b003e723 --session-path=/var/log/plesk/PMM
[2022-03-01 11:34:55.734|19144] INFO: Repository '/var/lib/psa/dumps/': Get initial backup info for backup_info_2010070416_2010080002.xml
[2022-03-01 11:34:55.734|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2010070416_2010080002/props
[2022-03-01 11:34:55.734|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2010070416_2010080002/storages
[2022-03-01 11:34:55.734|19145] INFO: Repository 'ftp://XXX3205.XXXhosting-server.de//html/bf_backup/': Initializing...
[2022-03-01 11:34:55.735|19145] INFO: Curl version: 0x74f01
[2022-03-01 11:34:56.142|19144] INFO: Repository '/var/lib/psa/dumps/': Validate backup backup_info_2201232302_2201292302.xml
[2022-03-01 11:34:56.142|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201232302_2201292302/props
[2022-03-01 11:34:56.142|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201232302_2201292302/status_OK
[2022-03-01 11:34:56.142|19144] INFO: Repository '/var/lib/psa/dumps/': Validate incremental backup backup_info_2201232302_2201292302.xml
[2022-03-01 11:34:56.142|19144] INFO: Find incremental backups in '' for prefix 'backup' and base version 2201232302
[2022-03-01 11:34:56.159|19144] INFO: Read properties from xml backup_info_2202201729_2202240732.xml
[2022-03-01 11:34:56.194|19144] INFO: Repository '/var/lib/psa/dumps/': Validate backup backup_info_2201302302_2201312302.xml
[2022-03-01 11:34:56.194|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201302302_2201312302/props
[2022-03-01 11:34:56.194|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201302302_2201312302/status_OK
[2022-03-01 11:34:56.194|19144] INFO: Repository '/var/lib/psa/dumps/': Validate incremental backup backup_info_2201302302_2201312302.xml
[2022-03-01 11:34:56.194|19144] INFO: Find incremental backups in '' for prefix 'backup' and base version 2201302302
[2022-03-01 11:34:56.212|19144] INFO: Read properties from xml backup_info_2010070416.xml
[2022-03-01 11:34:56.212|19144] INFO: Repository '/var/lib/psa/dumps/': Validate backup backup_info_2010070416.xml
[2022-03-01 11:34:56.212|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2010070416/props
[2022-03-01 11:34:56.212|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2010070416/status_OK
[2022-03-01 11:34:56.595|19144] INFO: Repository '/var/lib/psa/dumps/': Validate backup backup_info_2202270726.xml
[2022-03-01 11:34:56.595|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2202270726/props
[2022-03-01 11:34:56.595|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2202270726/status_OK
[2022-03-01 11:34:56.595|19144] INFO: Read properties from xml backup_info_2201302302_2202032302.xml
[2022-03-01 11:34:56.596|19144] INFO: Repository '/var/lib/psa/dumps/': Validate backup backup_info_2201302302_2202032302.xml
[2022-03-01 11:34:56.596|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201302302_2202032302/props
[2022-03-01 11:34:56.596|19144] INFO: Repository '/var/lib/psa/dumps/': Get info from .discovered/backup_info_2201302302_2202032302/status_OK
[2022-03-01 11:34:56.596|19144] INFO: Repository '/var/lib/psa/dumps/': Validate incremental backup backup_info_2201302302_2202032302.xml
[2022-03-01 11:34:56.596|19144] INFO: Find incremental backups in '' for prefix 'backup' and base version 2201302302
[2022-03-01 11:34:56.614|19144] INFO: pmm-ras finished. Exit code: 0
[2022-03-01 11:34:57.006|19145] INFO: Pause before next attempt...
[2022-03-01 11:34:58.023|19145] INFO: TransportError Transport error: unable to list directory: Curl error: (9) Access denied to remote resource: Last FTP request: CWD bf_backup Last FTP response: 550 bf_backup: Datei oder Verzeichnis nicht gefunden [common/plesk-utils/PMM/repository-transport/transport.cpp:TransportError]
virtual void plesk::tRepositoryFtp::ListDirEx(const string&, std::__cxx11::list<plesk::FileInfo>&)
[2022-03-01 11:34:58.023|19145] INFO: pmm-ras finished. Exit code: 121
[2022-03-01 11:34:58.064|19152] INFO: pmm-ras started : /opt/psa/admin/bin/pmm-ras --check-repository --dump-storage=ftp://[email protected]//html/bf_backup --session-path=/var/log/plesk/PMM --use-ftp-passive-mode
[2022-03-01 11:34:58.064|19152] INFO: Repository 'ftp://XXX3205.XXXhosting-server.de//html/bf_backup': Initializing...
[2022-03-01 11:34:58.064|19152] INFO: Curl version: 0x74f01
[2022-03-01 11:34:58.065|19152] INFO: Repository 'ftp://XXX3205.XXXhosting-server.de//html/bf_backup': Initialized
[2022-03-01 11:34:58.065|19152] INFO: Test stage: Check directory access
[2022-03-01 11:34:58.065|19152] INFO: Ftp init url ftp://XXX3205.XXXhosting-server.de//html/bf_backup/
[2022-03-01 11:34:58.268|19152] INFO: Pause before next attempt...
[2022-03-01 11:34:59.295|19152] INFO: Pause before next attempt...
[2022-03-01 11:35:00.319|19152] INFO: pmm-ras finished. Exit code: 0
 
1. I open the page and change the time to 1 am, click OK.
2. I open it again, change the time to 3 am and click OK.

--> There will be only one scheduled backup task, for 3 am - and not two, for 1 am and 3 am - right?
Yes, correct.

...
[2022-03-01 11:34:56.614|19144] INFO: pmm-ras finished. Exit code: 0
[2022-03-01 11:34:57.006|19145] INFO: Pause before next attempt...
[2022-03-01 11:34:58.023|19145] INFO: TransportError Transport error: unable to list directory: Curl error: (9) Access denied to remote resource: Last FTP request: CWD bf_backup Last FTP response: 550 bf_backup: Datei oder Verzeichnis nicht gefunden [common/plesk-utils/PMM/repository-transport/transport.cpp:TransportError]
virtual void plesk::tRepositoryFtp::ListDirEx(const string&, std::__cxx11::list<plesk::FileInfo>&)
[2022-03-01 11:34:58.023|19145] INFO: pmm-ras finished. Exit code: 121
[2022-03-01 11:34:58.064|19152] INFO: pmm-ras started : /opt/psa/admin/bin/pmm-ras --check-repository --dump-storage=ftp://[email protected]//html/bf_backup --session-path=/var/log/plesk/PMM --use-ftp-passive-mode
[2022-03-01 11:34:58.064|19152] INFO: Repository 'ftp://XXX3205.XXXhosting-server.de//html/bf_backup': Initializing...
[2022-03-01 11:34:58.064|19152] INFO: Curl version: 0x74f01
[2022-03-01 11:34:58.065|19152] INFO: Repository 'ftp://XXX3205.XXXhosting-server.de//html/bf_backup': Initialized
[2022-03-01 11:34:58.065|19152] INFO: Test stage: Check directory access
[2022-03-01 11:34:58.065|19152] INFO: Ftp init url ftp://XXX3205.XXXhosting-server.de//html/bf_backup/
[2022-03-01 11:34:58.268|19152] INFO: Pause before next attempt...
[2022-03-01 11:34:59.295|19152] INFO: Pause before next attempt...
...
Looks like your FTP storage space is inaccessible - the FTP server answers the CWD request with "550 bf_backup: Datei oder Verzeichnis nicht gefunden" ("file or directory not found"). For a working backup you'll need to correct that problem first. Also, please pay attention to this thread: Forwarded to devs - Recurring, intermittent backup process stuck since update to 18.0.41 on three independent systems
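
You can reproduce that check outside of Plesk with curl, which pmm-ras uses internally; the host below is the masked one from your log, and BACKUPUSER is a placeholder for your FTP account:

Code:
# list the backup directory over FTP (passive mode is curl's default)
curl -u BACKUPUSER --list-only "ftp://XXX3205.XXXhosting-server.de//html/bf_backup/"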
 
Perhaps it is related to the default timeout limit of 60 seconds for proxying requests from nginx to Apache.
Go to Domains > example.com > Apache & nginx Settings.
Add the following lines to the "Additional nginx directives" field to increase the timeout limit to 180 seconds (3 minutes):
Code:
proxy_connect_timeout 180s;
proxy_send_timeout 180s;
proxy_read_timeout 180s;
fastcgi_send_timeout 180s;
fastcgi_read_timeout 180s;
Apply the changes.
I am trying to apply the changes but am not able to save them; I am getting this error:

Invalid nginx configuration: nginx: [emerg] bind() to [2406:da1a:e54:b700:eb4e:e414:10ff:366]:443 failed (99: Cannot assign requested address) nginx: configuration file /etc/nginx/nginx.conf test failed

Unable to use the current nginx configuration file and to rollback to the previous version of the file because they both contain invalid configuration.
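
That bind() error (99: Cannot assign requested address) usually means nginx is configured to listen on an IPv6 address that is not actually assigned to any interface on the server. A quick way to check, plus a common follow-up (a sketch, not guaranteed to be the fix):

Code:
# is the IPv6 address from the error actually configured on the host?
ip -6 addr show | grep -i '2406:da1a'
# re-sync the web server configuration that Plesk generates, then re-test
plesk repair web -y
nginx -t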
 