Question NGINX Optimization

safemoon

Basic Pleskian
Hello,

How can I set up my server to handle high traffic with many concurrent users with NGINX? I would like to use all the benefits of the technologies NGINX offers, such as micro caching, reverse proxying, load balancing, etc.

Ideally, I would like to set up a second server to act as a load balancer. When many users visit my website (a PHP application), they should be split between the two servers.
 
Not really a Plesk question.

Tuning NGINX, or anything really, requires knowledge of what you're hosting. Streaming videos? Building a CDN? A forum? I'd recommend either doing some research yourself on what-does-what when it comes to NGINX, or hiring a systems admin. Any advice I or anyone else gives you here will be extremely generalized and won't necessarily be the best option for your use case.

Having said that:
micro caching, reverse proxy, load balancing, etc.
NGINX doesn't magically do these things, and they won't magically make your site faster either. Microcaching can be configured in Plesk -> Webserver Settings -> Enable NGINX Caching -> set the TTL to 30 seconds or so.
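For reference, this is roughly the kind of config that toggle generates; a manual equivalent for one vhost would look something like the sketch below. The cache path, zone name, and the 7080 backend port (Plesk's usual Apache port) are assumptions — adjust to your setup:

```nginx
# Microcache sketch: cache successful responses for ~30 s in front of Apache.
proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=microcache:10m
                 max_size=256m inactive=10m;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache           microcache;
        proxy_cache_valid     200 30s;    # the ~30 second TTL
        proxy_cache_use_stale updating;   # serve stale entries while refreshing
        proxy_cache_lock      on;         # collapse concurrent cache misses
        proxy_pass            http://127.0.0.1:7080;
    }
}
```

Even a 30-second TTL can absorb most of a traffic burst, since repeated requests within that window never hit PHP at all.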

NGINX acts as a reverse proxy when you're using NGINX + Apache. I wouldn't use NGINX just for the sake of having a reverse proxy.

Load balancing is tricky. You need at least three servers: a load balancer and two backends. While in theory your LB can double as a backend, it's really not a good idea. Load-balanced Plesk isn't officially supported and will likely require major modifications and configuration changes. Again, I'd defer that to a systems admin, as it's dependent on the use case.
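Just to illustrate what the load-balancer node itself would run, a minimal NGINX upstream config looks something like this (IPs are placeholders; both backends would serve identical copies of the app):

```nginx
# least_conn sends each request to the backend with the fewest active connections.
upstream app_pool {
    least_conn;
    server 203.0.113.10:80 max_fails=3 fail_timeout=30s;
    server 203.0.113.11:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass       http://app_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

That's the easy 10% — the hard 90% is keeping the backends' files, databases, and sessions consistent.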

For general optimization, Centminmod's stack/conf is a great place to start.
 

Hello, I will be hosting a PHP application that will be a multi-vendor eCommerce website, basically an online marketplace.
I have already done extensive research on many configurations, but I could not find sufficient help.

I want to use a reverse proxy so I can have the same application (files) on other servers and split the load when users visit the domain: some users access the files from server #1 and others from server #2. As I can see, NGINX has the ability to do that based on location, so if one server is in the USA and the other in Europe, each user benefits accordingly.

Load balancing is something that really confuses me... some people say that you need multiple servers, but by multiple servers they mean different ports, or another web server on the same machine... Could you please clarify that topic? How can I achieve load balancing with NGINX?
Load balancing doesn't have to be done directly from Plesk; can't it be configured from shell access (SSH)?

The goal is to be able to have thousands of concurrent connections without interference and timeouts.

Thank you for your reply.
 
The goal is to be able to have thousands of concurrent connections without interference and timeouts.
It's a good goal, but not one I can answer here and now. Again, there are many, many variables. For one, how much load exactly? What kind of budget? Are you solely trying to meet a concurrency goal, or HA for uptime/failover? Do you currently have this traffic? Is it bursty or constant?

I'm not going to write a thesis here, but there are many ways to do load balancing as you describe. Actually, my earlier statement about needing three servers is wrong: it's perfectly possible to do this via DNS, anycast, etc.

Load balancing, at its core, is splitting traffic across a "pool" of servers, to balance load as the name implies. You don't need a reverse proxy to do this, though it's a pretty common method. There are other ways, usually at lower OSI layers, that route the actual traffic rather than the request.

Some users access the files from server #1 and others from server #2. As I can see, NGINX has the ability to do that based on location, so if one server is in the USA and the other in Europe, each user benefits accordingly.
You need something to route requests to these servers. How do you determine who goes to the US and who goes to the EU? A load balancer/proxy can do this, using the IP for geolocation. You could also anycast the same IP, so users get routed to the closest origin, or use DNS to serve different records.
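As a sketch of the proxy-based variant: NGINX can pick a backend by country if it was built with ngx_http_geoip_module and a country database is installed. Everything here (database path, country lists, IPs) is a placeholder:

```nginx
# Route by GeoIP country code; anyone unmatched falls through to the US pool.
geoip_country /usr/share/GeoIP/GeoIP.dat;

map $geoip_country_code $nearest_backend {
    default           us_backend;
    ~^(DE|FR|NL|GB)$  eu_backend;
}

upstream us_backend { server 203.0.113.10:80; }
upstream eu_backend { server 198.51.100.10:80; }

server {
    listen 80;

    location / {
        proxy_pass http://$nearest_backend;
    }
}
```

Note the catch: with this design the proxy itself sits in one region, so EU users still cross the ocean to reach it. That's why anycast or GeoDNS is usually preferred for geographic routing.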
Load balancing is something that really confuses me... some people say that you need multiple servers, but by multiple servers they mean different ports, or another web server on the same machine... Could you please clarify that topic? How can I achieve load balancing with NGINX?
It's really not an "I have NGINX, how do I load balance?" kind of question. You can set up a basic configuration with NGINX, sure. But if you'd like it to work properly, and well, there's quite a bit you'd have to plan for. Load balancing literally means distributing requests ("load") across multiple backends. These could be DNS servers, web servers, kube clusters, or physical servers. In your case you'd need the latter, as you want them geographically separate.

Load balancing doesn't have to be done directly from Plesk; can't it be configured from shell access (SSH)?
Yes, but you'd need to make your "Plesk" servers work with load balancing. There are lots of implications and it's an extremely complex topic. How do you route traffic? How do you handle failover scenarios? How do you handle replication (you can't just lsync/rsync your database) or data conflicts/mismatches? How do you handle user sessions?
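To give a flavor of the failover and session questions at the NGINX level: the upstream module has basic building blocks for both, sketched below with placeholder IPs. Be aware that ip_hash is a very naive way to pin sessions (it breaks behind CGNAT and shifts users when the pool changes):

```nginx
upstream app_pool {
    ip_hash;   # naive stickiness: hashes the client IP to one backend
    server 203.0.113.10:80 max_fails=3 fail_timeout=30s;
    server 203.0.113.11:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_pool;
        # on an error/timeout, retry the request on the next backend
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

This only covers the proxy's view of failover; it does nothing about database replication or shared session storage, which are the genuinely hard parts.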

Frankly, I'd really, really recommend:
1. Hiring someone
2. Evaluating whether you actually *need* load balancing. A 1-core, 1 GB VPS can serve 5-10k requests/second of static HTML files. You can certainly get one server serving hundreds or thousands of connections if it's tuned and set up optimally.
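On the single-server tuning side, these are the usual NGINX concurrency knobs. The values below are illustrative, not recommendations — and they only help if the OS limits (ulimit -n, sysctl) are raised to match:

```nginx
# nginx.conf connection-capacity settings
worker_processes     auto;    # one worker per CPU core
worker_rlimit_nofile 65535;   # per-worker open-file (and thus connection) limit

events {
    worker_connections 16384; # simultaneous connections per worker
    multi_accept       on;    # drain the accept queue aggressively
}
```

In practice, PHP-FPM's pm.max_children and MySQL's max_connections hit their ceilings long before NGINX does for a dynamic site.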
 
Hello,
Thank you for your input
I am a software developer, and I have a client who wants to run my software at these specifications. Therefore I need to send him a quotation, and I need to know how this can be set up so I can advertise it and learn the server configuration part.

Most of your questions I cannot answer.
As for the database, I have developed the PHP part of my software to handle read requests on MySQL from multiple (master/slave) databases and keep them in sync on write requests.
Sessions are stored in the DB/on SSD.
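For what it's worth, the read/write split being described can be sketched in a few lines. This is a simplified Python illustration of the routing idea only (DSN strings and the class are hypothetical, not the poster's PHP implementation):

```python
import itertools

class ReadWriteRouter:
    """Send writes to the primary; round-robin reads across replicas."""

    WRITE_VERBS = ("insert", "update", "delete", "replace",
                   "create", "alter", "drop")

    def __init__(self, primary_dsn, replica_dsns):
        self.primary = primary_dsn
        self._replicas = itertools.cycle(replica_dsns)  # endless rotation

    def route(self, sql):
        # Classify the statement by its first keyword.
        first_word = sql.lstrip().split(None, 1)[0].lower()
        if first_word in self.WRITE_VERBS:
            return self.primary          # all writes go to the primary
        return next(self._replicas)      # reads rotate across replicas
```

Note this ignores replication lag: a read routed to a replica right after a write may return stale data, which an eCommerce checkout flow has to account for.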

I couldn't find any proper tutorials on how to set up a server, or two servers, to split the load and handle concurrent users.

I am using loader.io and it always crashes when I get to 400 concurrent users, even though the server is powerful and does not consume many resources.
I have checked the basic Apache/NGINX configs to allow more connections, but they did not help.

I also tried ApacheBench with a concurrency level of 14000; it completed successfully within 8 seconds, and the same test after some optimization in 4 seconds. But the loader.io test still fails at the same 400 concurrent users.
 
I am a software developer, and I have a client who wants to run my software at these specifications. Therefore I need to send him a quotation, and I need to know how this can be set up so I can advertise it and learn the server configuration part.
It helps to know how your application functions and where the bottlenecks are. But again, I would really, really recommend hiring a consultant/sysadmin.

As for the database, I have developed the PHP part of my software to handle read requests on MySQL from multiple (master/slave) databases and keep them in sync on write requests.
That's great, and it could make things easier. Or it might not.
I couldn't find any proper tutorials on how to set up a server, or two servers, to split the load and handle concurrent users.
Because, again, it's far from a one-size-fits-all solution.

This article might be a good start: How To Set Up Highly Available Web Servers with Keepalived and Floating IPs on Ubuntu 14.04 | DigitalOcean
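The core of that article is a keepalived VRRP pair sharing a floating IP. A minimal keepalived.conf for the MASTER node looks roughly like this — the interface name, priorities, password, and IP are placeholders, and on DigitalOcean specifically the IP reassignment goes through their API rather than plain VRRP:

```
# /etc/keepalived/keepalived.conf (MASTER node)
vrrp_instance VI_1 {
    state MASTER            # the standby node uses "state BACKUP"
    interface eth0
    virtual_router_id 51
    priority 150            # standby uses a lower priority, e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        203.0.113.50        # the floating IP clients connect to
    }
}
```

If the MASTER stops sending VRRP advertisements, the BACKUP node claims the floating IP within a few seconds — failover for the entry point, but again nothing about keeping the two backends' data in sync.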

Again, nowhere near comprehensive. If you just copy the commands and run them, it *may* work, but it certainly won't be the best solution.

I am using loader.io and it always crashes when I get to 400 concurrent users, even though the server is powerful and does not consume many resources.
I have checked the basic Apache/NGINX configs to allow more connections, but they did not help.
How do you know it has enough resources? What happens? What crashes? What errors? There are many potential bottlenecks: NGINX/Apache is one, PHP/PHP-FPM is another, MySQL another, I/O another, external requests yet another. You want to figure out what is happening and what causes it to hit a standstill at 400.

I also tried ApacheBench with a concurrency level of 14000; it completed successfully within 8 seconds, and the same test after some optimization in 4 seconds. But the loader.io test still fails at the same 400 concurrent users.
Different tests test different things. loader.io and ab use different load profiles and request methods.
 