Nginx Throughput/Reverse Proxy Optimization?

I’ve been benchmarking a rewrite of a JSON API for one of my sites, an ASP.NET Core (C#) site using the Kestrel web server. I’m using hey with 50 concurrent requests and a total of 20,000 requests:

./hey_linux_amd64 -n 20000 -H 'Content-Type: application/x-www-form-urlencoded' -m POST -d '....' 'https://example.com/api/calculate.json'

When I hit the app directly via HTTP, I’m seeing ~7000 requests per second with an average response time of 6.9ms. With Nginx in front of it, however, I’m only seeing ~3600 requests per second and nearly double the average response time (13.6ms).

Still acceptable of course, and it’s a pretty large improvement over an old PHP version of the same site, but I’m just surprised that Nginx is adding that much overhead. Are there any performance tweaks I should look at with Nginx to get it closer to the speeds I can achieve when directly hitting the backend server?

The Nginx virtual host config is pretty straightforward - roughly something like this:

server {
	server_name example.com;
	listen 443 ssl http2;
	listen [::]:443 ssl http2;
	ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
	ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

	root /var/www/example/dotnet-beta/;

	gzip_static on;
	brotli_static on;

	try_files /ClientApp/dist/$uri @aspnet;

	location ~* \.(?:js|css)$ {
		root /var/www/example/dotnet-beta/ClientApp/dist/;
		expires max;
	}

	location @aspnet {
		proxy_pass http://unix:/run/example-beta.sock;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection keep-alive;
		proxy_set_header Host $host;
		proxy_cache_bypass $http_upgrade;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
	}
}

The main Nginx config is just the stock Debian one with a few tweaks such as changing the SSL ciphers to remove insecure ones.
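For concreteness, the cipher tweaks are roughly along these lines (illustrative, from memory rather than the exact file):

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;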

This is on a VPS with one core so I wonder if context switching is the main reason for the overhead?

No idea about interaction with C# servers at all, but I’d suggest (as usual) double-checking @eva2000’s forum. I’ve eventually resorted to repackaging NGINX for non-CentOS installs with almost all the patches suggested there, plus ModSecurity and a few from openresty; double-check the configs he details as well. I’d say the difference from a “vanilla” NGINX is noticeable (even if I haven’t measured it as scientifically as George does). nginxconfig.io is another good place to double-check configs for different purposes.
One little thing that caught my attention was
proxy_pass http://unix:/run/example-beta.sock;
(according to the docs, a UNIX-domain socket path specified after the word “unix” should be enclosed in colons)
A few common, mostly-safe “tweaks” are pcre_jit, tcp_nopush, open_file_cache, and ssl_session_cache.
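In nginx.conf those would look something like this (values are illustrative, tune for your box):

pcre_jit on;	# goes in the main (top-level) context

http {
	tcp_nopush on;
	open_file_cache max=10000 inactive=60s;
	open_file_cache_valid 120s;
	ssl_session_cache shared:SSL:10m;
	ssl_session_timeout 1h;
}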
my 2¢

Try using a TCP socket instead of the UNIX one.
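A sketch of what I mean, with an upstream block so you can also enable backend keepalive (the port and keepalive count are just examples):

upstream aspnet_backend {
	server 127.0.0.1:5000;	# example Kestrel port
	keepalive 32;	# pool of idle connections reused across requests
}

location @aspnet {
	proxy_pass http://aspnet_backend;
	proxy_http_version 1.1;
	proxy_set_header Connection "";	# must be cleared for upstream keepalive to work
	# ...rest of the proxy_set_header lines as before
}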

Does the first test go to HTTP and the second to HTTPS? If so, that’s your overhead right there.

His hey command-line example uses https:// so I’d guess both were HTTPS.

If it has to re-establish a separate encrypted session for the backend transport, well, that’d do it.

Great suggestion, thanks! Will do.

I tested both, just forgot to include it in my post. TCP was slightly slower.

I tested both with both HTTP and HTTPS, I just forgot to include the “listen 80” in the config I posted here. The real config is split across a few separate include files.

HTTPS should be faster since it’ll be using HTTP/2. I can’t remember the exact difference, but I’ll test it again when I get a chance.

So I was looking at a few benchmarks, and apparently ASP.NET Core is faster than Nginx even for plain text files (e.g. see the TechEmpower Framework Benchmarks), so I guess I’m always going to see some sort of slowdown when using Nginx. I do need a web server of some sort in front of it though, as the server hosts a few sites, some of which are PHP. I’ll see if I can speed up Nginx, or try something else (like HAProxy).

I wonder if anyone has built a reverse proxy on top of Kestrel - seems like its perf would be beneficial.

Daniel,

If you’re passing from HTTPS on NGINX through to HTTPS on the backend, you’re encrypting twice. Make the back end non-SSL and terminate TLS only on the front end. See how that affects things beyond changing your socket config.

I guess I wasn’t clear in my post. The scenarios I tested were:

  • Direct connection to HTTP backend
  • Direct connection to HTTPS backend
  • HTTP Nginx to backend via HTTP
  • HTTP Nginx to backend via Unix socket
  • HTTPS Nginx to backend via HTTP
  • HTTPS Nginx to backend via Unix socket

Didn’t actually use HTTPS for Nginx to backend.

Does the content change much/regularly? Could you use fastcgi_cache at all?
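Since you’re proxying rather than using FastCGI, proxy_cache would be the equivalent; roughly like this (zone name, paths, and timings made up):

proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=100m inactive=10m;

location @aspnet {
	proxy_cache api_cache;
	proxy_cache_valid 200 1m;
	# note: caching POST responses needs proxy_cache_methods POST and a cache
	# key that includes the request body, which is fragile; works best for GET
}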

You could also try using LiteSpeed; seems the performance in the latest versions is several times that of nginx for common tasks.

It doesn’t change much, but the input to the API varies a bit and the output depends on the input, so I’m not sure caching would help much (plus the backend service is very fast anyway).

Just tried LiteSpeed and it does look faster - I’m seeing ~5000 requests per second (vs ~3600 with Nginx) and an average response time of 9.9ms (vs 13.6ms with Nginx). That’s with an out-of-the-box config; all I did was enable SSL and add the virtual host with a reverse proxy configuration. However, serving the static assets (CSS/JS) seems slower for some reason - I think it’s not using the pre-compressed files (.gz and .br) from disk.

I’m considering getting another IP for this VPS and just directly serving the backend service rather than proxying it - seems like the reverse proxies just reduce throughput and don’t really add much value in this case :stuck_out_tongue: