Is OpenLiteSpeed a good Apache Replacement?

Not being able to answer is what’s weak here


You’re welcome to visit here, we don’t bite (in general)

Thanks bro.

OK, back to benchmarks:

Considering trying it out for a few of my sites. When I tried installing it alongside Nginx + PHP-FPM on Debian, it messed up the config for the FPM packages (it locked them so they couldn't be upgraded), so it seems LiteSpeed's custom PHP build doesn't like sitting alongside other PHP builds.

There’s an Apache module that adds LSAPI support to Apache (part of CloudLinux), which should in theory give Apache a similar performance boost for PHP. I haven’t tried it, though.
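For reference, enabling that module (mod_lsapi, the CloudLinux one) looks roughly like this in the Apache config. This is a sketch, not a tested setup; the module path depends on your distribution's layout:

```apache
# Load the LSAPI module (path is an assumption; adjust for your build)
LoadModule lsapi_module modules/mod_lsapi.so

<IfModule lsapi_module>
    # Route PHP requests through LSAPI instead of mod_php or PHP-FPM
    lsapi_engine On
    AddType application/x-httpd-lsphp .php
</IfModule>
```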


I installed OLS and then CyberPanel in December, but nginx performed so much better.

Too many bugs on CyberPanel.

Will try again in a couple of months

Still, that wouldn’t address the fact that Apache is a bloated resource hog.

Try LSWS with Plesk or DirectAdmin, it works flawlessly

Emm… no money for that :rofl:

I will keep my beautiful nginx meanwhile.


Let’s compare .conf files. I hated it, but I’ve grown so used to the Debian Apache configuration that I don’t know if I could cat everything together and live with it.

I’d like to throw nginx in front, and then squid, and then BunnyCDN… and then…


I used to use Varnish + nginx for WordPress, but suddenly (I don’t know why) WordPress started taking so long to save posts (and clear the Varnish cache) that I had to get rid of Varnish.

Pure nginx now, but A LOT slower. Still faster than OLS.


I tested OpenLiteSpeed some time ago and the performance didn’t seem that much better than Nginx’s.

I currently have Nginx for static content with Apache behind it for PHP. For my own sites I could just use Nginx, but I’m hosting a few other people’s sites (family and friends), and it’s just too much of a pain with pure Nginx since I’d need to convert all their .htaccess files. It’s easier to put Apache in the stack so .htaccess works.
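That kind of setup looks roughly like this on the nginx side. A sketch only: the assumption here is that Apache listens on 127.0.0.1:8080 and the docroot path is a placeholder.

```nginx
server {
    listen 80;
    root /var/www/example;

    # Serve static assets directly from nginx
    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg|woff2?)$ {
        expires 7d;
    }

    # Everything else (PHP, .htaccess-driven rewrites) goes to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```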


This one is very nice too, fast as hell

The thing with that graph is that it doesn’t say which Apache MPM module is being used. If they tested with something like prefork, then of course it’ll be slower. I don’t think Apache has an MPM module that uses non-blocking I/O, though :frowning_face:

Also, why would the server load go down slightly with 50 connections to LSWS compared to 20?
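For what it’s worth, if they had tested Apache’s event MPM instead of prefork, the tuning would look along these lines. The numbers below are illustrative placeholders, not recommended values:

```apache
# e.g. mods-available/mpm_event.conf — values are illustrative, not tuned
<IfModule mpm_event_module>
    StartServers             2
    ServerLimit              4
    ThreadsPerChild          25
    MaxRequestWorkers        100
    MaxConnectionsPerChild   0
</IfModule>
```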

What I’ve found to work best is WP Super Cache plus Nginx configured to serve the super cache HTML files directly. Last I checked, WP Super Cache didn’t come with an example Nginx configuration out-of-the-box, so my config was based on an example in the WordPress wiki.


set $cache_uri $request_uri;

# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
  set $cache_uri 'null cache';
}
if ($query_string != "") {
  set $cache_uri 'null cache';
}

# Don't cache uris containing the following segments
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
  set $cache_uri 'null cache';
}

# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
  set $cache_uri 'null cache';
}

set $cachefile "/wp-content/cache/supercache/$http_host/$cache_uri/index.html";
if ($https ~* "on") {
  set $cachefile "/wp-content/cache/supercache/$http_host/$cache_uri/index-https.html";
}


index index.php;

# Deny all attempts to access hidden files such as .htaccess, .htpasswd, .DS_Store (Mac).
location ~ /\. {
  deny all;
}

# Block PHP files in uploads, content, and includes directory.
location ~* /(?:uploads|files|wp-content|wp-includes)/.*\.php$ {
  deny all;
}

# Far future expires for autoptimize combined/minified files
location /wp-content/cache/autoptimize/ {
  expires 1y;
}

# Regular content
location /wp-content/uploads/ {
  expires 1w;
}

location ~ \.php$ {
  include fastcgi_params;
  fastcgi_pass php7;
}

location / {
  try_files $cachefile $uri $uri/ /index.php?$args;
}

include /etc/nginx/snippets/wordpress/wp-super-cache.conf;
include /etc/nginx/snippets/wordpress/common.conf;

Then I have each WordPress site configured like this (e.g. in /etc/nginx/sites-available/):

server {
        listen 443 ssl http2;
        ssl_certificate /etc/letsencrypt/live/;
        ssl_certificate_key /etc/letsencrypt/live/;
        root /var/www/;
        index index.php;

        include snippets/wordpress.conf;
}

# Redirect to HTTPS
server {
        listen 80;
        return 301 https://$host$request_uri;
}
I use some options from W3 Total Cache (browser, object, and database cache). For page caching I do the configuration directly in Nginx with fastcgi_cache. But Varnish was better.
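For reference, page caching with fastcgi_cache looks roughly like this. A sketch only: the zone name, cache path, and socket path are assumptions, not taken from the setup above.

```nginx
# In the http {} block: 100 MB cache zone keyed on scheme/method/host/uri
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

# In the server's PHP location block:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_cache WPCACHE;
    fastcgi_cache_valid 200 301 302 10m;
    # Serve a stale entry while a fresh one is being generated
    fastcgi_cache_use_stale error timeout updating;
}
```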

I don’t understand these panels limiting the RAM we can use.

This is unfair and biased as hell. They conveniently only tested Nginx without any sort of caching, and we all know WordPress does terribly without caching. This is called bullshit marketing.


Which Apache MPM? I don’t suppose they tested the “event” MPM, but instead used the very old and bad “prefork”.

don’t “suppose” or “assume”

Run both on separate but identical VPSes and see it for yourself.

While OLS is pretty fast, I found the problems arising from it back then (client websites acting up, etc.) not worth the hassle compared to an Apache (+nginx) setup.