Mustafa Can Yücel

Switching to Caddy Server

Why Caddy over Apache

Caddy and Apache2 are both popular web servers, but there are several reasons why Caddy stands out as a better server for many use cases.

Firstly, Caddy has a more user-friendly and intuitive configuration. Its configuration file uses simple and human-readable syntax, making it easier for developers and system administrators to set up and manage their web servers. On the other hand, Apache2's configuration can be complex and overwhelming, especially for beginners. Caddy's configuration also supports automatic HTTPS by default, which simplifies the process of setting up secure connections for websites.

Another advantage of Caddy is its built-in support for modern web technologies. Caddy comes bundled with many useful features out of the box, such as HTTP/2, WebSocket, and reverse proxy. These features enable developers to build high-performance and interactive web applications without having to manually configure and integrate additional modules. Apache2, while highly customizable through modules, requires additional setup and configuration to achieve similar functionality.

Caddy's focus on security is another reason why it is often preferred over Apache2. Caddy automatically obtains and manages SSL/TLS certificates using Let's Encrypt, ensuring that websites are encrypted by default. This simplifies the process of securing websites, especially for those who are not well-versed in managing certificates. Additionally, Caddy provides advanced security features like HTTP Strict Transport Security (HSTS) and Content Security Policy (CSP) that help protect against common web vulnerabilities.

Furthermore, Caddy excels in terms of performance and resource efficiency. Its lightweight design and minimal memory footprint make it ideal for environments with limited resources or high-traffic websites. Caddy's modular architecture allows it to load only the necessary modules for each request, reducing overhead and improving performance compared to Apache2, which often loads a larger set of modules even if they are not required.

Lastly, Caddy's active and responsive community plays a significant role in its success. The Caddy community is known for providing excellent support and regularly contributing new features and improvements. The project has a strong focus on documentation and a comprehensive website, making it easier for users to find answers and resources to their questions.

While Apache2 remains a powerful and widely used web server, Caddy offers a more streamlined and user-friendly experience, robust security features, better performance, and an active community. These factors make Caddy an excellent choice for those seeking a modern, efficient, and hassle-free web server solution.

For all these reasons, we are going to switch to Caddy Server. If you are configuring these services for the first time, you can follow the Caddy configurations in the following sections. This will be a quite long journey, so buckle up!

How to Remove Apache2

First and foremost, it is a good idea to take a snapshot of your server; most cloud providers offer this feature, and it lets you revert to the previous state if something goes wrong. Then we back up the Apache2 configuration files:

cp -r /etc/apache2/sites-available /home/user/apache-backup
Now our Apache2 configuration files are backed up to our user home folder. We can remove and purge Apache2 with all its dependencies and configuration files:
sudo service apache2 stop
sudo apt-get purge apache2 apache2-utils apache2-bin apache2-data
sudo apt-get autoremove
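If you want to make sure nothing was left behind, a quick and entirely optional check is to list any remaining Apache packages and confirm the service is gone:
dpkg -l | grep apache2
sudo systemctl status apache2
The first command should print nothing (or only residual-config "rc" entries), and the second should report that the unit could not be found.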

Installing Caddy

Note From the Future

The post below covers a bare-metal installation of Caddy, i.e. it is installed directly on the operating system. This has advantages such as lower resource usage and better performance. Alternatively, Caddy can be run as a Docker container, which offers easier management and better isolation. If you are interested in that option, check out the newer post.

We can install the stable release of Caddy with the following commands:

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
We can test the installation by running the following command:
caddy version
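The Debian package also sets Caddy up as a systemd service, so it is worth confirming that the service is enabled and running:
systemctl status caddy
If it is not running yet, sudo systemctl enable --now caddy will start it and enable it at boot.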

Caddy Configuration Basics

Caddy can be configured either with its own Caddyfile or with a JSON config file; you can see a comparison of the two here. We will use the Caddyfile, because it is easier to write and read, and kind of fun to craft. On the other hand, if you are planning to automate the configuration or have niche requirements, you might want to use the JSON config file.

The default Caddyfile is located at /etc/caddy/Caddyfile, and this is the file we will use to configure our server. The file is divided into site blocks; each block starts with one or more site addresses (a domain name, an IP address, or a port). An optional global options block (a block without an address) may appear at the very top of the file. Each site block contains directives that configure how requests to that site are handled. A directive is written as directive_name argument1 argument2 ..., with the arguments separated by spaces; arguments that contain spaces are enclosed in double quotes. Some directives open their own brace-delimited sub-blocks with further options. A line can be commented out by prepending a #, which is handy for describing the configuration or temporarily disabling a directive. For more information about the Caddyfile, see here.

The very basic static file server configuration looks like this:

example.com {
root * /var/www/example.com
file_server
}
As usual, the first line is the site address (our domain name). The second line is the root directive, which sets the root directory for the site; the * is a matcher that applies the directive to all requests. The third line is the file_server directive, which enables the static file server that serves files from the root directory. We will have multiple subdomains, and one way to deal with this is a wildcard certificate. However, Let's Encrypt only issues wildcard certificates through the DNS-01 challenge, which would require a DNS provider module in Caddy, so we will use a separate block for every subdomain instead.
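To make that concrete, here is a rough sketch of how a Caddyfile with several subdomains can look (the subdomain names, paths, and the port below are made up for illustration and should be replaced with your own):
example.com {
    root * /var/www/example.com
    file_server
}

blog.example.com {
    root * /var/www/blog.example.com
    file_server
}

app.example.com {
    # hypothetical backend listening on localhost
    reverse_proxy localhost:8080
}
Caddy will obtain and renew a separate certificate for each of these site blocks automatically.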

Configuring PHP

First, we install PHP-FPM and the PHP extensions we will need for Caddy (assuming we already have PHP 8.2 installed):

sudo apt-get install php-fpm php-mysql php-curl php-gd php-mbstring php-common php-xml php-xmlrpc -y
Then we configure PHP-FPM (note that we have PHP 8.2 installed):
sudo vi /etc/php/8.2/fpm/pool.d/www.conf
We change the following lines:
user = caddy
group = caddy
listen.owner = caddy
listen.group = caddy
Save and close the file then restart the PHP-FPM service to apply the changes:
sudo systemctl restart php8.2-fpm
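To confirm that the pool picked up the new settings, you can check the owner of the PHP-FPM socket that Caddy will talk to (the path below is the Debian/Ubuntu default for PHP 8.2; adjust it if yours differs):
ls -l /run/php/php8.2-fpm.sock
The socket should now be owned by the caddy user and group.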

Dealing With the Existing SSL Certificates

By default, Caddy can request and install SSL certificates for any site using Let's Encrypt. If you already have certificates for your domains (and subdomains), you can either revoke these existing certificates and let Caddy handle the rest, or you can use the existing certificates. We will revoke the existing certificates because it is easier to let Caddy handle the certificates. If you want to use the existing certificates, you can skip this section.

With certbot, you can list the existing certificates (if you do not have certbot installed, check out our post):

sudo certbot certificates
You can revoke the certificates with the following command:
sudo certbot revoke --cert-name domain.example.com

Configuring Our Home Page

Let's start with our home page:

example.com {
    root * /var/www/example.com
    file_server
}
That's it. We can apply the new configuration by reloading Caddy (since we do not pass a Caddyfile path, run this from the /etc/caddy directory):
sudo caddy reload
Now it should be working at https://example.com. See, that was much easier than Apache.
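If you want to catch syntax mistakes before a reload, Caddy can also validate the file without applying it:
sudo caddy validate --config /etc/caddy/Caddyfile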

Configuring Firefly-III

We will revoke our existing certificate for Firefly III and let Caddy handle the certificate:

sudo certbot revoke --cert-name firefly.example.com
We will use the following configuration for Firefly III (remember that the configuration file for Caddy is located at /etc/caddy/Caddyfile):
firefly.example.com {
    root * /var/www/firefly-iii/public
    file_server
    encode gzip
    php_fastcgi unix//run/php/php8.2-fpm.sock
}
We reload the Caddy configuration (note that we do not give a Caddyfile path, so we have to be in the /etc/caddy directory):
sudo caddy reload
Now our Firefly III should be working at https://firefly.example.com.

Transferring Ownership

If you have any previous data (accounts, transactions, etc.) in your Firefly app, you will get a permission denied error when you try to access it (oddly enough, the home page will work, but the pages for accounts, reports, etc. will crash), because all the cache files were created by the www-data user and group. To fix this, we need to change the owner and group of the files to caddy. We can do this by running:

sudo chown -R caddy:caddy /var/www/firefly-iii
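To verify the change, the following quick (and optional) check should print nothing, meaning no files owned by another user are left:
sudo find /var/www/firefly-iii ! -user caddy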

Configuring NextCloud

We will revoke our existing certificate for NextCloud and let Caddy handle the certificate:

sudo certbot revoke --cert-name nextcloud.example.com
We will use the following configuration for NextCloud (remember that the configuration file for Caddy is located at /etc/caddy/Caddyfile):
nextcloud.example.com {
root * /var/www/nextcloud
encode gzip
file_server
header Strict-Transport-Security "max-age=15552000;"
php_fastcgi unix//run/php/php8.2-fpm.sock
redir /.well-known/carddav /remote.php/dav 301
redir /.well-known/caldav /remote.php/dav 301

# files that need to be hidden from the outside world
@forbidden {
        path /.htaccess
        path /data/*
        path /config/*
        path /db_structure
        path /.xml
        path /README
        path /3rdparty/*
        path /lib/*
        path /templates/*
        path /occ
        path /console.php
}

respond @forbidden 404
}
As per usual, we need to give ownership of this directory to caddy:
sudo chown -R caddy:caddy /var/www/nextcloud
Additionally, Nextcloud recommends a PHP memory limit of at least 512MB. We can change this by editing the php.ini file:
sudo nano /etc/php/8.2/fpm/php.ini
We change the following line:
memory_limit = 128M
to whatever you want, for example:
memory_limit = 2048M
Moreover, the FPM pool configuration (pool.d/www.conf) has a setting for the PATH environment variable, which is not set by default. We can set it by editing the file:
sudo nano /etc/php/8.2/fpm/pool.d/www.conf
We uncomment the following line by removing the leading ";" and set it to:
env[PATH] = /usr/local/bin:/usr/bin:/bin
We restart the PHP-FPM service and reload the Caddy configuration (note that we do not give a Caddyfile path, so we have to be in the /etc/caddy directory):
sudo systemctl reload php8.2-fpm && sudo caddy reload
Now our NextCloud should be working at https://nextcloud.example.com.
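As a quick sanity check, Nextcloud's occ tool can be run as the new web server user (assuming the standard occ location inside the Nextcloud directory); it should report installed: true without permission errors:
sudo -u caddy php /var/www/nextcloud/occ status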

Configuring Syncthing

We will revoke our existing certificate for Syncthing and let Caddy handle the certificate:

sudo certbot revoke --cert-name syncthing.example.com
If you have followed my post to set up Syncthing, we used a custom service file to run Syncthing at startup (under /etc/systemd/system/syncthing@.service). Syncthing ships with its own pre-configured service file (under /lib/systemd/system/), and we will switch to that, too (note that the user name after the @ in the commands below has to be your own user name):
sudo systemctl disable syncthing@user.service
sudo rm /etc/systemd/system/syncthing@.service
cd /lib/systemd/system/
sudo systemctl enable syncthing@user.service
sudo systemctl start syncthing@user.service
Now we can check whether Syncthing is running; the result of the following command should include Active: active (running):
systemctl status syncthing@user.service
Now, instead of redirecting to a port as we have done in Apache, we are going to use a reverse proxy so that Syncthing can be accessed at https://syncthing.example.com and this request will be forwarded to the Syncthing port in localhost. In this way, we do not need to open any additional ports in the firewall. Moreover, we can use our HTTPS certificate for the subdomain, and the connection between Syncthing and Caddy can be unencrypted, since this connection takes place within the localhost. We will use the following configuration for Syncthing (remember that the configuration file for Caddy is located at /etc/caddy/Caddyfile):
syncthing.example.com {
    reverse_proxy localhost:8384 {
        header_up Host {upstream_hostport}
    }
}
Since our previous Syncthing setup served the dashboard over https://ip:8384, we can now make it listen on localhost:8384 and disable TLS, because Caddy now serves HTTPS and its connection to Syncthing stays within localhost. To do this, we edit the Syncthing configuration file, located at ~/.config/syncthing/config.xml, and change the following lines:
<gui enabled="true" tls="false" debugging="false">
    <address>127.0.0.1:8384</address>
    <apikey>...</apikey>
    ...
</gui>
We restart Syncthing and reload the Caddy configuration (note that we do not give a Caddyfile path, so we have to be in the /etc/caddy directory):
sudo systemctl restart syncthing@user.service
sudo caddy reload
Since we are not using port 8384 anymore, we can close this port in the firewall:
sudo ufw delete allow 8384
Now our Syncthing should be working at https://syncthing.example.com.
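To confirm that the proxy chain works and that the old port is no longer reachable from outside, you can run something like the following (the second command is meant to be run from another machine, with your server's public IP substituted in):
curl -I https://syncthing.example.com
curl -I --connect-timeout 5 http://your-server-ip:8384
The first request should get an HTTP response from Syncthing through Caddy, while the second should time out.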

Configuring Grafana

We will revoke our existing certificate for Grafana and let Caddy handle the certificate:

sudo certbot revoke --cert-name grafana.example.com
Similar to Syncthing, we will create a reverse proxy so that Grafana can be accessed at https://dashboard.example.com without exposing port 3000 to the internet. Since Caddy terminates HTTPS, we also do not need to enable HTTPS in Grafana itself; its connection to Caddy stays within localhost. Note that since we are changing our subdomain from admin to dashboard, we need to add an A record at our DNS provider for this subdomain, pointing to our server's IP address.
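Before reloading Caddy, it is worth confirming that the new A record has propagated, since Caddy will immediately try to obtain a certificate for the subdomain:
dig +short dashboard.example.com
This should print your server's public IP address.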

To set up the proxy, we first need to update our Grafana configuration in the /etc/grafana/grafana.ini file and change the following lines:


#################################### Server ####################################
[server]
http_addr =
http_port = 3000
domain = dashboard.example.com
root_url =
cert_key =
cert_file =
enforce_domain = False
protocol = http
The Caddy configuration for Grafana is as follows (remember that the configuration file for Caddy is located at /etc/caddy/Caddyfile):
dashboard.example.com {
    reverse_proxy localhost:3000 {
    }
}
We restart Grafana and reload the Caddy configuration (note that we do not give a Caddyfile path, so we have to be in the /etc/caddy directory):
sudo systemctl restart grafana-server.service
sudo caddy reload
Since we are not using port 3000 anymore, we can close this port in the firewall:
sudo ufw delete allow 3000
Now our Grafana should be working at https://dashboard.example.com.
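As a final check, Grafana exposes a small health endpoint that should now be reachable through the proxy:
curl https://dashboard.example.com/api/health
A JSON response containing "database": "ok" means Grafana is up and the reverse proxy is working.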