Mustafa Can Yücel

Authelia and Caddy in Containers

To Containerize or not to Containerize, That Is the Question

Running Caddy, whether in a Docker container or on bare-metal, involves trade-offs. Deploying Caddy within a Docker container offers portability and encapsulation benefits. Docker's isolation enables consistent deployment across various systems, simplifies scaling, and facilitates updates. Resource optimization in Docker allows for efficient utilization and better scalability, particularly in cloud environments. However, Docker adds complexity and slight performance overhead due to abstraction layers and container management requirements.

On the other hand, running Caddy directly on bare-metal systems maximizes performance and resource utilization. It eliminates containerization overhead, simplifying deployment and management. Administrators interact directly with the operating system, streamlining customization and troubleshooting. However, bare-metal deployments lack the flexibility and portability of containerized environments. They may require manual intervention for scaling and configuration management, posing challenges in maintaining consistency across multiple servers.

In short, Docker offers flexibility, portability, and resource optimization at the cost of some complexity and a slight performance overhead, while bare metal offers maximum performance and simplicity at the cost of flexibility and scalability. The decision should align with the specific needs and constraints of your infrastructure; in this post, we take the containerized route and run both Caddy and Authelia in Docker.

Single Sign-On and Self-Hosted Services

Many self-hosted services have their own login systems. However, some of them don’t (e.g. Silverbullet, image hosting systems, etc.). Caddy provides a very primitive solution for this: the basicauth directive. However, it does not have a proper login page; it uses the browser’s built-in dialog. Moreover, it has no additional features such as cookie-based sessions or tokens. It also has to be configured on a per-handle basis and does not support SSO: you have to log in to each of your services individually, which becomes cumbersome when you have many services.
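For reference, a minimal basicauth block looks roughly like the following sketch (the upstream app:8080 is hypothetical, and the hash placeholder is whatever caddy hash-password produces):
example.com {
    basicauth {
        # username followed by the hash produced by `caddy hash-password`
        myuser <password hash>
    }
    reverse_proxy app:8080
}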

There are a lot of SSO or similar solutions on the web: Authentik, Authelia, Caddy modules, and so on. One of the easier-to-use alternatives is Authelia; it ships as a Docker stack and it integrates with the `forward_auth` feature of Caddy. However, even though it is easier to use than its alternatives, the setup can still be a little...problematic. For this reason, we will go through a minimally complex setup of Authelia with Caddy on a Debian server.

In this post, we will create a Docker stack that runs Authelia and Caddy together. However, if you want to run only the Caddy container, you can skip the Authelia parts; the Caddy container is just a single service in the stack, and removing the Authelia service will not affect it. You can still use the Caddy container as a reverse proxy on its own. In that case, implement only the Caddy parts; every header indicates in parentheses which applications require that step.

This post assumes that you already have a Debian server with Docker and Docker Compose installed. If you don’t have Docker installed, you can follow the instructions on the official Docker documentation.

Much of the code and many of the practices below are based on this post. Kudos to Whitestrake for creating a base for people to build on.

Preparations

Authelia and Caddy require several files to be in place before they will work. Some of these are secret files that contain various passwords and keys, while others are configuration files that define the behavior of the services. Below, we will create the necessary files and directories.

Project Directory (Authelia, Caddy)

Let's create a project directory that will hold all the necessary files and folders. This directory will be used to store the configuration files, secrets, and Docker Compose files. For this example, we will assume that the project directory is located at ~/containers/authelia.

Secret files (Authelia)

The following secret files are required (official documentation):

  • JWT secret
  • Session secret
  • Storage encryption key
  • Storage password
  • Redis password
We will put these files under config/secrets. They have to contain long random strings, so we can generate them from /dev/urandom:
mkdir -p config/secrets
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/JWT_SECRET
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/SESSION_SECRET
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/STORAGE_PASSWORD
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/STORAGE_ENCRYPTION_KEY
tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/REDIS_PASSWORD
Make a note of the values in REDIS_PASSWORD and STORAGE_PASSWORD, as we will need them again in the Docker Compose file. You also need to create a file named SMTP_PASSWORD that contains the password for an SMTP mail server; it is used to send emails for password resets and other notifications. If you want to use Gmail, you can create an application password from your account settings and paste it into a file in the same directory as the others (config/secrets).
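For example, assuming you already have such an application password at hand, the file can be created like the other secrets (the value below is a placeholder):
printf '%s' '<your SMTP application password>' > config/secrets/SMTP_PASSWORD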

Configuration files (Authelia)

The default configuration file is...daunting, to say the least. It has roughly 1500 lines of configuration options, but you don't need to configure all of them; you can start with a minimal configuration and add more as you need. Below is a minimal configuration file to start with:

# Miscellaneous https://www.authelia.com/configuration/miscellaneous/introduction/
theme: auto
default_redirection_url: https://auth.example.com/ # Change me!

# First Factor https://www.authelia.com/configuration/first-factor/file/
authentication_backend:
    file:
        path: /config/users_database.yml

# Second Factor https://www.authelia.com/configuration/second-factor/introduction/
totp:
    issuer: example.com # Change me!

# Security https://www.authelia.com/configuration/security/access-control/
access_control:
    default_policy: two_factor

# Session https://www.authelia.com/configuration/session/introduction/
# Set also AUTHELIA_SESSION_SECRET_FILE
session:
    domain: example.com # Change me!

    # https://www.authelia.com/configuration/session/redis/
    # Set also AUTHELIA_SESSION_REDIS_PASSWORD_FILE if appropriate
    redis:
        host: redis
        port: 6379

# Storage https://www.authelia.com/configuration/storage/postgres/
# Set also AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE
# Set also AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE
storage:
    postgres:
        host: database
        database: authelia
        username: authelia

# SMTP Notifier https://www.authelia.com/configuration/notifications/smtp/
# Set also AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE
notifier:
    smtp:
        host: smtp.gmail.com                       # Change me!
        port: 587                                  # Change me!
        username: example@gmail.com                # Change me!
        sender: "Authelia <authelia@example.com>"  # Change me!

In the above minimal configuration, you have to change the appropriate fields. Additionally, these are the things you need to know:

  • default_redirection_url: Authelia remembers the original URL of your request and redirects you back to it after a successful authentication. The default redirection URL is where users are sent when Authelia cannot detect the target URL the user was heading to.
  • First Factor: The example above uses a file that contains the users and their passwords. For the contents of this file, see below.
  • Second Factor: This section contains the 2FA configuration. The sample above contains the default TOTP that comes with Authelia. You can use any code generator such as Aegis, Google Authenticator, Authy, etc. to generate the 2FA code. You set up your application with a QR code when you first log in to Authelia.
  • Session: This section contains the session configuration. The domain is the domain of your Authelia instance. The redis section contains the connection information for the Redis server. The host is the name of the Redis service in the Docker stack.
  • Storage: This section contains the storage configuration. The postgres section contains the connection information for the PostgreSQL database. The host is the name of the PostgreSQL service in the Docker stack.
  • SMTP Notifier: This section contains the SMTP configuration. The sample above contains the values for a GMail account.

User File (Authelia)

A simple YAML file can be used to contain the users and their (hashed) passwords. The following is an example of a user file that contains a single user:


# User file database https://www.authelia.com/reference/guides/passwords/#yaml-format
# Generate passwords https://www.authelia.com/reference/guides/passwords/#passwords
# docker run --rm -it authelia/authelia:latest authelia crypto hash generate argon2
users:
    username: # change username
        password: [hashed password]  # hash as per instructions above
        displayname: "My User" # change
        email: changeme@example.com # change
As can be seen in the file above, the password should be hashed with Argon2. You can use the following command to hash a password:
docker run --rm -it authelia/authelia:latest authelia crypto hash generate argon2
This command spins up a temporary Authelia container, hashes the password you provide, and then removes the container. You can then copy the resulting hash into the user file. Do not forget to change the username, displayname, and email fields as well.
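If you prefer a non-interactive variant (for example, in a script), the same subcommand should also accept the password as a flag; a sketch, with a placeholder password:
docker run --rm authelia/authelia:latest authelia crypto hash generate argon2 --password 'MySuperSecretPassword'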

File Review (Authelia)

In the end, the project directory should look like this:

$ sudo tree ~/containers/authelia/config
/home/user/containers/authelia/config
├── configuration.yml
├── secrets
│   ├── JWT_SECRET
│   ├── REDIS_PASSWORD
│   ├── SESSION_SECRET
│   ├── SMTP_PASSWORD
│   ├── STORAGE_ENCRYPTION_KEY
│   └── STORAGE_PASSWORD
└── users_database.yml

1 directory, 8 files
For additional security, this directory structure can be locked down with the following commands:
sudo chown -R root:root ~/containers/authelia/config
sudo chmod -R 600 ~/containers/authelia/config

Caddy Configuration (Authelia, Caddy)

In order to be able to diagnose problems if something goes south, we will start with a simple Caddy configuration that contains a single endpoint. We will keep all the Caddy-related files under the caddy directory:

mkdir -p ~/containers/authelia/caddy
We will create a simple Caddyfile for now that contains only the authentication endpoint:
nano ~/containers/authelia/caddy/Caddyfile
The content of the file should be:
auth.example.com {
    reverse_proxy app:9091
}
This configuration reverse proxies requests to the app service on port 9091; we will create this service in the Docker Compose file. Note that the domain name here is the same one we used in the Authelia configuration file.
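Optionally, you can sanity-check the Caddyfile syntax with a throwaway container before bringing the stack up (a sketch; adjust the path if your project directory differs):
docker run --rm -v ~/containers/authelia/caddy/Caddyfile:/etc/caddy/Caddyfile caddy:latest caddy validate --config /etc/caddy/Caddyfile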

Docker Compose file (Authelia, Caddy)

Now that we have all the necessary files and directories, we can create the Docker Compose file that will run the Authelia and Caddy services. The sample files in the Authelia repo are a good reference; however, they are slightly more complex than a minimum working stack needs to be. We will create a file named docker-compose.yml under the project directory:

nano ~/containers/authelia/docker-compose.yml
The content of the file should be:
name: "authelia"
services:

  proxy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/user/containers/authelia/caddy/data:/data # Update to Caddy data directory
      - /home/user/containers/authelia/caddy/Caddyfile:/etc/caddy/Caddyfile # Update to Caddyfile location
      - /var/www:/var/www # Update to your web root
    extra_hosts:
      - "host.docker.internal:host-gateway" # See Caddy section below for more info

  app:
    container_name: authelia
    image: authelia/authelia:latest
    restart: unless-stopped
    depends_on:
      - database
      - redis
    volumes:
      - /home/user/containers/authelia/config:/config # Update to Authelia config directory
    environment:
      AUTHELIA_JWT_SECRET_FILE: /config/secrets/JWT_SECRET
      AUTHELIA_SESSION_SECRET_FILE: /config/secrets/SESSION_SECRET
      AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: /config/secrets/SMTP_PASSWORD
      AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /config/secrets/STORAGE_ENCRYPTION_KEY
      AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE: /config/secrets/STORAGE_PASSWORD
      AUTHELIA_SESSION_REDIS_PASSWORD_FILE: /config/secrets/REDIS_PASSWORD

  database:
    container_name: auth_database
    image: postgres:15
    restart: unless-stopped
    volumes:
      - /home/user/containers/authelia/postgres:/var/lib/postgresql/data # Update to PostgreSQL data directory
    environment:
      POSTGRES_USER: "authelia"
      POSTGRES_PASSWORD: "<postgres password here>" # Change me!

  redis:
    image: redis:7
    container_name: auth_redis
    restart: unless-stopped
    command: "redis-server --save 60 1 --loglevel warning --requirepass <redis password here>" # Change me!
    volumes:
      - /home/user/containers/authelia/redis:/data # Update to Redis data directory
    working_dir: /var/lib/redis
This is a minimal Docker Compose file with four services: Caddy (proxy), Authelia (app), PostgreSQL (database), and Redis (redis). The Authelia service depends on the database and Redis services. The volumes are mounted from the appropriate directories in the project directory, the environment variables point to the secret files we created earlier, and the database and Redis services have their own volumes for data persistence. Note that the PostgreSQL password must match the contents of the STORAGE_PASSWORD secret, and the Redis requirepass value must match REDIS_PASSWORD; this is why we noted them earlier.

Testing Individual Services & First Run (Authelia, Caddy)

Instead of running the complete stack at once and then searching for a needle in a haystack when something breaks, it is better to test the services individually. This way, you can catch problems in each service and fix them before running the complete stack. Below, we start the Authelia, database, and Redis services first:

docker compose up app database redis
Since we do not start these services as daemons, all of their output and logs appear in the terminal, so any errors will be visible right away. If everything is fine, you can stop the services with Ctrl+C. Now we check the Caddy service:
docker compose up proxy
If no errors pop up, you can stop the service with Ctrl+C. Now you can run the complete stack:
docker compose up -d
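Once the stack is running in the background, a couple of typical checks to confirm everything is healthy:
docker compose ps          # all four services should be up
docker compose logs -f app # follow the Authelia logs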

First Login (Authelia)

After the stack is up and running, you can navigate to the domain that you set in the Caddyfile (auth.example.com). Note that you should already have created a DNS record at your DNS provider so that this name resolves to your server's IP address (an A/AAAA record, or a CNAME to a hostname that does). You will be greeted by the Authelia login page, where you can log in with the credentials that you set in the user file. After logging in, you will be redirected to the 2FA setup.
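A quick way to confirm the DNS record resolves as expected, assuming dig is available on your machine:
dig +short auth.example.com   # should print your server's public IP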

Intermission: Caddy in a Container (Caddy)

We have discussed how to use Caddy on bare metal in a previous post. When running Caddy in a container, there are some differences. The most important one is that the Caddy container does not share the host's network: it cannot directly reach services listening on the host's interfaces and ports. This can be a problem if you want to use Caddy as a reverse proxy for services running on the host, which is one of the most common use cases.

Reverse Proxy to Services in Other Containers

When you want to reverse proxy to another service which also runs as a container, the target container's name can be used as the target address, provided that the Caddy container and the target container are on the same Docker network.

If you already have running containers, you can connect them to the same network without stopping them via the following command:

docker network connect <common network name> <target container name>
If your target container is not running and it is managed with Docker Compose, you can alter its compose file as below so that it joins the same network as the Caddy container:
target_container_name:
    container_name: target_container_name
    image: foo:bar
    ...
    networks:
      - common_network

networks:
  common_network:
    external: true
If you are using the good old docker run command, you can use the following to run the container on the same network as the Caddy container:
docker run --network <common network name> --name foo foo:bar

This allows us to use the name of the container as the host in the Caddyfile. A very important point: the internal port of the target container has to be used when defining the reverse proxy. If you do not know the internal port of a container, you can inspect its Dockerfile or image documentation. When Caddy runs directly on the host, a container's ports are published to the host (i.e. "12345:80" exposes port 80 of the container as port 12345 on the host) and we use the host port in the Caddyfile ("12345"). However, when we point to a container by its name, we connect to the container's internal port ("80"):

sub.example.com {
  reverse_proxy target_container_name:80
}
At the risk of reiterating the obvious: for this to work, both the Caddy container and the target container have to be on the same Docker network. Luckily, a container can be attached to many networks, so just add it to the common network using the first command above.
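As a sketch, creating a shared network and checking which containers are attached to it could look like this (the network name is only an example; the compose stack above also creates its own default network, typically named authelia_default):
docker network create common_network
docker network connect common_network caddy
docker network inspect common_network --format '{{range .Containers}}{{.Name}} {{end}}'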

Reverse Proxy to Services on Host

For a Caddy server running in a container, localhost in the Caddyfile refers to the container itself, not the host. This means that if you want to reverse proxy to a service running on the host, you cannot use localhost as the target address. Instead, we need to tap into a "bridge" network that connects the host to the containers, and we need a special DNS name to refer to the host.

By default, Docker creates a network named bridge, a special network that connects the host to the containers; this solves the first part of our problem.

By default, the bridge network has the 172.17.0.0/16 subnet and 172.17.0.1 as its gateway; these are our tickets for getting from the containers to the host. To verify these addresses, you can use

docker network inspect bridge
and check the IPAM.Config section of the output. Inside a container, this gateway can be referenced with a special hostname, host.docker.internal; however, it has to be declared as an extra host in the compose file so that Docker adds it to the Caddy container's hosts file. Below is the relevant part of the compose file for the Caddy service:
server:
  container_name: caddy
  image: caddy:latest
  restart: unless-stopped
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./data:/data # Update as necessary
    - ./Caddyfile:/etc/caddy/Caddyfile # Update as necessary
    - /var/www:/var/www # Update as necessary
  extra_hosts:
    - "host.docker.internal:host-gateway" # New line
  networks:
    - bridge

networks:
  bridge:
    external: true
When we (re)create the Caddy service, this adds an extra entry to the container's hosts file, which allows us to make connections to the host machine. As an example, the following Caddyfile block reverse proxies to a bare-metal Cockpit installation (which listens on port 9090 on the host by default):
cockpit.example.com {
    reverse_proxy host.docker.internal:9090
}
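To check whether the container can actually reach the host, you can try a request from inside it; a sketch, assuming the Caddy image ships BusyBox wget and that something is listening on port 9090 on the host:
docker exec caddy wget -O- http://host.docker.internal:9090
Any HTTP response (even an error page) means the connection works; a long hang followed by a timeout usually points to the firewall rules discussed next.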

It looks nice, right? However, our work is not over: if you have a firewall (which you should), it will block connections from the containers to the host, and you will see 502 errors in the browser (and timeout errors in the Caddy logs). We need to allow connections from the bridge network's subnet to the host. Remember the subnet range we saw above? We will use it here. Below is an example for ufw:

sudo ufw allow from 172.17.0.0/16
This allows connections only from Docker containers (the 172.17.0.0/16 bridge subnet) to any port on the host. If you want to allow connections to a specific port only (let's say 9090), you can use the following command instead:
sudo ufw allow from 172.17.0.0/16 to any port 9090
Once we restart the Caddy container, it will be able to connect to the host and reverse proxy to the service running there.

If you still see 502 errors, you can check the logs of the Caddy container with the following command:

docker logs caddy
This will show the logs of the Caddy container. If you see any errors, you can diagnose them and fix them accordingly. For example, if you see the following error:
2024/04/16 12:34:56 [ERROR] dial tcp 172.17.0.1:9090: i/o timeout
it means the Caddy container cannot connect to the host. A few possible causes (with quick checks sketched after the list) are:
  • The firewall blocks the connection
  • The service is not running on the host
  • The service is not listening on the correct port
  • The Caddy container is not on the bridge network
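A few quick checks on the host, assuming ufw and the port 9090 example above:
sudo ufw status verbose                 # is the allow rule for 172.17.0.0/16 present?
ss -tlnp | grep 9090                    # is anything listening on the expected port?
docker inspect caddy --format '{{json .NetworkSettings.Networks}}'  # which networks is the Caddy container attached to?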

Good Old File Servers

If you want to serve static files that are located on the host, the easiest and most convenient way is to mount the host's web root into the Caddy container. This way, the Caddy container can access the files on the host and serve them to clients. If these files are truly static, you can mount them read-only so that the container cannot change them. Below is an example of how to mount the host's web root into the Caddy container (note that this one is not read-only):

server:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
        - "80:80"
        - "443:443"
    volumes:
        - /path/to/web/root:/var/www # Update as necessary
        - ./Caddyfile:/etc/caddy/Caddyfile # Update as necessary
    extra_hosts:
        - "host.docker.internal:host-gateway" # New line
    networks:
        - bridge

networks:
    bridge:
        external: true
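If the files are truly static, the mount in the snippet above can be made read-only by appending :ro to the volume definition, for example:
        - /path/to/web/root:/var/www:ro # read-only variant of the mount above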
Now we can use the classic file server block:
example.com {
    root * /var/www/example.com
    file_server
}

Let's Continue: Securing Endpoints (Authelia)

Now that the SSO service (Authelia) and the containerized reverse proxy (Caddy) are up and running, we can secure the endpoints.

Caddy Authentication Block (Authelia)

The authentication (secure) block in the Caddyfile differs depending on whether the Authelia service runs on the same host as the Caddy container (as in our stack) or on another host. Assuming Authelia runs in the same stack as Caddy, we add the following block to the top of our Caddyfile:

(secure) {
    forward_auth {args.0} app:9091 {
      uri /api/verify?rd=https://auth.example.com
      copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
}
This block authenticates requests to the services that import it in the Caddyfile. The forward_auth directive forwards each request to the Authelia service for verification. The uri parameter is the verification endpoint on the Authelia service, and its rd query parameter tells Authelia where to redirect unauthenticated users (the login portal). The copy_headers parameter copies the listed headers from Authelia's response onto the request sent to the target service, which is useful if the target service expects user information in the headers.

If the authentication service runs on a different host than the services, we need to point to it with a full URL:

(secure) {
    forward_auth {args.0} https://auth.example.com {
      uri /api/verify?rd=https://auth.example.com
      copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
      header_up Host {upstream_hostport}
    }
}

Securing Entire Subdomain

If you want to completely secure a subdomain with Authelia, you can import the secure block with a wildcard matcher:

foo.example.net {
    import secure *
    reverse_proxy upstream:8080
}
In the above example, upstream is another container connected to the same network (authelia_default) as the Caddy container, and 8080 is the internal port that this target container listens on. For more details on how to reverse proxy to services in other containers or running on the host, see the previous section.

Securing a Subpath

If you want to secure only a subpath of a domain, you can import the secure block with that subpath as its matcher:

bar.example.net {
    import secure /api
    reverse_proxy backend:8080
}
The above example secures only the /api path of the bar.example.net domain. Here, backend is another container connected to the same network as the Caddy container, and 8080 is the internal port that this target container listens on.

Securing Based on Matchers

One of the most common use cases is to secure endpoints based on matchers. For example, you may want to secure the endpoints that contain the word admin in the path, or you may be hosting an image hosting service and want to secure the endpoints that allow uploading images while keeping the viewing endpoints open. You can use the secure block with matchers.

Let's demonstrate the second use case above. Assume we have the following patterns:

  • The root URL (imgur.example.com) is for uploading images
  • The view URLs have the following pattern: imgur.example.com/filename.ext
We can create a matcher so that any URL that points to a file is open while all others are secured:
imgur.example.com {
    @isRoot {
        not path_regexp \..+$
    }
    import secure @isRoot
    reverse_proxy pictshare:80
}
Here, pictshare refers to the image hosting service that runs on the same network as the Caddy container. The path_regexp \..+$ matches paths that contain a dot followed by at least one character (i.e. file URLs), and the not negates it, so @isRoot matches everything else, such as the root upload page. Requests matching @isRoot go through Authelia for authentication first; requests for files are proxied straight to the pictshare service.
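A quick way to verify the matcher behaves as intended (a sketch; the file name is hypothetical and should be replaced with a real uploaded file):
curl -I https://imgur.example.com/example.jpg   # handled by pictshare directly, no auth redirect
curl -I https://imgur.example.com/              # should redirect to https://auth.example.com/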

An alternative option is using handle blocks. For example, the sample below allows open access to URLs that have at least two subpaths, and secures the rest:

dav.example.com {
    @hasMultipleSubpaths path_regexp ^/[^/]+/[^/]+(/.*)?$
    handle @hasMultipleSubpaths {
        reverse_proxy radicale:5232
    }
    handle {
        import secure *
        reverse_proxy radicale:5232
    }
}
Here, the radicale service is the CalDAV & CardDAV server that runs on the same network as the Caddy container. The path_regexp ^/[^/]+/[^/]+(/.*)?$ checks whether the path contains at least two segments. If it does, the request is proxied directly to the radicale service; if it does not, it goes through Authelia for authentication first.

Conclusions

In conclusion, the choice between Docker containerization and bare-metal deployment for Caddy hinges on factors such as performance needs, scalability requirements, and operational preferences. Docker offers flexibility, portability, and resource optimization but introduces complexity and slight performance overhead. Conversely, bare-metal deployment provides maximum performance and simplicity but sacrifices the flexibility and scalability advantages of containerized environments. Ultimately, the decision should align with the specific needs and constraints of your infrastructure and application.

Securing endpoints that do not have their own login systems can be a cumbersome task. Caddy provides a basic authentication mechanism, but it lacks advanced features like SSO and token-based authentication. Authelia is a powerful solution that integrates with Caddy to provide robust authentication and authorization capabilities. By following the steps outlined in this post, you can set up a secure and efficient authentication system for your self-hosted services.