Mustafa Can Yücel
blog-post-19

Containerization Rediscovered

Containerization Spaghetti

Over the previous posts, we have added a multitude of services as Docker containers. These services span various topics, ranging from simple to-do apps to complex authorization and authentication flows. Each stack creates its own volumes, networks, and other resources, so in the end it can become too convoluted to track who has access to what and how the data is stored. In this post, we will rediscover the containerization process and see how to back up the data in a more organized way.

First we will go through the necessary preparation steps, then we will update the docker compose files to get a better overall structure. We will also discuss how these containers will communicate with each other. Finally, we will have a few words about backing up the data and restoring it in case of a disaster, but full implementations of the different alternatives to our previous post will be discussed in a future post.

This will be a long, long post, so here is a breakdown of what we will discuss:


Preparations

In this post, we assume that the installations will be done on a freshly installed Debian Bookworm system. Therefore, we will start with the necessary installations.

Reading Log Volumes

In this blog, we will spin up a lot of containers, and inevitably we will encounter failed ones. To debug them, we can use log volumes. Reading these log volumes, however, is not as straightforward as it seems. Therefore, we will see how to read the contents of a log volume for tracing errors or looking for information.

  1. Find the MountPoint of the volume:
    $ sudo docker volume inspect volumeName
    [
        {
            "CreatedAt": "2024-05-24T14:20:37+03:00",
            "Driver": "local",
            "Labels": {
                "com.docker.compose.project": "app",
                "com.docker.compose.version": "2.27.0",
                "com.docker.compose.volume": "app_logs"
            },
            -> "Mountpoint": "/var/lib/docker/volumes/linkace_linkace_logs/_data", <-
            "Name": "linkace_linkace_logs",
            "Options": null,
            "Scope": "local"
        }
    ]
  2. List the log files in the mount directory:
    $ sudo ls /var/lib/docker/volumes/linkace_linkace_logs/_data
    
    laravel-2024-05-24.log
    
  3. Open or view the appropriate log file:
    $ sudo cat /var/lib/docker/volumes/linkace_linkace_logs/_data/laravel-2024-05-24.log
    
    [2024-05-24 11:37:47] production.ERROR: file_put_contents(/app/.env): Failed to open stream: Permission denied {"exception":"[object] (ErrorException(code: 0): file_put_contents(/app/.env): Failed to open stream: Permission denied at /app/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:204)
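
As a shortcut, steps 1 and 2 can be combined by asking docker volume inspect for the mount point directly with a Go template:

$ sudo docker volume inspect --format '{{ .Mountpoint }}' linkace_linkace_logs
/var/lib/docker/volumes/linkace_linkace_logs/_data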

Creating Script Files

We will create a few script files that will help us along the way. To create a script file, we create a file with the .sh extension and make it executable using the following command:

$ chmod +x script.sh
Now the script file can be executed using the following command:
$ ./script.sh
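
For example, a minimal script that lists the running containers (a trivial stand-in for the more useful scripts later in this post) could look like this:

#!/bin/bash
# List currently running containers with their names and status
docker ps --format 'table {{.Names}}\t{{.Status}}'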

Installing Docker

The apt repository includes the Docker package; however, it is usually out of date, and it is recommended to install the package from the official Docker repository instead. To do this, you can follow the latest steps from the [official Docker documentation](https://docs.docker.com/engine/install/debian/). Simply put, you can run the following commands to install Docker:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# verify the installation
sudo docker run hello-world
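
As an optional check, you can also confirm that the Compose plugin is available and that the Docker service is enabled on boot (on Debian it usually is by default):

docker compose version
sudo systemctl enable --now docker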

Creating Networks

When a stack is created using docker compose, Docker will create a default network for its services to communicate with each other. These networks all use bridge as the driver, and they are not isolated. For this reason, we will create two shared custom networks (remember that one container can connect to many networks at the same time):

  • internet_network: The services that require internet access will be connected to this network, therefore it will not be isolated.
  • caddy_network: This is an isolated network (no outside access) for reverse-proxied services. The containers that serve a UI or that will be accessed from outside (e.g. APIs) will be connected to this network. Note that this network will not have internet access.

Aside from these shared networks, docker stacks that require inter-container communication will have their own isolated networks that are defined in their own compose files. For example, a stack that includes a database and an API service will have a network that is shared between these two services.

Instead of creating these networks manually, we will use the following bash script. This will also allow us to use the same script in different environments and Ansible playbooks.

#!/bin/bash
# Create isolated network for Caddy reverse proxy container
docker network create --internal caddy_network
# Create network with internet connection
docker network create internet_network
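
If this script may run more than once (for example, from an Ansible playbook), a variant that creates the networks only when they are missing avoids "already exists" errors:

#!/bin/bash
# Create the networks only if they do not exist yet
docker network inspect caddy_network >/dev/null 2>&1 || docker network create --internal caddy_network
docker network inspect internet_network >/dev/null 2>&1 || docker network create internet_network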

Caddy

Caddy is a modern web server that can be used as a reverse proxy. It is easy to configure, and it can automatically obtain and renew SSL certificates from Let's Encrypt. This time, we will install Caddy as a separate container (rather than as part of a stack) and use it to reverse proxy the services that require outside access. We will also use Caddy to serve static files, our own websites, and other services that require a web server.

Docker Compose File

We will use the following docker compose file for the Caddy service:

services:
    caddy:
        image: caddy:latest
        container_name: caddy
        restart: unless-stopped
        ports:
        - "80:80"
        - "443:443"
        volumes:
        - ./config/Caddyfile:/etc/caddy/Caddyfile:ro
        - /var/www:/var/www:ro
        - ./certificates:/data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/
        - caddy_data:/data
        - caddy_config:/config
        extra_hosts:
        - "host.docker.internal:host-gateway"
        networks:
        - caddy_network
        - auth_network
        - internet_network

volumes:
    caddy_data:
    caddy_config:

networks:
    caddy_network:
        external: true
    auth_network:
        external: true
    internet_network:
        external: true
Let's discuss a few points about this compose file:
  • ports: We expose ports 80 and 443 to the host machine. This allows the Caddy server to serve HTTP and HTTPS requests.
  • volumes: We mount the Caddyfile, the static files, and the certificates into the container. The certificates are stored in a separate bind mount so that they can be backed up and restored easily; we will also need them for several other services (such as Technitium).
  • extra_hosts: Mapping host.docker.internal to host-gateway lets the container reach the host machine on Linux (Docker Desktop on Mac and Windows provides this host name automatically). This allows reverse proxying to bare-metal services such as Cockpit.
  • networks: We connect the Caddy container to the two shared networks created in the previous step, plus auth_network, an external network that the Authelia stack will share; it can be created the same way as caddy_network.

To start the Caddy service, we can run the following command in the same directory as the docker compose file:

docker compose up -d
To reload Caddy after making changes to the Caddyfile, we can run the following command:
docker compose exec -w /etc/caddy caddy caddy reload
To see Caddy's 1000 most recent logs, and follow them in real time, we can run the following command:
docker compose logs caddy -n=1000 -f
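
Before reloading, the Caddyfile can also be checked for errors without applying it, using the validate subcommand:

docker compose exec -w /etc/caddy caddy caddy validate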

For the Caddyfile, the following sample handle directive can be used for serving static files:

example.com {
    root * /var/www/example.com
    file_server
}
This will serve the files in the /var/www/example.com directory for the domain example.com. The file_server directive serves the files as they are, without any processing. Remember that the Caddyfile should be mounted into the container at /etc/caddy/Caddyfile and the static files at /var/www.

For the Caddyfile, the following sample handle directive can be used for reverse proxying to a service named 'service' that is connected to the caddy_network and has internal port 80:

service.example.com {
    reverse_proxy service:80 {
        header_up X-Real-IP {remote_host}
    }
}
This will reverse proxy requests for service.example.com to the container named service on the caddy_network, on internal port 80. The handle block also sets the X-Real-IP header to the IP address of the client; some services require this header to be set. If not required, this line can be removed.

Backup and Restore

The only item that needs to be backed up for Caddy is the Caddyfile. Since Caddy obtains its certificates from Let's Encrypt, it can simply reissue them if they are lost.

Restoring a Caddy container backup is as simple as copying the Caddyfile to the appropriate directory and reloading the Caddy service. The certificates will be reissued automatically.

Actual

Actual is a local-first personal finance tool. It is 100% free and open source, written in NodeJS, and it has a synchronization element so that all your changes can move between devices without any heavy lifting.

Docker Compose File

services:
    actual_server:
      image: docker.io/actualbudget/actual-server:latest-alpine
      container_name: actual-server
      volumes:
        - ./data:/data
      networks:
        - caddy_network
      restart: unless-stopped
  
networks:
  caddy_network:
    external: true
Actual by itself does not require internet access; it will be reached through Caddy, so it is only added to the caddy_network.

Backup and Restore

Actual holds all its data in the /data directory, so you can back up this directory. However, Actual is also an offline-first application, so you can instead use the device that last accessed Actual to re-sync the data. To do this, close the "budget file" in the web application and re-open it. If you have enabled a synchronization password, it will need to be reset; after setting up a new sync password, you can re-sync the data.
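
If you go the directory route, a dated archive of the bind-mounted folder is enough; a minimal sketch, assuming the compose file lives in the current directory:

tar czf actual-data-$(date +%F).tar.gz data/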

Caddy Handle Block

Since Actual has its own authentication, the following handle block can be used to reverse proxy directly to Actual:

actual.example.com {
    reverse_proxy actual-server:5006 {
      header_up X-Real-IP {remote_host}
    }
}

Technitium

Technitium is a recursive DNS server that can block ads, trackers, and other unwanted content. It also helps prevent DNS hijacking and gives you more secure DNS resolution across all your devices.

PFX Export Script

To use DNS-over-TLS (DoT) in Technitium, we need the TLS certificate of the domain we are using (e.g. dns.example.com) in PFX format (since Technitium is written in C# for .NET). The TLS certificates are issued automatically by Caddy and normally live inside the Caddy container in PEM format. But since we have mounted the certificates directory to the host, we can convert them to PFX format using the following script:

#!/bin/bash

# Variables
CERTIFICATE_FILE="/home/users/containers/caddy/certificates/dns.example.com/dns.example.com.crt"
PRIVATE_KEY_FILE="/home/users/containers/caddy/certificates/dns.example.com/dns.example.com.key"
OUTPUT_PFX_FILE="/home/users/containers/technitium/tls-cert/certificate.pfx"
EXPORT_PASSWORD="secureString"

# Check if the certificate file exists
if [[ ! -f "$CERTIFICATE_FILE" ]]; then
    echo "Error: Certificate file '$CERTIFICATE_FILE' not found."
    exit 1
fi

# Check if the private key file exists
if [[ ! -f "$PRIVATE_KEY_FILE" ]]; then
    echo "Error: Private key file '$PRIVATE_KEY_FILE' not found."
    exit 1
fi

# Create the .pfx file
openssl pkcs12 -export -out "$OUTPUT_PFX_FILE" -inkey "$PRIVATE_KEY_FILE" -in "$CERTIFICATE_FILE" -passout pass:"$EXPORT_PASSWORD"

# Check if the openssl command was successful
if [[ $? -eq 0 ]]; then
    echo "Successfully created '$OUTPUT_PFX_FILE'."
else
    echo "Error: Failed to create '$OUTPUT_PFX_FILE'."
    exit 1
fi
Note that the TLS certificates issued by Let's Encrypt are usually renewed every 3 months, so it is a good idea to create a cron job that runs this script at the same interval (or more frequently).
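
For example, the following root crontab entry (added with sudo crontab -e, since the certificate files are owned by root) runs the script at 04:00 on the first day of every month; the script path is an assumption, adjust it to wherever you saved the file:

0 4 1 * * /home/users/containers/scripts/export-pfx.sh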

Docker Compose File

services:
    dns-server:
      container_name: dns-server
      hostname: dns-server
      image: technitium/dns-server:latest
      ports:
        - "53:53/udp" #DNS service
        - "53:53/tcp" #DNS service
        - "853:853/tcp" #DNS-over-TLS service
      networks:
        - caddy_network
        - internet_network
      environment:
        - DNS_SERVER_DOMAIN=serverName #The primary domain name used by this DNS Server to identify itself.
      volumes:
        - config:/etc/dns
        - ./tls-cert:/var/cert:ro
      restart: unless-stopped
      sysctls:
        - net.ipv4.ip_local_port_range=1024 65000
  
volumes:
  config:

networks:
  caddy_network:
    external: true
  internet_network:
    external: true
Let's discuss a few points about this compose file:
  • ports: We will expose the ports 53 and 853 to the host machine. This will allow the Technitium server to serve DNS and DNS-over-TLS requests.
  • environment: The DNS_SERVER_DOMAIN environment variable is used to set the primary domain name used by this DNS Server to identify itself. This should be set to the domain name that is used in the TLS certificate.
  • volumes: We will mount the configuration directory and the TLS certificates to the container. The configuration directory will be used to store the configuration files of the Technitium server.
  • sysctls: This widens the container's local ephemeral port range, giving the server a large pool of source ports for its outgoing DNS queries.
  • networks: We need to connect to the caddy network because we will serve the UI and redirect the DNS-over-HTTPS requests through it. We also need to connect to the internet network for resolving the DNS addresses that are not in the cache.

    Backup and Restore

    The configuration of Technitium can be downloaded from the UI or copied from the configuration directory, so the configuration directory should be backed up. To restore the configuration, restore the configuration directory and restart the Technitium service. Don't forget to re-export the TLS certificates to PFX format and mount them to the container.
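
    Since the configuration lives in a named Docker volume, it can be archived with a throwaway container. A minimal sketch; note that the actual volume name carries the compose project prefix (assumed to be technitium_config here, check with docker volume ls):

    sudo docker run --rm -v technitium_config:/etc/dns:ro -v $(pwd):/backup alpine tar czf /backup/technitium-config.tar.gz -C /etc/dns .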

    Caddy Handle Block

    The handle block should distinguish between DoH and UI requests; the following handle block can be used for reverse proxying to Technitium:

    dns.example.com {
        handle /dns-query* {
          reverse_proxy dns-server:80 {
            header_up X-Real-IP {remote_host}
          }
        }
        handle {
          reverse_proxy dns-server:5380 {
            header_up X-Real-IP {remote_host}
          }
        }
    }

    Vaultwarden

    Vaultwarden is a password manager compatible with Bitwarden clients. It is a great tool for storing your passwords securely and sharing them with your family or team. It also lets you store secure notes and other sensitive information.

    Docker Compose File

    services:
        vaultwarden:
          image: vaultwarden/server:latest
          container_name: vaultwarden
          restart: unless-stopped
          volumes:
            - ./data/:/data/
          networks:
            - caddy_network
      
    networks:
      caddy_network:
        external: true
    Since Vaultwarden does not require internet access and will be reached through Caddy, it is only added to the caddy_network.

    Backup and Restore

    Save everything under the mapped data directory (or the project directory itself). If you have an encrypted rsync backup, you have to restore it with the same keys and use restore. If you have plain text backups, copy the backup directory to the server with scp:

    scp -P port -r "C:\Users\user\vaultwarden" user@server.example.com:/home/user/containers

    Caddy Handle Block

    Since Vaultwarden has its own authentication, it does not require a secure block. The internal port of Vaultwarden is 80, so the following handle block can be used for reverse proxying to Vaultwarden:

    vaultwarden.example.com {
        reverse_proxy vaultwarden:80 {
            header_up X-Real-IP {remote_host}
        }
    }

    Umami

    Umami is a simple, fast, privacy-focused alternative to Google Analytics. It is an easy-to-use, self-hosted web analytics solution; a great tool for tracking your website visitors and seeing how they interact with your website. It also shows you the most popular pages and referrers.

    Docker Compose File

    services:
      umami:
        image: ghcr.io/umami-software/umami:postgresql-latest
        container_name: umami
        environment:
          DATABASE_URL: postgresql://umami:umami@db:5432/umami
          DATABASE_TYPE: postgresql
          APP_SECRET: replace-me-with-a-random-string
        depends_on:
          db:
            condition: service_healthy
        restart: unless-stopped
        healthcheck:
          test: ["CMD-SHELL", "curl http://localhost:3000/api/heartbeat"]
          interval: 5s
          timeout: 5s
          retries: 5
        networks:
          - caddy_network
          - umami_network

      db:
        image: postgres:15-alpine
        container_name: umami-db
        environment:
          POSTGRES_DB: umami
          POSTGRES_USER: umami
          POSTGRES_PASSWORD: umami
        volumes:
          - umami-db-data:/var/lib/postgresql/data
        restart: unless-stopped
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"]
          interval: 5s
          timeout: 5s
          retries: 5
        networks:
          - umami_network

    volumes:
      umami-db-data:

    networks:
      caddy_network:
        external: true
      umami_network:
        internal: true
        driver: bridge
    As usual, let's discuss a few points about this compose file:
    • We are using the PostgreSQL version of the Umami image.
    • Umami works by websites making requests to the Umami server, so it does not require direct internet access; it will be reached over the caddy network. It also needs to reach its database, but the database itself does not require any further access, so we create an isolated network for it. The app is connected to both networks, while the database is only connected to the isolated network.
    • Umami has its own authentication, so it does not require any additional authentication layer.
    Important: If you are going to restore a backup, do not start the umami service before restoring the database.

    Backup and Restore

    The database is the most important part of Umami, so it should be backed up regularly. It can be backed up using a PostgreSQL database dump; the details will be discussed in a future post.
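
    Until then, a minimal dump command is enough; it uses the container name and credentials from the compose file above:

    sudo docker exec umami-db pg_dump -U umami umami > data/umami_backup.sql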

    Restoring the database is a little more involved than copying files. Let's do a quick overview of the steps:

    1. Copy the backup file to the remote computer with scp (or any other means):
      scp -P port "C:\Users\...\umami_backup.sql" user@server.example.com:/home/user/containers/umami/data/umami_backup.sql
    2. Run only the database service of the compose file:
      docker compose up -d db
    3. Since the database is a container, copy the backup file to the database container:
      sudo docker cp data/umami_backup.sql umami-db:/tmp/database_dump.sql
    4. Create an interactive terminal session to the database container (this container has bash installed):
      sudo docker exec -it umami-db /bin/bash
    5. If the backup is an SQL script text file (which it is, if it was produced as a plain database dump), pg_restore will not work. Instead, we will use psql to restore the database. However, psql needs the role referenced in the dump to exist, so we create a postgres superuser role first:
      psql -U umami -d umami -c "CREATE ROLE postgres WITH SUPERUSER LOGIN;"
    6. Now we apply the dump file to the database:
      psql -U postgres -d umami < /tmp/database_dump.sql
    7. Now we can exit the container and start the whole stack. As a precaution, we can omit the -d flag to check whether any errors occur:
      docker compose up
      If you see errors like ERROR: relation "_prisma_migrations" already exists or a duplicate key, it means you have started the umami service before and it has already created the database (the dump file does not only contain the data but also the database table definitions). In this case, stop the services (sudo docker compose down), manually delete the volume umami-db-data (sudo docker volume rm umami-db-data), and restart from step 1.

    Caddy Handle Block

    As Umami has its own authentication, it does not require a secure block. The internal port of Umami is 3000, so the following handle block can be used for reverse proxying to Umami:

    umami.example.com {
        reverse_proxy umami:3000 {
        header_up X-Real-IP {remote_host}
        }
    }

    Authelia

    The services we have configured up to now have their own authentication mechanisms. However, some of the services that we will set up do not have this option. Moreover, you may prefer to have single sign-on (SSO) for all your services. Therefore, we will configure an SSO service named Authelia. Authelia is a self-hosted authentication server that brings two-factor authentication and single sign-on to all your applications.

    Configuration

    Authelia requires several configuration files to work. In the project root folder, we need the following directory structure:

    authelia/
        ├── config
        │   ├── configuration.yml
        │   ├── secrets
        │   │   ├── JWT_SECRET
        │   │   ├── REDIS_PASSWORD
        │   │   ├── SESSION_SECRET
        │   │   ├── SMTP_PASSWORD
        │   │   ├── STORAGE_ENCRYPTION_KEY
        │   │   └── STORAGE_PASSWORD
        │   └── users_database.yml
        └── docker-compose.yml
    You can find the definitions and details of these files in the official documentation or in my previous post.

    We can create the secret files (except SMTP_PASSWORD) with the following set of commands:

    tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/JWT_SECRET
    tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/SESSION_SECRET
    tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/STORAGE_PASSWORD
    tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/STORAGE_ENCRYPTION_KEY
    tr -cd '[:alnum:]' < /dev/urandom | fold -w 64 | head -n 1 | tr -d '\n' > config/secrets/REDIS_PASSWORD
    The SMTP_PASSWORD file is created manually and contains only the app password for the Gmail account. Take note of the contents of the REDIS_PASSWORD and STORAGE_PASSWORD files, as we are going to use them in the docker compose file.

    The Authelia configuration file template is huge, but we need only a very small part of it. The following is a sample configuration file that can be used for the Authelia service:

    theme: auto
    default_redirection_url: https://authelia.example.com/ # Change me!
    
    
    authentication_backend:
        file:
            path: /config/users_database.yml
    
    totp:
        issuer: example.com # Change me!
    
    access_control:
        default_policy: two_factor
    
    session:
        domain: example.com # Change me!
        name: authelia_session
        redis:
            host: redis
            port: 6379
    
    storage:
        postgres:
            host: database
            database: authelia
            username: authelia 
    
    notifier:
        smtp:
            host: smtp.gmail.com                             # Change me!
            port: 587                                        # Change me!
            username: example@gmail.com                      # Change me!
            sender: "Authelia <authelia@example.com>"  # Change me!

    Docker Compose File

    services:
        authelia:
          container_name: authelia
          image: authelia/authelia:latest
          restart: unless-stopped
          networks:
            - caddy_network
            - auth_backend
            - internet_network
          depends_on:
            - database
            - redis
          volumes:
            - ./config:/config
          environment:
            AUTHELIA_JWT_SECRET_FILE: /config/secrets/JWT_SECRET
            AUTHELIA_SESSION_SECRET_FILE: /config/secrets/SESSION_SECRET
            AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: /config/secrets/SMTP_PASSWORD
            AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /config/secrets/STORAGE_ENCRYPTION_KEY
            AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE: /config/secrets/STORAGE_PASSWORD
            AUTHELIA_SESSION_REDIS_PASSWORD_FILE: /config/secrets/REDIS_PASSWORD
      
        database:
          container_name: auth_database
          image: postgres:15
          restart: unless-stopped
          networks:
            - auth_backend
          volumes:
            - ./postgres:/var/lib/postgresql/data
          environment:
            POSTGRES_USER: "authelia"
            POSTGRES_PASSWORD: "" # Change me!
      
        redis:
          image: redis:7
          container_name: auth_redis
          restart: unless-stopped
          networks:
            - auth_backend
          command: "redis-server --save 60 1 --loglevel warning --requirepass " # Change me!
          volumes:
            - ./redis:/data # Update to Redis data directory
          working_dir: /var/lib/redis
      
    networks:
      caddy_network:
        external: true
      auth_network:
        external: true
      internet_network:
        external: true
      auth_backend:
        internal: true
        driver: bridge
    Let's discuss a few points about this compose file:
    • We need an isolated backend network for the components of the stack to communicate with each other, so we create a new network named auth_backend.
    • Authelia needs to reach the SMTP server, so it requires internet access. It also needs to be on the caddy network, and is therefore connected to all three networks.

    Backup and Restore

    In order to back up the whole Authelia service, you need to back up the configuration files and the database. The configuration files can be backed up by copying the config directory; the database can be backed up using the pg_dump command. The details will be discussed in a future post.
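
    As with Umami, a minimal dump can be taken through the database container; the names below come from the compose file above:

    sudo docker exec auth_database pg_dump -U authelia authelia > authelia_backup.sql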

    Restoring the Authelia service has two main steps:

    1. Restore the configuration files by copying the config directory.
    2. Restore the database with psql, following the same flow given in the Umami section above and changing the relevant names.

    Caddy Handle Block

    The authentication UI of Authelia is served on port 9091 of its container, so the following handle block can be used for reverse proxying to Authelia:

    authelia.example.com {
        reverse_proxy authelia:9091 {
            header_up X-Real-IP {remote_host}
          }
    }

    For the services that do not have their own authentication, we need a secure block in the Caddyfile that can be reused:

    (secure) {
        forward_auth {args.0} authelia:9091 {
      uri /api/verify?rd=https://authelia.example.com
          copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
        }
    }
    This block will authenticate the requests to the services that import it in the Caddyfile. The forward_auth directive forwards each request to the Authelia container for verification. The uri parameter is Authelia's verification endpoint; the rd query parameter is the login portal the user is redirected to when unauthenticated. The copy_headers parameter copies the user-information headers from Authelia to the target service, which is useful if the target service expects them.

    If the authentication service runs on a different host than the services, we need to point to it with a full URL:

    (secure) {
        forward_auth {args.0} https://auth.example.com {
            uri /api/verify?rd=https://auth.example.com
            copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
            header_up Host {upstream_hostport}
        }
    }

    Securing Entire Subdomains

    If you want to completely secure a subdomain with Authelia, you can call the secure block with a wildcard:

    foo.example.net {
        import secure *
        reverse_proxy upstream:8080
    }
    In the above example, upstream is another container connected to the same network as the Caddy container (here, caddy_network), and 8080 is the internal port that this target container listens on. For more details on how to reverse proxy to services in another container or running on the host, see the previous section.

    Securing a Subpath

    If you want to secure only a subpath of a domain, you can use the secure block with the subpath:

    bar.example.net {
        import secure /api
        reverse_proxy backend:8080
    }
    The above example will secure only the /api path of the bar.example.net domain. Here, backend is another container connected to the same network as the Caddy container, and 8080 is the internal port that this target container listens on.

    Securing Based on Matchers

    One of the most common use cases is securing endpoints based on matchers. For example, you may want to secure the endpoints that contain the word admin in the path; or you are hosting an image hosting service and want to secure the endpoints that allow uploading images while keeping the viewing endpoints open. You can use the secure block with matchers.

    Let's demonstrate the second use case above. Assume we have the following patterns:

    • The root URL (imgur.example.com) is for uploading images
    • The view URLs have the following pattern: imgur.example.com/filename.ext
    We can create a matcher so that any URL that points to a file is open while all others are secured:
    imgur.example.com {
        @isRoot {
            not path_regexp \..+$
        }
        import secure @isRoot
        reverse_proxy pictshare:80
    }
    Here, pictshare refers to the image hosting service that runs on the same network as the Caddy container. The path_regexp \..+$ matches any path that ends with a dot followed by at least one character, i.e. a file name with an extension. Because of the not, the @isRoot matcher matches every request that does not point to a file; those requests must pass Authelia authentication, while requests for files are proxied straight to the pictshare service.

    An alternative option is using handle blocks. For example, the sample below allows open access to URLs that have at least two subpaths, and secures the rest:

    dav.example.com {
        @hasMultipleSubpaths path_regexp ^/[^/]+/[^/]+(/.*)?$
        handle @hasMultipleSubpaths {
          reverse_proxy radicale:5232
        }
        handle {
          import secure *
          reverse_proxy radicale:5232
        }
    }
    Here, the radicale service is the CalDAV & CardDAV server that runs on the same network as the Caddy container. The path_regexp ^/[^/]+/[^/]+(/.*)?$ checks whether the path contains at least two subpaths. If it does, the request is proxied directly to the radicale service; if it does not, it must first pass Authelia authentication.

    FreshRSS

    FreshRSS is a self-hosted RSS feed reader; a great tool for keeping up with your favorite websites and reading the latest news.

    Docker Compose File

    services:
        freshrss:
          image: freshrss/freshrss:latest
          container_name: freshrss
          restart: unless-stopped
          networks:
            - caddy_network
            - internet_network
          environment:
            - TZ=Europe/Istanbul
            - CRON_MIN=0
          volumes:
            - freshrss_data:/var/www/FreshRSS/data
            - freshrss_extensions:/var/www/FreshRSS/extensions
          logging:
            driver: "json-file"
            options:
              max-size: "10m"
      
    volumes:
      freshrss_data:
      freshrss_extensions:

    networks:
      caddy_network:
        external: true
      internet_network:
        external: true
    After the container is up, open the webpage and finish the installation (after adding the Caddy handle block). For authentication, either Authelia or the native support can be used; we prefer Authelia for consistency.

    Backup and Restore

    FreshRSS stores all its data in two volumes, which can be backed up. To restore the FreshRSS service, restore these volumes. Detailed instructions will be given in a future post.
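
    Until then, the two named volumes can be archived with a throwaway container, as with Technitium; the volume names below assume the default compose project prefix (check with docker volume ls):

    sudo docker run --rm -v freshrss_freshrss_data:/data:ro -v $(pwd):/backup alpine tar czf /backup/freshrss_data.tar.gz -C /data .
    sudo docker run --rm -v freshrss_freshrss_extensions:/ext:ro -v $(pwd):/backup alpine tar czf /backup/freshrss_extensions.tar.gz -C /ext .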

    Caddy Handle Block

    freshrss.example.com {
        import secure *
        reverse_proxy freshrss:80 {
            header_up X-Real-IP {remote_host}
        }
    }

    CalibreWeb

    CalibreWeb is a web app providing a clean interface for browsing, reading and downloading eBooks using an existing Calibre database. It is also possible to integrate Google Drive and to edit metadata and your Calibre library through the app itself. This software is a fork of the original library (which does not support Docker) and is licensed under the GPL v3 License.

    Docker Compose File

    Note that the backup restoration should be done before spinning up the docker containers.

    services:
        calibreweb:
          image: lscr.io/linuxserver/calibre-web:latest
          container_name: calibreweb
          networks:
            - caddy_network
            - internet_network
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=Europe/Istanbul
            - DOCKER_MODS=linuxserver/mods:universal-calibre #optional
          volumes:
            - ./config:/config
            - ./library:/books
          restart: unless-stopped
      
    networks:
      caddy_network:
        external: true
      internet_network:
        external: true

    Let's discuss a few points about this compose file:
    • The PUID and PGID are the user and group IDs of the user that will own the files. You can find these values by running the id command in the terminal.
    • The container requires internet access for various tasks, such as fetching book information and metadata or sending books to devices over SMTP.

    Backup and Restore

    The config and library directories should be saved to back up the CalibreWeb service.

    To restore the CalibreWeb service, you can copy the backed up directories to the server. If you have encrypted rsync backups, you have to restore them with the same keys and use restore. If you have plain text backups, copy the backup directories to the server with scp:

    scp -P port -r "C:\Users\user\calibreweb" user@server.example.com:/home/user/temp
    The backup will contain the config and library directories. The library folder should have a subfolder named Calibre, which contains the books and the database.

    Note that the restoration should be done before spinning up the container.

    Caddy Handle Block

    As CalibreWeb has its own authentication, it does not require a secure block. The internal port of CalibreWeb is 8083, so the following handle block can be used for reverse proxying to CalibreWeb:

    calibre.example.com {
        reverse_proxy calibreweb:8083 {
            header_up X-Real-IP {remote_host}
            }
    }

    Silverbullet

    Silverbullet is a fantastic markdown-based note-taking app.

    Docker Compose File

    services:
        silverbullet:
          image: zefhemel/silverbullet
          container_name: silverbullet
          restart: unless-stopped
          networks:
            - caddy_network
          volumes:
            - ./space:/space
      
    networks:
      caddy_network:
        external: true
    Let's discuss a few points about this compose file:
    • The space directory is where the notes are stored. It should be backed up regularly.
    • The container does not require internet access, so it is only connected to the caddy network.

    Backup and Restore

    Silverbullet saves the notes in the space directory as ordinary markdown files. Therefore, the space directory should be backed up regularly. To restore the Silverbullet service, you can copy the backed up space directory to the server. If you have encrypted rsync backups, you have to restore them with the same keys and use restore. If you have plain text backups, copy the backup directory to the server with scp.

    Caddy Handle Block

    Since Silverbullet does not have its own authentication, it requires a secure block. The internal port of Silverbullet is 3000, so the following handle block can be used for reverse proxying to Silverbullet:

    silverbullet.example.com {   
        import secure *
        reverse_proxy silverbullet:3000
    }

    Radicale

    Radicale is a simple CalDAV (calendar) and CardDAV (contact) server that runs on your server. It is a great tool for syncing your calendars and contacts across all your devices. It does not have an official Docker image, so we use tomsquest/docker-radicale. It is extensible, but we will use the default version; the optional web client RadicaleInfCloud is quite outdated in terms of UI, and there are better alternatives.

    Configuration and Users

    Radicale uses a config file and a users file for its configuration and user database. We create these files under the config/ directory on the host and map them to their appropriate locations within the container.

    Creating an Encrypted User File

    We want to use an encrypted user file, since plain text is quite insecure. Radicale uses htpasswd files, a feature borrowed from Apache. As we don't want to install Apache just for this, we use a temporary container:

    sudo docker run --name htpasswd-temp -d httpd:latest tail -f /dev/null
    Then we create a user file:
    docker exec htpasswd-temp htpasswd -c -m -b /tmp/users myuser mypassword
    Now we copy the file to the host:
    docker cp htpasswd-temp:/tmp/users /path/to/your/host/users
    Finally, we remove the temporary container:
    docker stop htpasswd-temp 
    docker rm htpasswd-temp
    In the end, the file is at config/users on the host (relative to the compose file).
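
    Alternatively, the four steps can be collapsed into a single throwaway container: the -n flag makes htpasswd print the entry to stdout instead of writing a file, so it can be redirected straight into the host file:

    docker run --rm httpd:latest htpasswd -nbm myuser mypassword > config/users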

    Creating a Configuration File

    The stock config file says that Radicale looks for the configuration in /etc/radicale/config, but the container version looks for it in /config/config, so we map it accordingly in the compose file. The file is named config and lives under the config/ directory relative to the compose file:

    # Config file for Radicale - A simple calendar server
    #
    # Place it into /etc/radicale/config (global) <- THIS IS NOT VALID FOR CONTAINER
    # or ~/.config/radicale/config (user)
    #
    # The current values are the default ones
    
    [server]
    
    # CalDAV server hostnames separated by a comma
    # IPv4 syntax: address:port
    # IPv6 syntax: [address]:port
    # For example: 0.0.0.0:9999, [::]:9999
    #hosts = localhost:5232
    hosts = 0.0.0.0:5232
    
    # Max parallel connections
    #max_connections = 8
    
    # Max size of request body (bytes)
    #max_content_length = 100000000
    
    # Socket timeout (seconds)
    #timeout = 30
    
    # SSL flag, enable HTTPS protocol
    #ssl = False
    
    # SSL certificate path
    #certificate = /etc/ssl/radicale.cert.pem
    
    # SSL private key
    #key = /etc/ssl/radicale.key.pem
    
    # CA certificate for validating clients. This can be used to secure
    # TCP traffic between Radicale and a reverse proxy
    #certificate_authority =
    
    [encoding]
    
    # Encoding for responding requests
    #request = utf-8
    
    # Encoding for storing local collections
    #stock = utf-8
    
    
    [auth]
    
    # Authentication method
    # Value: none | htpasswd | remote_user | http_x_remote_user
    type = htpasswd
    
    # Htpasswd filename
    htpasswd_filename = /config/users
    
    # Htpasswd encryption method
    # Value: plain | bcrypt | md5
    # bcrypt requires the installation of radicale[bcrypt].
    htpasswd_encryption = md5
    
    # Incorrect authentication delay (seconds)
    # delay = 1
    
    # Message displayed in the client when a password is needed
    # realm = Radicale - Password Required
    
    [rights]
    
    # Rights backend
    # Value: none | authenticated | owner_only | owner_write | from_file
    # type = owner_only
    
    # File for rights management from_file
    #file = /etc/radicale/rights
    
    
    [storage]
    
    # Storage backend
    # Value: multifilesystem | multifilesystem_nolock
    #type = multifilesystem
    
    # Folder for storing local collections, created if not present
    #filesystem_folder = /var/lib/radicale/collections
    filesystem_folder = /data/collections
    
    # Delete sync token that are older (seconds)
    #max_sync_token_age = 2592000
    
    # Command that is run after changes to storage
    # Example: ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by "%(user)s)
    #hook =
    
    
    [web]
    
    # Web interface backend
    # Value: none | internal
    # type = none
    
    
    [logging]
    
    # Threshold for the logger
    # Value: debug | info | warning | error | critical
    level = info
    
    # Don't include passwords in logs
    #mask_passwords = True
    
    
    [headers]
    
    # Additional HTTP headers
    #Access-Control-Allow-Origin = *

    Docker Compose File

    services:
        radicale:
            image: tomsquest/docker-radicale
            container_name: radicale
            init: true
            read_only: true
            networks:
                - caddy_network
            security_opt:
                - no-new-privileges:true
            cap_drop:
                - ALL
            cap_add:
                - SETUID
                - SETGID
                - CHOWN
                - KILL
            deploy:
                resources:
                    limits:
                        memory: 256M
                        pids: 50
            healthcheck:
                test: curl -f http://127.0.0.1:5232 || exit 1
                interval: 30s
                retries: 3
            restart: unless-stopped
            volumes:
                - /home/user/containers/radicale/data:/data
                - /home/user/containers/radicale/config/config:/config/config:ro
                - /home/user/containers/radicale/config/users:/config/users:ro

    networks:
        caddy_network:
            external: true

    Let's discuss a few points about this compose file:
    • The read_only flag is set to true to prevent the container from writing to the filesystem.
    • The security_opt and cap_drop flags are set to restrict the container's capabilities.
    • The deploy section is used to limit the resources of the container.
    • The healthcheck section is used to check the health of the container.
    • The container does not need internet access, so it is connected only to caddy network.

    Backup and Restore

    Radicale stores all its data in the /data directory, so it should be backed up regularly. To restore the Radicale service, you can copy the backed up /data directory to the server. If you have encrypted rsync backups, you have to restore them with the same keys and use restore. If you have plain text backups, copy the backup directory to the server with scp.

    Caddy Handle Block

    As Radicale has its own authentication, it does not require a secure block. The internal port of Radicale is 5232, so the following handle block can be used for reverse proxying to Radicale:

    radicale.example.com {
        reverse_proxy radicale:5232 {
            header_up X-Real-IP {remote_host}
        }
    }

    SearXNG

    SearXNG is a privacy-respecting, hackable metasearch engine; a great tool for searching the web without being tracked by big tech companies.

    Docker Compose File

    services:
        redis:
          container_name: searxng-redis
          image: cgr.dev/chainguard/valkey:latest
          command: --save 30 1 --loglevel warning
          restart: unless-stopped
          networks:
            - searxng
          volumes:
            - valkey-data:/data
          cap_drop:
            - ALL
          cap_add:
            - SETGID
            - SETUID
            - DAC_OVERRIDE
          logging:
            driver: "json-file"
            options:
              max-size: "1m"
              max-file: "1"
      
        searxng:
          container_name: searxng
          image: docker.io/searxng/searxng:latest
          restart: unless-stopped
          networks:
            - searxng
            - caddy_network
            - internet_network
          volumes:
            - ./data:/etc/searxng:rw
          environment:
            - SEARXNG_BASE_URL=https://search.example.com/
          cap_drop:
            - ALL
          cap_add:
            - CHOWN
            - SETGID
            - SETUID
          logging:
            driver: "json-file"
            options:
              max-size: "1m"
              max-file: "1"
      
    networks:
      searxng:
        internal: true
        driver: bridge
      caddy_network:
        external: true
      internet_network:
        external: true

    volumes:
      valkey-data:
    Let's discuss a few points about this compose file:
    • The redis service (which actually runs Valkey, a Redis-compatible fork) is used as a cache for the SearXNG service. It is connected only to the searxng network.
    • The searxng service requires internet access for fetching search results, so it is connected to the internet_network.
    • The searxng service is connected to the caddy_network to communicate with the Caddy server.

    Backup and Restore

    As SearXNG keeps the user settings on clients as cookies, there is nothing to back up.

    Caddy Handle Block

    SearXNG uses the 8888 internal port. The following handle block can be used for reverse proxying to SearXNG:

    search.example.com {
        reverse_proxy searxng:8888 {
            header_up X-Real-IP {remote_host}
        }
    }

    Memos

    Memos is a privacy-first, lightweight note-taking service. I use it as a private diary to keep memoirs and to share my thoughts with my future self (or sometimes with the public, using the public link feature).

    Docker Compose File

    services:
        memos:
            image: neosmemo/memos:stable
            container_name: memos
            networks:
                - caddy_network
            volumes:
                - ./memos/:/var/opt/memos
            restart: unless-stopped

    networks:
        caddy_network:
            external: true
    Let's discuss a few points about this compose file:
    • The memos service does not require internet access, so it is only connected to the caddy network.

    Backup and Restore

    Memos stores all its data in the mapped ./memos directory, so it should be backed up regularly. To restore the Memos service, you can copy the backed up ./memos directory to the server. If you have encrypted rsync backups, you have to restore them with the same keys and use restore. If you have plain text backups, copy the backup directory to the server with scp.

    Caddy Handle Block

    Memos has its own authentication and uses the 5230 internal port. The following handle block can be used for reverse proxying to Memos:

    memos.example.com {
        reverse_proxy memos:5230 {
            header_up X-Real-IP {remote_host}
        }
    }

    Stirling-PDF

    Stirling-PDF is a robust, locally hosted web-based PDF manipulation tool.

    Configuration

    For OCR, the Tesseract data file for the target language should be downloaded and mapped into the container. See the official documentation for more information.

    Docker Compose File

    services:
        stirling-pdf:
            image: frooodle/s-pdf:latest
            container_name: stirling
            networks:
                - caddy_network
                - internet_network
            volumes:
                - ./tessdata:/usr/share/tessdata # Required for extra OCR languages
                - ./configs:/configs
                # - /location/of/customFiles:/customFiles/
                # - /location/of/logs:/logs/
            environment:
                - DOCKER_ENABLE_SECURITY=false
                - INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false
                - LANGS=tr_TR

    networks:
        caddy_network:
            external: true
        internet_network:
            external: true
    Let's discuss a few points about this compose file:
    • The stirling-pdf service requires internet access for downloading various resources such as fonts, so it is connected to the internet_network. If you do not enable internet access, the container will fail to start.
    • The stirling-pdf service is connected to the caddy_network to communicate with the Caddy server.

    Caddy Handle Block

    Stirling-PDF has its own authentication, but we want it to be open to the public, so we do not define any authentication. The internal port of Stirling-PDF is 8080, so the following handle block can be used for reverse proxying to Stirling-PDF:

    pdf.example.com {
        reverse_proxy stirling:8080 {
            header_up X-Real-IP {remote_host}
        }
    }

    LinkAce

    LinkAce is a self-hosted bookmark archive; a great tool for saving your bookmarks and accessing them from anywhere.

    Configuration

    The .env file should be created in the same directory as the compose file. The file should contain the following variables:

    COMPOSE_PROJECT_NAME=linkace
    # The app key is generated later, please leave it like that
    APP_KEY=
    
    ## Configuration of the database connection
    ## Attention: Those settings are configured during the web setup, please do not modify them now.
    # Set the database driver (mysql, pgsql, sqlsrv, sqlite)
    DB_CONNECTION=mysql
    # Set the host of your database here
    DB_HOST=db
    # Set the port of your database here
    DB_PORT=3306
    # Set the database name here
    DB_DATABASE=linkace
    # Set both username and password of the user accessing the database
    DB_USERNAME=linkace
    # Wrap your password into quotes (") if it contains special characters
    DB_PASSWORD=ChangeThisToASecurePassword!
    
    ## Redis cache configuration
    # Set the Redis connection here if you want to use it
    REDIS_HOST=redis
    REDIS_PASSWORD=ChangeThisToASecurePassword!
    REDIS_PORT=6379

    The .env file should be writable by the container, because on the first run the app key is generated and written to this file; the first login setup also modifies it. For this reason, we make the file writable:

    chmod 666 .env
    If you get a database error when you navigate to the app page, most likely the container cannot write to the .env file. You can check the logs with the method described earlier and look for the following error:
    production.ERROR: file_put_contents(/app/.env): Failed to open stream: Permission denied {"exception":"[object] (ErrorException(code: 0): file_put_contents(/app/.env): Failed to open stream: Permission denied at /app/vendor/laravel/framework/src/Illuminate/Filesystem/Filesystem.php:204)

    Docker Compose File

    services:
        # --- MariaDB
        db:
          image: docker.io/library/mariadb:11.2
          container_name: linkace-db
          restart: unless-stopped
          command: mariadbd --character-set-server=utf8mb4 --collation-server=utf8mb4_bin
          environment:
            - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
            - MYSQL_USER=${DB_USERNAME}
            - MYSQL_PASSWORD=${DB_PASSWORD}
            - MYSQL_DATABASE=${DB_DATABASE}
          volumes:
            - db:/var/lib/mysql
          networks:
            - linkace_network
      
        # --- LinkAce Image with PHP and nginx
        app:
          image: docker.io/linkace/linkace:simple
          container_name: linkace
          restart: unless-stopped
          networks:
            - caddy_network
            - linkace_network
            - internet_network
          depends_on:
            - db
          #ports:
            #- "0.0.0.0:80:80"
            #- "0.0.0.0:443:443"
          volumes:
            - ./.env:/app/.env
            - ./backups:/app/storage/app/backups
            - linkace_logs:/app/storage/logs
            # Remove the hash of the following line if you want to use HTTPS for this container
            #- ./nginx-ssl.conf:/etc/nginx/conf.d/default.conf:ro
            #- /path/to/your/ssl/certificates:/certs:ro
      
    volumes:
      linkace_logs:
      db:
        driver: local

    networks:
      caddy_network:
        external: true
      internet_network:
        external: true
      linkace_network:
        internal: true
        driver: bridge
    Let's discuss a few points about this compose file:
    • The db service is used as the database for the LinkAce service. It is connected to the linkace_network.
    • The app service requires internet access for fetching various resources such as website thumbnails, so it is connected to the internet_network. If you want to completely isolate, you can remove this network connection.
    • The app service is connected to the caddy_network to communicate with the Caddy server.

    Backup and Restore

    Starting with LinkAce 1.10.4, backups to the local file system are enabled by default. Backing up this directory should be enough to restore the LinkAce service. To restore the LinkAce service, you can copy the backed up backups directory to the server. If you have encrypted rsync backups, you have to restore them with the same keys and use restore. If you have plain text backups, copy the backup directory to the server with scp.

    Caddy Handle Block

    LinkAce has its own authentication, and the internal port of LinkAce is 80, so the following handle block can be used for reverse proxying to LinkAce:

    linkace.example.com {
        reverse_proxy linkace:80 {
            header_up X-Real-IP {remote_host}
        }
    }

    Post-Installation Setup

    Once the container is running and the Caddy server has been reloaded with the new handle block, navigate to the webpage and finish the installation. The app key is generated and written to the .env file, and the first login setup also modifies it. After the setup is complete, write access to the .env file should be restricted again:

    chmod 644 .env

    Picsur

    Picsur is a self-hosted image hosting service; a great tool for uploading and sharing images. Its best feature is that only registered users can upload, while anyone can view the uploaded images.

    Docker Compose File

    services:
        picsur:
          image: ghcr.io/caramelfur/picsur:latest
          container_name: picsur
          environment:
            PICSUR_DB_HOST: picsur_postgres
      
        ## The default username is admin; this is not modifiable
            PICSUR_ADMIN_PASSWORD: changePassword
      
            ## Maximum accepted size for uploads in bytes
            # PICSUR_MAX_FILE_SIZE: 128000000
      
            ## Warning: Verbose mode might log sensitive data
            # PICSUR_VERBOSE: "true"
          restart: unless-stopped
          networks:
            - caddy_network
            - picsur_network
      
        picsur_postgres:
          image: postgres:14-alpine
          container_name: picsur_postgres
          environment:
            POSTGRES_DB: picsur
            POSTGRES_PASSWORD: picsur
            POSTGRES_USER: picsur
          restart: unless-stopped
          networks:
            - picsur_network
          volumes:
            - ./data:/var/lib/postgresql/data
      
      networks:
        caddy_network:
          external: true
        picsur_network:
          internal: true
          driver: bridge
    Let's discuss a few points about this compose file:
    • The picsur service is connected to the caddy_network to communicate with the Caddy server.
    • The picsur_postgres service is used as the database for the Picsur service. It is connected to the picsur_network.
    • I have changed the db volume from a Docker volume (as in the official documentation) to a mapped directory.

    Backup and Restore

    Picsur stores all its data in the mapped ./data directory, so it should be backed up regularly. To restore the Picsur service, copy the backed-up ./data directory to the server: if you have encrypted rsync backups, you have to restore them with the same keys; if you have plain-text backups, copy the backup directory to the server with scp.
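
    Alternatively, a logical dump of the database can be taken while the stack is running. A sketch using the container name and credentials from the compose file above:

    # Dump the Picsur database to a file on the host
    sudo docker exec picsur_postgres pg_dump -U picsur picsur > picsur-backup.sql
    # Restore it later into a running container
    cat picsur-backup.sql | sudo docker exec -i picsur_postgres psql -U picsur picsur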

    Caddy Handle Block

    Picsur has its own authentication, and the internal port of Picsur is 8080, so the following handle block can be used for reverse proxying to Picsur:

    picsur.example.com {
        reverse_proxy picsur:8080 {
            header_up X-Real-IP {remote_host}
        }
    }

    MMDL (Manage My Damn Life)

    MMDL is a self-hosted front-end for managing your CalDAV tasks and calendars. The built-in UI of Radicale is extremely archaic, so we use MMDL to manage our calendars and tasks.

    Configuration File

    The config file is named .env.local, and the most important setting in it is disabling registrations. The file can live in the same directory as the compose file:

    ############################################################
    ## The following variables NEED to be set before execution.
    ############################################################
    
    ## Database variables.
    DB_HOST=db
    DB_USER=user # Change
    DB_PASS=password # Change
    DB_PORT="3306"
    DB_NAME=sample_install_mmdm
    DB_CHARSET="utf8mb4"
    DB_COLLATE="utf8mb4_0900_ai_ci"
    
    ## AES Encryption Password
    ## This is used to encrypt CalDAV passwords in the database.
    
    AES_PASSWORD=PASSWORD
    
    ############################################################
    ## The following variables aren't required for basic functionality,
    ## but might be required to be set for some additional features.
    ############################################################
    
    ## SMTP Settings
    SMTP_HOST=host
    SMTP_USERNAME=username
    SMTP_PASSWORD=password
    SMTP_FROMEMAIL=test@example.com
    SMTP_PORT=25
    SMTP_USESECURE=false
    
    ## Enable NextAuth.js for third party authentication. It's highly recommended that you use a third party authentication service. Please note that third party authentication will eventually become the default option in the future versions of MMDL (probably by v1.0.0).
    
    # The following variable's name has changed in v0.4.1
    USE_NEXT_AUTH=false
    
    # This is a variable used by NextAuth.js. This must be the same as NEXT_PUBLIC_BASE_URL.
    NEXTAUTH_URL="http://localhost:3000/"
    
    # This is a variable used by NextAuth.js. Must be generated.
    # https://next-auth.js.org/configuration/options#nextauth_secret
    NEXTAUTH_SECRET="REALLY_SUPER_STRONG_SECRET_KEY"
    
    ##  Refer to docs for guide to set following variables. Ignore if NEXT_PUBLIC_USE_NEXT_AUTH is set to false. Uncomment as required.
    
    # KEYCLOAK_ISSUER_URL="http://localhost:8080/realms/MMDL"
    # KEYCLOAK_CLIENT_ID="mmdl-front-end"
    # KEYCLOAK_CLIENT_SECRET="SAMPLE_CLIENT_SECRET"
    
    # GOOGLE_CLIENT_ID=""
    # GOOGLE_CLIENT_SECRET=""
    
    # AUTHENTIK_CLIENT_ID=""
    # AUTHENTIK_CLIENT_SECRET=""
    # AUTHENTIK_ISSUER=""
    
    
    
    ############################################################
    ## The following variables aren't required to be set,
    ## but affect behaviour that you might want to customise.
    ############################################################
    
    # User Config
    NEXT_PUBLIC_DISABLE_USER_REGISTRATION=true  # <-- IMPORTANT
    
    # After this value is exceeded, old session IDs (ssid) will be deleted.
    MAX_CONCURRENT_LOGINS_ALLOWED=3
    
    # Maximum length of OTP validity, in seconds.
    MAX_OTP_VALIDITY=1800
    
    # Maximum length of a login session in seconds.
    MAX_SESSION_LENGTH=2592000
    
    # Enforce max length of session.
    ENFORCE_SESSION_TIMEOUT=true
    
    ############################################################
    ## The following variables are advanced settings,
    ## and must be only changed in case you're trying something
    ## specific.
    ############################################################
    
    #Whether user is running install from a docker image.
    DOCKER_INSTALL="false"
    
    ## General Config
    NEXT_PUBLIC_API_URL="http://localhost:3000/api"
    
    ## Debug Mode
    NEXT_PUBLIC_DEBUG_MODE=true
    
    #Max number of recursions for finding subtasks. Included so the recursive function doesn't go haywire.
    #If subtasks are not being rendered properly, try increasing the value.
    NEXT_PUBLIC_SUBTASK_RECURSION_CONTROL_VAR=100
    
    ## Test Mode
    NEXT_PUBLIC_TEST_MODE=false

    Docker Compose File

    Even though MMDL and Radicale run on the same host, MMDL is written to communicate with a remote CalDAV server. For this reason, it should reach the Radicale server through the internet network. The compose file is as follows:

    services:
        app:
          image: intriin/mmdl:latest
          container_name: mmdl
          depends_on:
            - db
          networks:
            - caddy_network
            - internet_network
            - mmdl_network
          restart: always
          environment:
            DB_HOST: db
          env_file:
            - .env.local
      
        db:
          image: mysql
          container_name: mmdl-db
          restart: always
          expose:
            - 3306
          networks:
            - mmdl_network
          environment:
          ############################################################
          ## The following variables NEED to be set before execution.
          ############################################################
            #DB_NAME and MYSQL_DATABASE must be the same.
            MYSQL_DATABASE: sample_install_mmdm
      
            # This is the user name and password of your mysql user that will be created. These values must be used in DB_USER and DB_PASS variables in the .env file that you will create.
            MYSQL_USER: user   # Change
            MYSQL_PASSWORD: password # Change
      
            # This defines the root password of mysql in the container. You can use the root user too.
            MYSQL_ROOT_PASSWORD: root
          ############################################################
          ## The following variables are advanced settings,
          ## and must be only changed in case you're trying something
          ## specific.
          ############################################################
            MYSQL_ALLOW_EMPTY_PASSWORD: ok
            MYSQL_ROOT_HOST: '%'
      
      networks:
        caddy_network:
          external: true
        internet_network:
          external: true
        mmdl_network:
          internal: true
          driver: bridge

    Backup and Restore

    Since MMDL is only a front-end, there is not much need for backups, except for the account details that are defined within the app. For these, a dump of the MySQL server inside the db container can be taken. To restore the MMDL service, copy the backed-up MySQL dump to the server and restore the database from it.
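
    A minimal sketch of such a dump, using the container name and root credentials from the compose file above:

    # Dump the MMDL database to a file on the host
    sudo docker exec mmdl-db mysqldump -u root -proot sample_install_mmdm > mmdl-backup.sql
    # Restore the dump into a fresh database
    cat mmdl-backup.sql | sudo docker exec -i mmdl-db mysql -u root -proot sample_install_mmdm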

    Caddy Handle Block

    For now, MMDL has its own authentication (it may be replaced by third-party authentication in future versions). The internal port of MMDL is 3000, so the following handle block can be used for reverse proxying to MMDL:

    mmdl.example.com {
        reverse_proxy mmdl:3000 {
            header_up X-Real-IP {remote_host}
        }
    }

    First Run and DAV Server Configuration

    Once the container is running and the Caddy server is restarted after the handle block is added, navigate to the mmdl.example.com/install page to create an admin user and finish the installation.

    Once an admin user is created, log in as the user and go to Settings (gear icon) > CalDAV Accounts > Add:

    • Account Name: The name of the account for MMDL. This is just a name to distinguish different accounts.
    • Server URL: This is the full URL of the Radicale DAV server without any user suffix, e.g. https://calendar.example.com (you can sanity-check it with the curl command below this list)
    • Username: Your Radicale account username
    • Password: Your Radicale account password
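
    To sanity-check the server URL and credentials before adding the account, you can send a WebDAV request to the server. A sketch; replace the username, password, and domain with your own:

    # A 207 Multi-Status response means the URL and credentials are correct
    curl -u username:password -X PROPFIND -H "Depth: 0" https://calendar.example.com/username/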

    As an additional note, you should uncheck "Allow User Registration" under settings, as sometimes setting the environment variable does not work.

    Paperless-NGX

    Paperless-NGX is a document management system that transforms your physical documents into a searchable online archive so you can keep, well, less paper.

    Configuration Files

    Paperless-NGX requires two configuration files: .env and docker-compose.env.

    .env File

    ## .env
    COMPOSE_PROJECT_NAME=paperless

    docker-compose.env File

    ## Docker-compose.env
    # The UID and GID of the user used to run paperless in the container. Set this
    # to your UID and GID on the host so that you have write access to the 
    # consumption directory.
    #USERMAP_UID=1000
    #USERMAP_GID=1000
    
    # Additional languages to install for text recognition, separated by a
    # whitespace. Note that this is
    # different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
    # language used for OCR.
    # The container installs English, German, Italian, Spanish and French by
    # default.
    # See https://packages.debian.org/search?keywords=tesseract-ocr-&searchon=names&suite=buster
    # for available languages.
    PAPERLESS_OCR_LANGUAGES=tur eng
    
    ###############################################################################
    # Paperless-specific settings                                                 #
    ###############################################################################
    
    # All settings defined in the paperless.conf.example can be used here. The
    # Docker setup does not use the configuration file.
    # A few commonly adjusted settings are provided below.
    
    # This is required if you will be exposing Paperless-ngx on a public domain
    # (if doing so please consider security measures such as reverse proxy)
    PAPERLESS_URL=https://paperless.example.com
    
    # Adjust this key if you plan to make paperless available publicly. It should
    # be a very long sequence of random characters. You don't need to remember it.
    PAPERLESS_SECRET_KEY=some long secret string
    
    # Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
    PAPERLESS_TIME_ZONE=Europe/Istanbul
    
    # The default language to use for OCR. Set this to the language most of your
    # documents are written in.
    PAPERLESS_OCR_LANGUAGE=tur
    
    # Set if accessing paperless via a domain subpath e.g. https://domain.com/PATHPREFIX and using a reverse-proxy like traefik or nginx
    #PAPERLESS_FORCE_SCRIPT_NAME=/PATHPREFIX
    #PAPERLESS_STATIC_URL=/PATHPREFIX/static/ # trailing slash required

    Docker Compose File

    The following compose file uses SQLite as the database. If you are planning to host a large number of documents, you should consider using a MySQL or PostgreSQL database.

    services:
        broker:
            image: docker.io/library/redis:7
            container_name: paperless-redis
            restart: unless-stopped
            networks:
              - paperless_network
            volumes:
              - redisdata:/data

        webserver:
            image: ghcr.io/paperless-ngx/paperless-ngx:latest
            container_name: paperless
            restart: unless-stopped
            networks:
              - caddy_network
              - paperless_network
            depends_on:
              - broker
            volumes:
              - data:/usr/src/paperless/data
              - media:/usr/src/paperless/media
              - ./export:/usr/src/paperless/export
              - ./consume:/usr/src/paperless/consume
            env_file: docker-compose.env
            environment:
              PAPERLESS_REDIS: redis://broker:6379

    volumes:
        data:
        media:
        redisdata:

    networks:
        caddy_network:
            external: true
        paperless_network:
            internal: true

    In the above compose file, no service is connected to a network with internet access. However, if you have enabled OCR for any non-default language in the configuration file, Paperless needs to download the language files on startup. Since this download cannot complete without internet access, the container will fail to start. To fix this, you can temporarily connect the webserver service to the internet network after running the stack:

    sudo docker network connect internet_network paperless
    Once the server is up and running, you can disconnect the service from the internet network:
    sudo docker network disconnect internet_network paperless
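
    Between connecting and disconnecting, you can watch the container logs to confirm that the language files were downloaded and the server came up:

    sudo docker logs -f paperless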

    Backup and Restore

    The best option to back up Paperless data is using its Document Exporter tool. For the most basic approach, backing up everything to the mapped ./export directory is enough. To create backups at regular intervals, the following command can be saved as a script and run with a cron job:

    sudo docker compose exec -T webserver document_exporter ../export
    Note that this exports everything within the service, including the user accounts. Important: the command above writes all documents with no encryption; if you have sensitive data, protect the results.
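
    A minimal sketch of such a script (the compose directory path is an assumption; adjust it to your layout):

    #!/bin/bash
    # backup-paperless.sh: export all Paperless data into the mapped ./export directory
    cd /home/user/containers/paperless || exit 1
    docker compose exec -T webserver document_exporter ../export

    Make it executable with chmod +x and add it to root's crontab (sudo crontab -e), e.g. 0 3 * * * /home/user/containers/paperless/backup-paperless.sh to run it nightly at 03:00.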

    The backups created by the document exporter tool can be restored using the Document Importer tool by following the steps below:

    1. Upload the directory that contains the results of an export to the server using scp or any other tool:
      scp -P port -r "C:\Users\user\Paperless\backup" user@server.example.com:/home/user/containers/paperless/backup
    2. Move the contents of this uploaded folder into the mapped ./export directory:
      mv backup/* export/
    3. Launch an interactive terminal within the paperless container:
      sudo docker exec -it paperless /bin/bash
    4. The export directory resides on /usr/src/paperless/export/ within the container. Verify the contents of this folder with ls so that it contains the mounted backup files.
    5. If the contents are OK, start the import process:
      document_importer /usr/src/paperless/export
      If the versions of the export and the importer tool are different, you may get a response similar to the following:
      Version mismatch: Currently 2.9.0, importing 2.7.2. Continuing, but import may fail.
      If the database schemas did not change between the versions, importing should finish without any errors. If there are breaking changes, you may need to consult the official documentation or create a new issue on the GitHub page.
    Note that this process will restore the user accounts within the backup.

    Caddy Handle Block

    As Paperless has its own authentication, the following handle block can be used for reverse proxying to Paperless:

    paperless.example.com {
        reverse_proxy paperless:8000 {
            header_up X-Real-IP {remote_host}
        }
    }

    Conclusions

    We now have a server with many services that are organized in a better way. In the next posts, we will discuss how to back up the data securely and how to monitor the services.