Mustafa Can Yücel

Setting Up Your Debian Server 3: Let's Encrypt, GitHub Actions

Note From the Future

I strongly advise you to use Caddy instead of Apache; it is much easier to set up and use. For setting up Caddy, you can skip ahead to the Switching to Caddy Server post. If you still want to use Apache, you can continue reading this post.

Configuration with Apache

It is always better to have HTTPS instead of HTTP. In this part, we will use Let's Encrypt to get a free SSL certificate for our site, update our site to use HTTPS, and redirect HTTP requests to HTTPS.

First, we need a Let's Encrypt client. The easiest option is certbot, which is available as a snap package. For this, we need snapd installed, so let's install snapd first:

sudo apt install snapd
sudo snap install core 
sudo snap refresh core
In order to see if snapd is installed correctly, we can install the "Hello World" test snap:
sudo snap install hello-world
If we get a message like "hello-world 6.3 from Canonical✓ installed", then snapd is working correctly. If not, closing the terminal and reconnecting may help, since the snap paths are added to the shell on login. If it still doesn't work, you can check out the snapcraft forum.
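We can also print the snap version to confirm that the snapd daemon is running (the exact version numbers will differ on your system):
snap version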

Now we can install certbot:

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
We can now use certbot to get a free SSL certificate for our site. Certbot has two modes: it can obtain a certificate and install it into the Apache configuration automatically, or it can obtain a certificate and simply hand us the files. We will show both, but for our website we will use the first option for now:
sudo certbot --apache
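For reference, the second mode only obtains the certificate and leaves the Apache configuration untouched; the files are placed under /etc/letsencrypt/live/<your-domain>/:
sudo certbot certonly --apache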
Certbot will ask for our email address, have us agree to the terms of service, and then let us select the domain name we want the certificate for. Once the process is completed, our site will be reachable over HTTPS, but plain HTTP will still be served as well. We can redirect HTTP requests to HTTPS by adding the following lines to the virtual host file of our site (replace example.com with your own domain):
<IfModule mod_ssl.c>
<VirtualHost *:80>
    ServerName example.com
    ServerAlias example.com
    Redirect permanent / https://example.com/
</VirtualHost>
<VirtualHost *:443>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias example.com
    DocumentRoot /var/www/example.com
    ErrorLog ${APACHE_LOG_DIR}/example.com.error.log
    CustomLog ${APACHE_LOG_DIR}/example.com.access.log combined
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
Now we can restart Apache and check if everything is working correctly. If not, we can check the error and access logs we configured above.
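A minimal test-and-restart sequence looks like this; the last command only simulates a renewal, since the certbot snap already installs a timer that renews certificates automatically:
sudo apache2ctl configtest
sudo systemctl restart apache2
sudo certbot renew --dry-run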

Updating a Website with GitHub Actions

In its current state, if we want to update our website, we either have to change the files manually or upload them over FTP. We can automate this with GitHub Actions: a workflow that updates the website automatically whenever we push a commit to our GitHub repository. We create it by adding a file named "main.yml" to the ".github/workflows" folder of the repository. The workflow is triggered on every push to the master branch; it checks out the repository and then uploads the files to our server. Note that it transfers the files over SFTP, so the SFTP server must be working (as explained in the previous post). Since our FTP user account only has access to a specific folder, we will create a symbolic link from this folder to the Apache site root folder. This way, we can update the website simply by pushing a commit to the GitHub repository. The content of the "main.yml" file is as follows:
name: 🚀 Deploy website on push

on:
    push:
        branches:
            - master
    workflow_dispatch:
        inputs:
            sync:
                description: "File synchronization"
                required: true
                default: "full"

jobs:
    deploy-master:
        name: 🎉 Deploy
        if: ${{ github.ref == 'refs/heads/master' }}
        runs-on: ubuntu-latest
        timeout-minutes: 30
        steps:
            - name: "Checkout"
              uses: actions/checkout@v3
              with:
                fetch-depth: 0
            - name: "Deploy"
              uses: milanmk/actions-file-deployer@master
              with:
                remote-protocol: "sftp"
                remote-host: ${{ secrets.FTP_SERVER }}
                remote-user: ${{ secrets.FTP_USERNAME }}
                remote-password: ${{ secrets.FTP_PASSWORD }}
                remote-path: "/home/ftpuser/mywebsite.com"
                sync: "full"
The above configuration uses the milanmk/actions-file-deployer GitHub Action. The action needs the address, username, and password of the FTP server, but it is not a good idea to store this information in the repository itself. Instead, we go to the repository settings (Settings > Secrets and variables > Actions) and add the following secrets:
  • FTP_SERVER: The IP address of our server
  • FTP_USERNAME: The username of our FTP user
  • FTP_PASSWORD: The password of our FTP user
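If you prefer the command line, the same secrets can be set with the GitHub CLI; this is just a sketch with placeholder values, assuming gh is installed and authenticated for the repository (the last command prompts for the value, so the password never ends up in shell history):
gh secret set FTP_SERVER --body "203.0.113.10"
gh secret set FTP_USERNAME --body "ftpuser"
gh secret set FTP_PASSWORD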
The above configuration performs a full synchronization at every run; you can change the sync option to "delta" to upload only the changed files, but delta mode sometimes fails to detect the changes.
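Also note that the workflow defines a "sync" input for manual runs but never uses it; as a sketch (assuming the same action and input name), the last line of the Deploy step could read the input and fall back to a full sync on normal pushes:
                sync: ${{ github.event.inputs.sync || 'full' }}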

This configuration uploads our files to the /home/ftpuser/mywebsite.com directory on the server (the remote-path above), so we have to create a symbolic link from this directory to the Apache site root directory. We can do this by running the following command on our server:

sudo ln -s /home/ftpuser/mywebsite.com /var/www/mywebsite.com
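Apache (running as www-data) must be able to follow this link, so every directory on the path needs to be world-readable and traversable. A quick way to check the permissions along the path (index.html here is just a placeholder for any file in your site) is:
namei -l /var/www/mywebsite.com/index.html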
Now every time we push a commit to our GitHub repository, the website will be updated automatically.
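To follow a deployment from the terminal, the GitHub CLI can also list and watch workflow runs (again assuming gh is set up for this repository):
gh run list --workflow "main.yml"
gh run watch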

In the next part, we will install Prometheus and Grafana to monitor our server.