Setting up a blog with Astro - Part 2: Gitlab, VPS, CI/CD

Automating the site's deployment to the VPS using Gitlab's pipelines

Why a VPS?

When it comes to hosting a site, I can think of four options:

  1. Use a 100% managed solution: Push your repo to some provider, magic happens and you get a site
  2. Use a managed “web” hosting solution: Slightly more setup, but most of the work is done for you (DNS, certs…)
  3. Run your own VPS (Virtual Private Server): You have to do almost everything, but you get the most control (and can reuse resources)
  4. Run a home lab: a whole different beast.

Picking option 3 lets me feel like a (young) SysAdmin again, so that’s a bonus.

Getting a domain and setting up the records

Most cloud providers will offer some way of purchasing a domain.

Once the setup is done (which may take a couple of minutes) you’ll have a console through which you can edit the DNS records. There will probably be a default record like www. IN A <some IP from your provider>.

You can edit that record to replace the www. with @, which points to the root (apex) domain, and set the IP to that of the VPS you purchased.
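For illustration, assuming your VPS sits at 203.0.113.10 (an IP from the documentation range), the record would end up looking something like:

DNS records (illustrative)
@    IN A    203.0.113.10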

Before you start messing with the system

There’s a high probability that your cloud provider offers guidelines for this. If they do, it’s probably a good idea to follow them, and research what you don’t understand.

Be careful when editing your ssh / firewall configuration! If you apply a change that blocks traffic, removes an authentication method, or does anything else liable to terminate your current session, you might lock yourself out of the VPS.

At this point you’ll either have to use the web console (if that’s an option) or nuke it and start over.

Set a strong password for the main account — nothing more to say about that.

If you have no idea how GNU/Linux works, I suggest you mess around on a local virtual machine instead of jumping straight into production.

Securing SSH access

Setting up SSH authentication

The default SSH port is 22. We can change that. We will also disable password authentication in favour of key-based authentication.

First we generate a key pair:

local machine
# Use a modern algorithm, and give the key a name so it's
# easier to remember which key does what
ssh-keygen -t ed25519 -f ~/.ssh/id_mykey
# Use a passphrase here so the key alone isn't enough
cat ~/.ssh/id_mykey.pub

Then we deploy it on the server (copy / paste)

on the vps
# Add a line with your public key in the file
vim ~/.ssh/authorized_keys
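If your local machine has ssh-copy-id, it can take care of the copy / paste for you:

local machine
ssh-copy-id -i ~/.ssh/id_mykey.pub user@your-vps-ip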

At this point, you should be able to log in using the key:

local machine
ssh user@your-vps-ip -i ~/.ssh/id_mykey

If that works, good job. Now we can change the default port and nuke password authentication.

on the vps
# Edit the ssh daemon's configuration
sudo vim /etc/ssh/sshd_config
# Then set these two directives in the file:
# Change the port - keep a note of it in case you forget
Port 2222
# Disable passwords; we'll have to use the keys
PasswordAuthentication no
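Given the lockout warning from earlier, validate the file before restarting, and keep your current session open until you've confirmed that a new login works:

on the vps
# Check the configuration for syntax errors
sudo sshd -t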

Now to apply the configuration:

on the vps
sudo systemctl restart ssh

Log out and back in for good measure, this time adding -p 2222 to the ssh command.

Firewall

Ubuntu comes with ufw, which stands for Uncomplicated Firewall. It is dead simple to use for basic functionality, so we’ll go with that.

on the vps
sudo apt install ufw
# Basic rules
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp comment 'SSH'
sudo ufw allow 80/tcp comment 'HTTP'
sudo ufw allow 443/tcp comment 'HTTPS'
sudo ufw enable
sudo ufw reload
sudo ufw status verbose

What the above does is forbid all incoming traffic, allow outgoing, then selectively re-enable incoming traffic:

  • 2222 is the SSH port, which you really don’t want to forget
  • 80 for basic HTTP traffic (which we will later re-route to HTTPS)
  • 443 for HTTPS
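If you prefer, you can also swap the plain allow rule for SSH with a rate-limited one, which complements the fail2ban setup below:

on the vps
# Find and delete the existing SSH allow rule
sudo ufw status numbered
sudo ufw delete <number of the SSH rule>
# Denies an IP that opens 6 or more connections within 30 seconds
sudo ufw limit 2222/tcp comment 'SSH'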

Fail2ban

Fail2ban serves as a deterrent against brute-force attempts at accessing the machine: repeated failures result in a temporary IP ban. This does not reduce the attack surface (any vulnerable service will eventually get exploited), but it helps prevent log pollution.

on the vps
sudo apt install fail2ban

And you’re almost done. Fail2ban ships with a bunch of filters for SSH, HTTP servers and much, much more, and on Debian-based systems the SSH jail is enabled out of the box. Here’s the fail2ban wiki.
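One thing worth checking: since we moved SSH off port 22, make sure the sshd jail watches the right port. A minimal override could look like this (jail.local takes precedence over the packaged jail.conf):

/etc/fail2ban/jail.local
[sshd]
enabled = true
port = 2222

Then apply it with sudo systemctl restart fail2ban.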

Serving pages and forcing HTTPS

nginx

We’ll use a simple setup for serving HTTP requests on port 80.

on the vps
sudo apt install nginx

If you access the domain now, you should get nginx’s default page. There are a few things left to do:

  • Add a configuration file for our site
  • Have a proper folder to host the files
  • Have a dedicated service account (a user) that will be responsible for deploying the files.

The service account and directory

On your local machine, generate a new key pair, with a clear name and no passphrase. This will be used by the service account.
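Something like this, reusing the id_mydomain_deploy name that we’ll reference in the later steps:

local machine
# No passphrase (-N ''): the CI pipeline has no way to type one in
ssh-keygen -t ed25519 -f ~/.ssh/id_mydomain_deploy -N ''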

On the VPS, create the user account.

on the vps
# Create the user account
sudo useradd -m -s /bin/bash blog
# Add a .ssh directory with an authorized_keys file
sudo mkdir -p /home/blog/.ssh
sudo vim /home/blog/.ssh/authorized_keys
# ^ Paste the new public key you generated, like you did for your main account
# Fix the permissions and ownership
sudo chmod 700 /home/blog/.ssh
sudo chmod 600 /home/blog/.ssh/authorized_keys
sudo chown -R blog:blog /home/blog/
# Create a deployment folder
sudo mkdir -p /var/www/blog
sudo chown -R blog:blog /var/www/blog

At this point, you should be able to log into the VPS using ssh blog@<your domain> -p 2222 -i ~/.ssh/id_mydomain_deploy.

Now, you’ll notice that if you run su <your main account> from the service account, you can actually switch if you know the password. We can prevent that by restricting the use of su.

This was done on an Ubuntu 25 machine; if you run a different system, please refer to its documentation before proceeding.

on the vps
sudo vim /etc/pam.d/su
# Uncomment this line
auth required pam_wheel.so

Now, going back to the service account and attempting to su again should be met with su: Permission denied.
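Note that this restricts su for every account, including your main one. By default pam_wheel only lets members of the wheel group through, so if you still want to su from your main account, create that group and add yourself to it:

on the vps
# Only needed if you still want su to work from your main account
sudo groupadd wheel
sudo usermod -aG wheel <your main account>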

Adding a configuration for our site

The excerpt below is all that is needed to serve pages on port 80.

/etc/nginx/sites-available/mydomain.conf
server {
    server_name mydomain.com;
    root /var/www/blog/;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}

Now to activate the configuration and reload nginx.

on the vps
# Create the symlink from "available" to "enabled"
sudo ln -s /etc/nginx/sites-available/mydomain.conf /etc/nginx/sites-enabled/mydomain.conf
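# Validate the configuration before reloading
sudo nginx -t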
sudo systemctl reload nginx

Certbot

That’s all well and good, but we still need HTTPS. Certbot makes this entirely trivial:

on the vps
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d mydomain.com --redirect

Now if you check the configuration file in sites-available, you’ll notice a couple more lines were added to handle SSL and the redirection from HTTP to HTTPS.

Certbot also adds a cron job to renew the certificates automatically, which you can confirm by running cat /etc/cron.d/certbot.
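You can also ask Certbot to simulate a renewal, to confirm the whole chain works:

on the vps
sudo certbot renew --dry-run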

Setting up CI / CD in Gitlab

Variables

In Gitlab, go to your project and, in the left-hand side menu, open Settings > CI/CD.

In there you can set variables; we’ll create three of them:

  • VPS_IP will hold the public IPv4 address of our VPS
  • SSH_PORT is the port we set earlier (2222)
  • DEPLOY_SSH_KEY is the private key we created for the service account

While the first two are trivial to set up, pay attention to the private key: You want to make sure that it is Protected, Masked and Hidden. These settings ensure that the key cannot be retrieved through Gitlab’s interface, and make it harder to steal from the pipeline’s logs.

Gitlab’s GUI will inform you that it cannot store whitespace in variables. We’ll work around that by using base64 encoding on the key, and base64 -d in the pipeline.

base64 IS NOT a security measure!

local machine
# -w 0 disables line wrapping (GNU coreutils) so the value fits on one line
base64 -w 0 ~/.ssh/id_mydomain_deploy

Pipeline

Here’s the .gitlab-ci.yml file I used. It runs two stages: one builds the site, the other deploys it with rsync over SSH. The artifact lives for an hour, which is more than enough.

.gitlab-ci.yml
stages:
  - build
  - deploy

build:
  stage: build
  image: node:24.14.1-alpine3.23
  script:
    - npm install -g pnpm && pnpm install && pnpm astro telemetry disable
    - pnpm build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
  only:
    - main

deploy:
  stage: deploy
  image: debian:bookworm-slim
  before_script:
    - apt-get update -qq && apt-get install -y -qq rsync openssh-client
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SSH_KEY" | base64 -d > ~/.ssh/id_ed25519
    - chmod 600 ~/.ssh/id_ed25519
    - ssh-keyscan -p $SSH_PORT -H $VPS_IP >> ~/.ssh/known_hosts
  script:
    - rsync -avz --delete -e "ssh -p $SSH_PORT -i ~/.ssh/id_ed25519" dist/ blog@$VPS_IP:/var/www/blog/
  only:
    - main

Now when you push to main (change it to “master” if that is what your repo uses), the pipeline will trigger and your changes will go live after a minute or so.
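A quick way to confirm from your machine, once the pipeline goes green:

local machine
# Should return a 200 with the freshly deployed page
curl -I https://mydomain.com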

Enjoy!