Why a VPS?
When it comes to hosting a site, I can think of four options:
- Use a 100% managed solution: Push your repo to some provider, magic happens and you get a site
- Use a managed “web” hosting solution: Slightly more setup, but most of the work is done for you (DNS, certs…)
- Run your own VPS (Virtual Private Server): You have to do almost everything, but you get the most control (and can reuse resources)
- Run a home lab: a whole different beast.
Picking option 3 lets me feel like a (young) SysAdmin again, so that’s a bonus.
Getting a domain and setting up the records
Most cloud providers will offer some way of purchasing a domain.
Once the setup is done (which may take a couple of minutes) you’ll have a
console through which you can edit the DNS records. There will probably be a
default record like `www. IN A <some IP from your provider>`.
You can edit that record to remove the `www.` and use `@` instead, which
points to the root domain. You'll also want to set the IP to that of the VPS
you purchased.
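For illustration, the resulting records could look like this in zone-file notation (203.0.113.10 is a placeholder address; use your VPS's IP):

```
; root of the domain points at the VPS
@    IN  A      203.0.113.10
; optional: keep www working by aliasing it to the root
www  IN  CNAME  @
```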
Before you start messing with the system
There’s a high probability that your cloud provider offers guidelines for this. If they do, it’s probably a good idea to follow them, and research what you don’t understand.
Be careful when editing your SSH / firewall configuration! If you apply a change that blocks traffic, removes your method of authentication, or does anything else liable to terminate your current session, you might lock yourself out of the VPS.
At this point you’ll either have to use the web console (if that’s an option) or nuke it and start over.
Set a strong password for the main account — nothing more to say about that.
If you have no idea how GNU/Linux works, I suggest you mess around on a local virtual machine instead of jumping straight into production.
Securing SSH access
Setting up SSH authentication
The default SSH port is 22; we can change that. We will also disable password authentication in favour of public-key authentication.
First we generate a key pair:
```
# Use a modern algorithm
ssh-keygen -t ed25519
# You can give the key a name, so it's easier to remember which key does what.
# Use a passphrase here so the key itself isn't enough.
cat .ssh/id_mykey.pub
```

Then we deploy it on the server (copy / paste):
```
# Add a line with your public key in the file
vim ~/.ssh/authorized_keys
```

At this point, you should be able to log in using the key:
```
ssh user@your-vps-ip -i .ssh/id_mykey
```

If that works, good job. Now we can change the default port and nuke password authentication.
```
# Change the port - keep a note in case you forget
sudo vim /etc/ssh/sshd_config
```

In `sshd_config`, set:

```
Port 2222
# We'll then have to use the keys
PasswordAuthentication no
```

Now to apply the configuration:
```
sudo systemctl restart ssh
```

Log out and back in for good measure, this time adding `-p 2222` to the ssh command.
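As an aside: a typo in `sshd_config` can lock you out on the next restart. OpenSSH ships a test mode that catches parse errors before you apply anything:

```shell
# Parse sshd_config and report errors; prints nothing when the file is valid
sudo sshd -t
```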
Firewall
Ubuntu comes with ufw, which stands for Uncomplicated FireWall. It is dead
simple to use for basic functionality, so we'll go with that.
```
sudo apt install ufw

# Basic rules
sudo ufw default deny incoming
sudo ufw default allow outgoing

sudo ufw allow 2222/tcp comment 'SSH'
sudo ufw allow 80/tcp comment 'HTTP'
sudo ufw allow 443/tcp comment 'HTTPS'

sudo ufw enable
sudo ufw reload
sudo ufw status verbose
```

What the above does is forbid all incoming traffic, allow outgoing, then selectively re-enable incoming traffic:
- 2222 is the SSH port, which you really don’t want to forget
- 80 for basic HTTP traffic (which we will later re-route to HTTPS)
- 443 for HTTPS
Fail2ban
Fail2ban serves as a deterrent against repeated attempts at accessing the machine. Repeated failures result in a temporary ban. This does not reduce the attack surface (as any vulnerable service will eventually get exploited), but helps prevent log pollution.
```
sudo apt install fail2ban
```

And you're almost done. Fail2ban comes with a bunch of enabled filters for SSH, HTTP servers and much, much more. Here's the fail2ban wiki.
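Since SSH listens on 2222 rather than 22 here, it's worth telling the sshd jail about it. A minimal `/etc/fail2ban/jail.local` sketch (the numbers are illustrative, tune to taste):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled = true
port    = 2222
# ban for 10 minutes after 5 failed attempts
maxretry = 5
bantime  = 10m
```

Restart fail2ban (`sudo systemctl restart fail2ban`) to pick it up.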
Serving pages and forcing HTTPS
nginx
We’ll use a simple setup for serving HTTP requests on port 80.
```
sudo apt install nginx
```

If you access the domain now, you should get nginx's default page. There are a couple of things we need to do:
- Add a configuration file for our site
- Have a proper folder to host the files
- Have a dedicated service account (a user) that will be responsible for deploying the files.
The service account and directory
On your local machine, generate a new key pair, with a clear name and no passphrase. This will be used by the service account.
On the VPS, create the user account.
```
# Create the user account
sudo useradd -m -s /bin/bash blog

# Add a .ssh directory with authorized_keys
sudo mkdir -p /home/blog/.ssh
sudo vim /home/blog/.ssh/authorized_keys
# ^ You want to paste the new public key you generated, like you did for your
# main account.

# Fix the file permissions
sudo chmod 700 /home/blog/.ssh
sudo chmod 600 /home/blog/.ssh/authorized_keys
sudo chown -R blog:blog /home/blog/
sudo chown -R blog:blog /home/blog/.ssh
```
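sshd (with the default `StrictModes yes`) refuses keys whose files are too permissive, so it's worth double-checking the modes. A quick sketch in a scratch directory (`/tmp/demo_ssh` is purely illustrative; on the VPS, point `stat` at `/home/blog/.ssh`):

```shell
# Recreate the expected layout in a scratch location and print the octal modes
mkdir -p /tmp/demo_ssh
touch /tmp/demo_ssh/authorized_keys
chmod 700 /tmp/demo_ssh
chmod 600 /tmp/demo_ssh/authorized_keys
stat -c '%a' /tmp/demo_ssh /tmp/demo_ssh/authorized_keys
# prints:
# 700
# 600
```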
```
# Create a deployment folder
sudo mkdir -p /var/www/blog
sudo chown -R blog:blog /var/www/blog
```

At this point, you should be able to log into the VPS using
`ssh blog@<your domain> -p 2222 -i .ssh/id_blog_service_account`.
Now, you'll notice that if you run `su <your main account>` from the service
account, you can actually switch if you know the password. We can prevent that
by restricting the use of `su`.
This was done on an Ubuntu 25 machine; if you have a different system, please refer to the documentation before proceeding.
```
sudo vim /etc/pam.d/su
```

Then, inside the file:

```
# Uncomment this line
auth required pam_wheel.so
```

Going back to the service account and attempting to `su` again should now be
met with `su: Permission denied`. Note that `pam_wheel.so` only lets members of
the `wheel` group (or, if no such group exists, the group with GID 0) use `su`;
your main account keeps `sudo` either way.
Adding a configuration for our site
The excerpt below (saved as `/etc/nginx/sites-available/mydomain.conf`) is all that is needed to serve pages on port 80.

```
server {
    server_name mydomain.com;
    root /var/www/blog/;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

Now to activate the configuration and reload nginx.
```
# Create the symlink from "available" to "enabled"
sudo ln -s /etc/nginx/sites-available/mydomain.conf /etc/nginx/sites-enabled/mydomain.conf
sudo systemctl reload nginx
```

Certbot
That’s well and good, but we still need to use HTTPS. Certbot makes this entirely trivial:
```
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d mydomain.com --redirect
```

Now if you check the configuration file in sites-available, you'll notice a
couple more lines added to handle SSL and the redirect from HTTP to HTTPS.
Certbot also adds a cron job to renew the certificates automatically, which you
can confirm by running `cat /etc/cron.d/certbot`.
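To confirm the whole renewal chain works (config, web server, Let's Encrypt), certbot can also rehearse a renewal without touching your real certificates:

```shell
# Simulate certificate renewal against the staging environment
sudo certbot renew --dry-run
```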
Setting up CI / CD in Gitlab
Variables
In Gitlab, go to your project, and in the left-hand-side menu, go to
settings > CI/CD.
In there you can set variables; we'll create three of them:
- VPS_IP will hold the public IPv4 address of our VPS
- SSH_PORT is the port we set earlier (2222)
- DEPLOY_SSH_KEY is the private key we created for the service account
While the first two are trivial to set up, pay attention to the private key: You want to make sure that it is Protected, Masked and Hidden. These settings ensure that the key cannot be retrieved through Gitlab’s interface, and make it harder to steal from the pipeline’s logs.
Gitlab’s GUI will inform you that it cannot store whitespace in variables. We’ll
work around that by using base64 encoding on the key, and base64 -d in the
pipeline.
base64 IS NOT a security measure!
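That bears repeating: anyone who gets the encoded value can decode it. The round trip is trivial, shown here with a throwaway string standing in for the real key:

```shell
# Encode the way you'd paste it into GitLab, decode the way the pipeline does
encoded=$(printf '%s' 'not-a-real-key' | base64)
printf '%s' "$encoded" | base64 -d
# prints: not-a-real-key
```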
```
cat .ssh/id_mydomain_deploy | base64
```

Pipeline
Here's the `.gitlab-ci.yml` file I used. It runs two stages: one builds the
site, the other deploys it with rsync over SSH. The artifact lives for an hour,
which is more than enough.
```
stages:
  - build
  - deploy

build:
  stage: build
  image: node:24.14.1-alpine3.23
  script:
    - npm install -g pnpm && pnpm install && pnpm astro telemetry disable
    - pnpm build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
  only:
    - main

deploy:
  stage: deploy
  image: debian:bookworm-slim
  before_script:
    - apt-get update -qq && apt-get install -y -qq rsync openssh-client
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_SSH_KEY" | base64 -d > ~/.ssh/id_ed25519
    - chmod 600 ~/.ssh/id_ed25519
    - ssh-keyscan -p $SSH_PORT -H $VPS_IP >> ~/.ssh/known_hosts
  script:
    - rsync -avz --delete -e "ssh -p $SSH_PORT -i ~/.ssh/id_ed25519" dist/ blog@$VPS_IP:/var/www/blog/
  only:
    - main
```

Now when you commit on main (change it to "master" if that is what your repo uses), the build will trigger, and your changes will apply after a minute or so.
Enjoy!