Why I Run a Homelab on Three Single-Board Computers
A few years ago I realized I was spending more time configuring cloud services than actually building things. So I bought a Raspberry Pi, plugged it into my router, and started self-hosting. Three single-board computers later, I run my own DNS, media server, monitoring stack, and the portfolio you're reading this on.
This isn't a flex. It's a learning machine.
The Setup
Three ARM boards, each with a clear responsibility:
srv-core (Raspberry Pi 5) handles everything that must stay up. PiHole for DNS, Restic for backups. This box is LAN-only — no external traffic touches it. If it goes down, the whole network loses DNS resolution. It's the one I worry about.
srv-apps (Raspberry Pi 5) runs the non-critical but publicly accessible services. This portfolio site, my CV manager (a Next.js app with Puppeteer PDF generation), a dashboard listing all services, and a monitoring stack (Prometheus + Grafana). Everything runs in Docker containers.
srv-media (Rock 5B+) is the beast — 16TB of storage running Jellyfin, Radarr, Sonarr (dual-language EN+FR), qBittorrent, and Prowlarr. This one I treat with extreme care. Private tracker ratios are at stake, and re-downloading 8TB of media isn't something I want to do twice.
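Each board's services are plain Docker Compose stacks. As an illustration, a minimal fragment for the monitoring pair on srv-apps might look like this (image tags, volume paths, and port mappings here are assumptions, not the real setup):

```yaml
# Illustrative compose fragment for the monitoring stack on srv-apps.
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    restart: unless-stopped
```

The `restart: unless-stopped` policy matters more than it looks: after a power cut, every container on every board comes back without manual intervention.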
Getting Traffic In
The boards sit behind my home network with no public IP. External traffic comes in through a VPS running an nginx SNI router and WireGuard. The VPS terminates nothing — it inspects the SNI header and forwards encrypted traffic through the WireGuard tunnel to the right service. Certbot handles certificates on the VPS side.
Internet → VPS (SNI router + WireGuard) → Home network → srv-apps / srv-media
It sounds complicated, but it's remarkably stable. The WireGuard tunnel reconnects automatically, and nginx routing is stateless.
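For readers who haven't seen SNI routing before, here's a sketch of what the VPS side can look like using nginx's stream module with `ssl_preread`. The hostnames and WireGuard tunnel IPs are placeholders, not my actual topology:

```nginx
# Sketch of an SNI router: peek at the TLS ClientHello, forward the
# still-encrypted stream to the right backend over the WireGuard tunnel.
stream {
    map $ssl_preread_server_name $backend {
        portfolio.example.com  10.0.0.2:443;   # srv-apps via WireGuard
        jellyfin.example.com   10.0.0.3:443;   # srv-media via WireGuard
        default                127.0.0.1:8443; # catch-all
    }

    server {
        listen 443;
        ssl_preread on;       # read the SNI without terminating TLS
        proxy_pass $backend;
    }
}
```

Because `ssl_preread` only inspects the ClientHello, the TLS session stays end-to-end between the client and the home server — the VPS never holds the private keys for the proxied services.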
Everything Is Code
Every server is managed through Ansible. No SSH-and-pray. The entire state of each machine — Docker Compose stacks, nginx configs, environment variables — lives in git. If a board dies, I flash a new SD card, run the playbook, and I'm back up.
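To make that concrete, a stripped-down playbook task list could look like the following — the paths, host group, and stack names are illustrative, not my actual repo layout:

```yaml
# Minimal sketch of the deploy play for one board.
- hosts: srv_apps
  become: true
  tasks:
    - name: Sync Docker Compose stack definitions from the repo
      ansible.builtin.copy:
        src: stacks/
        dest: /opt/stacks/
        mode: "0644"

    - name: Bring the stack up (idempotent - no-op if already running)
      community.docker.docker_compose_v2:
        project_src: /opt/stacks/portfolio
        state: present
```

Because every task is idempotent, re-running the playbook against a healthy board changes nothing, and running it against a freshly flashed SD card rebuilds the whole machine.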
I even built custom generators in TypeScript:
- The nginx config generator reads a services.yaml definition and generates reverse proxy configs. It's a git submodule shared across srv-apps and srv-media.
- The Grafana dashboard generator reads the same service definitions and produces Prometheus scrape configs and Grafana dashboards automatically.
The generators run on my Mac during build.sh. The generated configs get committed to the server repos. The servers never generate anything — they just pull and run.
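The core of a generator like this is just a mapping from service definitions to config text. Here is a hypothetical sketch — the `ServiceDef` shape, hostnames, and ports are made up for illustration and are not the real services.yaml schema:

```typescript
// Hypothetical sketch of an nginx config generator.
interface ServiceDef {
  name: string; // subdomain, e.g. "grafana"
  host: string; // internal hostname of the board running it
  port: number; // container port to proxy to
}

// Render one nginx server block for a service.
function renderServerBlock(domain: string, svc: ServiceDef): string {
  return [
    `server {`,
    `    listen 443 ssl;`,
    `    server_name ${svc.name}.${domain};`,
    `    location / {`,
    `        proxy_pass http://${svc.host}:${svc.port};`,
    `        proxy_set_header Host $host;`,
    `    }`,
    `}`,
  ].join("\n");
}

// Example input: in the real setup this would be parsed from services.yaml.
const services: ServiceDef[] = [
  { name: "grafana", host: "srv-apps.lan", port: 3000 },
  { name: "jellyfin", host: "srv-media.lan", port: 8096 },
];

const config = services
  .map((svc) => renderServerBlock("example.com", svc))
  .join("\n\n");

console.log(config);
```

The appeal of this approach is that adding a service is a one-line YAML change; the reverse proxy config, scrape target, and dashboard all fall out of the same definition.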
Why Bother
The honest answer: I learn more from debugging a failing Docker container on an ARM board than from any tutorial. When your PiHole goes down and your entire household loses internet, you develop a very practical understanding of high availability.
But beyond the learning, there's something satisfying about knowing exactly where your data lives. My media library isn't in someone else's cloud. My DNS queries aren't logged by a third party. My portfolio isn't subject to a platform's terms of service.
Self-hosting isn't for everyone. It requires time, patience, and a tolerance for 2 AM debugging sessions when an automated update breaks something. But for engineers who want to understand infrastructure — not just use it — there's no better teacher than running it yourself.
What I'd Do Differently
If I started over:
- Start with a UPS. Power outages corrupt SD cards. I learned this the hard way.
- Use SSDs from day one. SD cards are fine for the OS, terrible for databases. I moved PostgreSQL to an SSD early on, but I should have planned for it.
- Document the network topology first. I spent too long with an ad-hoc setup before drawing the architecture properly. Once I had a clear diagram, everything became easier to reason about.
The homelab keeps growing. I'm currently experimenting with automated deployment triggers — push to main, CI builds the Docker image, and Ansible pulls it to the right server. The goal is zero-touch deploys for everything.
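One possible shape for that trigger is a CI workflow along these lines — the registry, image names, and playbook invocation are placeholders for whatever the final pipeline ends up using:

```yaml
# Hypothetical deploy trigger: build on push to main, roll out via Ansible.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image
        run: |
          docker build -t registry.example.com/portfolio:${{ github.sha }} .
          docker push registry.example.com/portfolio:${{ github.sha }}
      - name: Roll out via Ansible
        run: ansible-playbook deploy.yml -e image_tag=${{ github.sha }}
```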
If you're thinking about starting a homelab, my advice is simple: buy one board, install Docker, and self-host one thing. Pick something you actually use daily. The rest will follow naturally.