# StuRa HTW Dresden Infrastructure - NixOS Configuration
Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the hand-configured FreeBSD relay system with a modern, reproducible infrastructure.
## Architecture

### Overview

This infrastructure uses a flake-based approach with automatic host discovery:
- Centralized reverse proxy: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP host headers
- IPv6 gateway: Hetzner VPS at 2a01:4f8:1c19:96f8::1 forwards IPv6 traffic to the IPv4 proxy
- Automatic host discovery: Each subdirectory in `hosts/` becomes a NixOS configuration via `builtins.readDir`
- Global configuration: Settings in `default.nix` are automatically applied to all hosts
- ACME certificates: All services use Let's Encrypt certificates managed locally on each host
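The auto-discovery mechanism can be sketched roughly as follows (an illustrative fragment, not the repository's actual `flake.nix`; attribute and helper names are assumptions):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";

  outputs = { self, nixpkgs }:
    let
      # Every directory under hosts/ becomes a NixOS configuration
      hostNames = builtins.attrNames (builtins.readDir ./hosts);
    in {
      nixosConfigurations = nixpkgs.lib.genAttrs hostNames (name:
        nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./default.nix    # global settings applied to all hosts
            ./hosts/${name}  # host-specific configuration
          ];
        });
    };
}
```

Adding a directory under `hosts/` is then sufficient for the flake to pick it up; no central list has to be edited.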
### Network
- Network: 141.56.51.0/24
- Gateway: 141.56.51.254
- DNS: 141.56.1.1, 141.56.1.2 (HTW internal)
- Domain: htw.stura-dresden.de
## Repository Structure
```
stura-infra/
├── flake.nix                  # Main flake configuration with auto-discovery
├── default.nix                # Global settings applied to all hosts
├── hosts/                     # Host-specific configurations
│   ├── proxy/                 # Central reverse proxy (HAProxy)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── v6proxy/               # IPv6 gateway (Hetzner VPS)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── git/                   # Forgejo git server
│   │   └── default.nix
│   ├── wiki/                  # MediaWiki instance
│   │   └── default.nix
│   ├── nextcloud/             # Nextcloud instance
│   │   └── default.nix
│   └── redmine/               # Redmine project management
│       └── default.nix
└── README.md                  # This file
```

## Host Overview
| Host | IP | Type | Services | Documentation |
|---|---|---|---|---|
| proxy | 141.56.51.1 | VM | HAProxy, SSH Jump | hosts/proxy/ |
| v6proxy | 178.104.18.93 (IPv4)<br>2a01:4f8:1c19:96f8::1 (IPv6) | Hetzner VPS | HAProxy (IPv6 Gateway) | hosts/v6proxy/ |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | hosts/git/ |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | hosts/wiki/ |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | hosts/redmine/ |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | hosts/nextcloud/ |
## Deployment Methods

### Method 1: Initial Installation with nixos-anywhere (Recommended)
Use nixos-anywhere for initial system installation. This handles disk partitioning (via disko) and bootstrapping automatically.
For VM hosts (proxy):

```shell
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```

For LXC containers (git, wiki, redmine, nextcloud):

```shell
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```

This method is ideal for:
- First-time installation on bare metal or fresh VMs
- Complete system rebuilds
- Migration to new hardware
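As a convenience, the per-host install commands can be printed from a hostname-to-IP map; this dry-run helper is an illustration (not part of the repository) whose map mirrors the Host Overview table above:

```shell
#!/usr/bin/env bash
# Print (not run) the nixos-anywhere install command for each host.
# The map mirrors the Host Overview table; adjust it as hosts change.
declare -A hosts=(
  [proxy]=141.56.51.1
  [git]=141.56.51.7
  [wiki]=141.56.51.13
  [redmine]=141.56.51.15
  [nextcloud]=141.56.51.16
)
for name in "${!hosts[@]}"; do
  echo "nix run github:nix-community/nixos-anywhere -- --flake .#${name} --target-host root@${hosts[$name]}"
done
```

Piping a chosen line into `sh` (or copy-pasting it) performs the actual install.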
### Method 2: Container Tarball Deployment to Proxmox
Build and deploy LXC container tarballs for git, wiki, redmine, and nextcloud hosts.
Step 1: Build container tarball locally

```shell
nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz
```

Step 2: Copy to Proxmox host

```shell
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
```

Step 3: Create container on Proxmox
```shell
# Example for git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
  --hostname git \
  --net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
  --memory 2048 \
  --cores 2 \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --features nesting=1
# Configure storage and settings via Proxmox web interface if needed
```

Step 4: Start container

```shell
pct start 107
```

Step 5: Post-deployment configuration

- Access container: `pct enter 107`
- Follow host-specific post-deployment steps in each host's README.md
Available container tarballs:
```shell
nix build .#containers-git
nix build .#containers-wiki
nix build .#containers-redmine
nix build .#containers-nextcloud
```
Note: The proxy host is a full VM and does not have a container tarball. Use Method 1 or 3 for proxy deployment.
### Method 3: ISO Installer
Build a bootable ISO installer for manual installation on VMs or bare metal.
Build ISO:

```shell
nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso
```

Build VM for testing:

```shell
nix build .#installer-vm
```

Deployment:
- Upload ISO to Proxmox storage
- Create VM and attach ISO as boot device
- Boot VM and follow installation prompts
- Run installation commands manually
- Reboot and remove ISO
### Method 4: Regular Updates
For already-deployed systems, apply configuration updates:
Option A: Using nixos-rebuild from your local machine
```shell
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>
```

Example:

```shell
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
```

Note: This requires an SSH config entry for the proxy (uses port 1005):

```
# ~/.ssh/config
Host 141.56.51.1
  Port 1005
```

Option B: Using auto-generated update scripts
The flake generates convenience scripts for each host:
```shell
nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update
```

These scripts automatically extract the target IP from the configuration.
Option C: Remote execution (no local Nix installation)
If Nix isn’t installed locally, run the command on the target system:
```shell
ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"
```

Replace `proxy` with the appropriate hostname and adjust the IP address.
## Required DNS Records
The following DNS records must be configured for the current infrastructure:
| Name | Type | IP | Service |
|---|---|---|---|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy IPv4 |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | IPv6 Gateway (v6proxy) |
Note: All public services point to the proxy IPs. The IPv4 proxy (141.56.51.1) handles SNI-based routing to backend hosts. The IPv6 gateway (v6proxy at 2a01:4f8:1c19:96f8::1) forwards all IPv6 traffic to the IPv4 proxy. Backend IPs are internal and not exposed in DNS.
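To illustrate what SNI-based routing means in practice, here is a hedged sketch of an HAProxy TCP frontend reading the server name from the TLS ClientHello; this is illustrative only, and the backend names and actual configuration in `hosts/proxy/` will differ:

```nix
# Illustrative sketch, not the repository's actual proxy configuration.
# The git.htw.stura-dresden.de hostname is an assumed example.
services.haproxy = {
  enable = true;
  config = ''
    frontend https_in
      bind 141.56.51.1:443
      mode tcp
      # Wait for the TLS ClientHello so the SNI field can be inspected
      tcp-request inspect-delay 5s
      tcp-request content accept if { req_ssl_hello_type 1 }
      use_backend git_tls if { req.ssl_sni -i git.htw.stura-dresden.de }

    backend git_tls
      mode tcp
      server git 141.56.51.7:443
  '';
};
```

Because routing happens on the unencrypted SNI field in TCP mode, the proxy never terminates TLS; each backend host holds its own certificates, which matches the per-host ACME setup described above.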
Additional services managed by the proxy (not in this repository):
- stura.htw-dresden.de → Plone
- tix.htw.stura-dresden.de → Pretix
- vot.htw.stura-dresden.de → OpenSlides
- mail.htw.stura-dresden.de → Mail server
## Development

### Code Formatting
Format all Nix files using the RFC-style formatter:
```shell
nix fmt
```

### Testing Changes
Before deploying to production:
- Test flake evaluation: `nix flake check`
- Build configurations locally: `nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel`
- Review generated configurations
- Deploy to test systems first if available
### Adding a New Host
Create host directory:
```shell
mkdir hosts/newhostname
```

Create `hosts/newhostname/default.nix`:

```nix
{ config, lib, pkgs, modulesPath, ... }:
{
  imports = [
    "${modulesPath}/virtualisation/proxmox-lxc.nix" # For LXC containers
    # Or for VMs:
    # ./hardware-configuration.nix
  ];

  networking = {
    hostName = "newhostname";
    interfaces.eth0.ipv4.addresses = [{ # or ens18 for VMs
      address = "141.56.51.XXX";
      prefixLength = 24;
    }];
    defaultGateway.address = "141.56.51.254";
    firewall.allowedTCPPorts = [ 80 443 ];
  };

  # Add your services here
  services.nginx.enable = true;
  # ...

  system.stateVersion = "25.11";
}
```

The flake automatically discovers the new host via `builtins.readDir ./hosts`. If the host runs nginx, the proxy automatically adds forwarding rules (you still need to add DNS records).
Deploy:
```shell
nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
```
## Repository Information
- Repository: https://codeberg.org/stura-htw-dresden/stura-infra
- ACME Email: cert@stura.htw-dresden.de
- NixOS Version: 25.11
- Architecture: x86_64-linux
## Flake Inputs
- `nixpkgs`: NixOS 25.11
- `authentik`: Identity provider (nix-community/authentik-nix)
- `mailserver`: Simple NixOS mailserver (nixos-25.11 branch)
- `sops`: Secret management (Mic92/sops-nix)
- `disko`: Declarative disk partitioning
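In `flake.nix` these inputs look roughly like the following (URLs reconstructed from the list above; the mailserver flake reference in particular is an assumption):

```nix
inputs = {
  nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
  authentik.url = "github:nix-community/authentik-nix";
  # Assumed URL: the project is hosted on GitLab, pinned to the nixos-25.11 branch
  mailserver.url = "gitlab:simple-nixos-mailserver/nixos-mailserver/nixos-25.11";
  sops-nix.url = "github:Mic92/sops-nix";
  disko.url = "github:nix-community/disko";
};
```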
## Common Patterns

### Network Configuration
All hosts follow this pattern:
```nix
networking = {
  hostName = "<name>";
  interfaces.<interface>.ipv4.addresses = [{
    address = "<ip>";
    prefixLength = 24;
  }];
  defaultGateway.address = "141.56.51.254";
};
```

- LXC containers use `eth0`
- VMs/bare metal typically use `ens18`
### Nginx + ACME Pattern
For web services:
```nix
services.nginx = {
  enable = true;
  virtualHosts."<fqdn>" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      # service config
    };
  };
};
```

This automatically:
- Integrates with the proxy’s ACME challenge forwarding
- Generates HAProxy backend configuration
- Requests Let’s Encrypt certificates
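For example, a filled-in instance of the pattern for the git host might look like this; the backend port is a placeholder, since Forgejo's actual listen port depends on its configuration:

```nix
services.nginx.virtualHosts."git.htw.stura-dresden.de" = {
  forceSSL = true;
  enableACME = true;
  locations."/" = {
    # Forward to the locally running service; port 3000 is a placeholder
    proxyPass = "http://127.0.0.1:3000";
    proxyWebsockets = true;
  };
};
```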
### Firewall Rules
Hosts only need to allow traffic from the proxy:
```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
```

SSH ports vary:
- Proxy: port 1005 (admin access)
- Other hosts: port 22 (default)
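On the proxy, the non-standard SSH port corresponds to a setting along these lines (a sketch; the actual option may live elsewhere in `hosts/proxy/`):

```nix
# Proxy host only: sshd listens on 1005 instead of the default 22
services.openssh = {
  enable = true;
  ports = [ 1005 ];
};
```

Remember to open the chosen port in the firewall and to mirror it in your local `~/.ssh/config`, as shown under Method 4 above.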