# StuRa HTW Dresden Infrastructure - NixOS Configuration

Declarative infrastructure management for StuRa HTW Dresden using NixOS and a flake-based configuration. This repository replaces the hand-configured FreeBSD relay system with a modern, reproducible infrastructure.

## Architecture

### Overview

This infrastructure uses a flake-based approach with automatic host discovery:

- **Centralized reverse proxy**: HAProxy at 141.56.51.1 routes all traffic via SNI inspection and HTTP `Host` headers
- **IPv6 gateway**: Hetzner VPS at 2a01:4f8:1c19:96f8::1 forwards IPv6 traffic to the IPv4 proxy
- **Automatic host discovery**: each subdirectory in `hosts/` becomes a NixOS configuration via `builtins.readDir`
- **Global configuration**: settings in `default.nix` are applied to all hosts automatically
- **ACME certificates**: all services use Let’s Encrypt certificates managed locally on each host
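
The auto-discovery described above can be sketched roughly as follows. This is a simplified illustration, not the repository’s actual `flake.nix`; the attribute names and module layout are assumptions.

```nix
# Hypothetical sketch of hosts/ auto-discovery (simplified; names are assumptions)
{
  outputs = { self, nixpkgs, ... }:
    let
      # Every directory under hosts/ becomes a NixOS configuration
      hostNames = builtins.attrNames
        (nixpkgs.lib.filterAttrs (_: type: type == "directory")
          (builtins.readDir ./hosts));
    in {
      nixosConfigurations = nixpkgs.lib.genAttrs hostNames (name:
        nixpkgs.lib.nixosSystem {
          system = "x86_64-linux";
          modules = [
            ./default.nix     # Global settings applied to all hosts
            ./hosts/${name}   # Host-specific configuration
          ];
        });
    };
}
```

With this shape, adding a directory under `hosts/` is all it takes to register a new host; no flake changes are needed.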

### Network

- Network: 141.56.51.0/24
- Gateway: 141.56.51.254
- DNS: 141.56.1.1, 141.56.1.2 (HTW internal)
- Domain: htw.stura-dresden.de

## Repository Structure

```text
stura-infra/
├── flake.nix              # Main flake configuration with auto-discovery
├── default.nix            # Global settings applied to all hosts
├── hosts/                 # Host-specific configurations
│   ├── proxy/             # Central reverse proxy (HAProxy)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── v6proxy/           # IPv6 gateway (Hetzner VPS)
│   │   ├── default.nix
│   │   ├── hardware-configuration.nix
│   │   ├── hetzner-disk.nix
│   │   └── README.md
│   ├── git/               # Forgejo git server
│   │   └── default.nix
│   ├── wiki/              # MediaWiki instance
│   │   └── default.nix
│   ├── nextcloud/         # Nextcloud instance
│   │   └── default.nix
│   └── redmine/           # Redmine project management
│       └── default.nix
└── README.md              # This file
```

## Host Overview

| Host | IP | Type | Services | Documentation |
|------|----|------|----------|---------------|
| proxy | 141.56.51.1 | VM | HAProxy, SSH Jump | `hosts/proxy/` |
| v6proxy | 178.104.18.93 (IPv4), 2a01:4f8:1c19:96f8::1 (IPv6) | Hetzner VPS | HAProxy (IPv6 gateway) | `hosts/v6proxy/` |
| git | 141.56.51.7 | LXC | Forgejo, Nginx | `hosts/git/` |
| wiki | 141.56.51.13 | LXC | MediaWiki, MariaDB, Apache | `hosts/wiki/` |
| redmine | 141.56.51.15 | LXC | Redmine, Nginx | `hosts/redmine/` |
| nextcloud | 141.56.51.16 | LXC | Nextcloud, PostgreSQL, Redis, Nginx | `hosts/nextcloud/` |

## Deployment Methods

### Method 1: Fresh Installation with nixos-anywhere

Use nixos-anywhere for initial system installation. It handles disk partitioning (via disko) and bootstrapping automatically.

For VM hosts (proxy):

```shell
nix run github:nix-community/nixos-anywhere -- --flake .#proxy --target-host root@141.56.51.1
```

For LXC containers (git, wiki, redmine, nextcloud):

```shell
nix run github:nix-community/nixos-anywhere -- --flake .#git --target-host root@141.56.51.7
```

This method is ideal for:

- First-time installation on bare metal or fresh VMs
- Complete system rebuilds
- Migration to new hardware
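
For context, the disko layout that nixos-anywhere applies (e.g. `hetzner-disk.nix`) might look roughly like the sketch below. The device path and partition sizes here are assumptions for illustration, not the repository’s actual values.

```nix
# Hypothetical sketch of a disko layout such as hetzner-disk.nix
# (device path and sizes are assumptions)
{
  disko.devices.disk.main = {
    device = "/dev/sda";
    type = "disk";
    content = {
      type = "gpt";
      partitions = {
        boot = {
          size = "1M";
          type = "EF02";  # BIOS boot partition for GRUB
        };
        root = {
          size = "100%";
          content = {
            type = "filesystem";
            format = "ext4";
            mountpoint = "/";
          };
        };
      };
    };
  };
}
```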

### Method 2: Container Tarball Deployment to Proxmox

Build and deploy LXC container tarballs for the git, wiki, redmine, and nextcloud hosts.

**Step 1: Build the container tarball locally**

```shell
nix build .#containers-git
# Result will be in result/tarball/nixos-system-x86_64-linux.tar.xz
```

**Step 2: Copy it to the Proxmox host**

```shell
scp result/tarball/nixos-system-x86_64-linux.tar.xz root@proxmox-host:/var/lib/vz/template/cache/
```

**Step 3: Create the container on Proxmox**

```shell
# Example for the git host (container ID 107, adjust as needed)
pct create 107 /var/lib/vz/template/cache/nixos-system-x86_64-linux.tar.xz \
  --hostname git \
  --net0 name=eth0,bridge=vmbr0,ip=141.56.51.7/24,gw=141.56.51.254 \
  --memory 2048 \
  --cores 2 \
  --rootfs local-lvm:8 \
  --unprivileged 1 \
  --features nesting=1

# Configure additional storage and settings via the Proxmox web interface if needed
```

**Step 4: Start the container**

```shell
pct start 107
```

**Step 5: Post-deployment configuration**

- Access the container: `pct enter 107`
- Follow the host-specific post-deployment steps in each host’s README.md

Available container tarballs:

```shell
nix build .#containers-git
nix build .#containers-wiki
nix build .#containers-redmine
nix build .#containers-nextcloud
```

**Note:** The proxy host is a full VM and has no container tarball. Use Method 1 or 3 for proxy deployment.

### Method 3: ISO Installer

Build a bootable ISO installer for manual installation on VMs or bare metal.

Build the ISO:

```shell
nix build .#installer-iso
# Result will be in result/iso/nixos-*.iso
```

Build a VM for testing:

```shell
nix build .#installer-vm
```

Deployment:

1. Upload the ISO to Proxmox storage
2. Create a VM and attach the ISO as the boot device
3. Boot the VM and follow the installation prompts
4. Run the installation commands manually
5. Reboot and remove the ISO

### Method 4: Regular Updates

For already-deployed systems, apply configuration updates with one of the following options.

**Option A: Using `nixos-rebuild` from your local machine**

```shell
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip>
```

Example:

```shell
nixos-rebuild switch --flake .#proxy --target-host root@141.56.51.1
```

**Note:** This requires an SSH config entry for the proxy (which uses port 1005):

```text
# ~/.ssh/config
Host 141.56.51.1
    Port 1005
```
**Option B: Using auto-generated update scripts**

The flake generates a convenience script for each host:

```shell
nix run .#git-update
nix run .#wiki-update
nix run .#redmine-update
nix run .#nextcloud-update
nix run .#proxy-update
```

These scripts extract the target IP from the host configuration automatically.
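
One plausible way such scripts could be generated is sketched below. This is an assumption about the mechanism, not the flake’s actual code; `pkgs`, the `eth0` attribute path, and the app shape are all illustrative.

```nix
# Hypothetical sketch: derive a <host>-update app from each host's configured IP
# (pkgs, self, and the attribute path are assumptions)
nixpkgs.lib.mapAttrs' (name: host:
  nixpkgs.lib.nameValuePair "${name}-update" {
    type = "app";
    program = nixpkgs.lib.getExe (pkgs.writeShellApplication {
      name = "${name}-update";
      text = ''
        # Target IP is read from the host's own network configuration
        exec nixos-rebuild switch --flake .#${name} \
          --target-host root@${(builtins.head
            host.config.networking.interfaces.eth0.ipv4.addresses).address}
      '';
    });
  }) self.nixosConfigurations
```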

**Option C: Remote execution (no local Nix installation)**

If Nix isn’t installed locally, run the command on the target system itself:

```shell
ssh root@141.56.51.1 "nixos-rebuild switch --flake git+https://codeberg.org/stura-htw-dresden/stura-infra#proxy"
```

Replace `proxy` with the appropriate hostname and adjust the IP address.

## Required DNS Records

The following DNS records must be configured for the current infrastructure:

| Name | Type | Value | Service |
|------|------|-------|---------|
| *.htw.stura-dresden.de | CNAME | proxy.htw.stura-dresden.de | Reverse proxy |
| proxy.htw.stura-dresden.de | A | 141.56.51.1 | Proxy IPv4 |
| proxy.htw.stura-dresden.de | AAAA | 2a01:4f8:1c19:96f8::1 | IPv6 gateway (v6proxy) |

**Note:** All public services point to the proxy IPs. The IPv4 proxy (141.56.51.1) handles SNI-based routing to the backend hosts, and the IPv6 gateway (v6proxy at 2a01:4f8:1c19:96f8::1) forwards all IPv6 traffic to the IPv4 proxy. Backend IPs are internal and not exposed in DNS.
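
For illustration, SNI-based TCP routing on the proxy might look like the following HAProxy fragment. The frontend and backend names are invented for this sketch; the actual configuration is generated by the proxy host’s Nix module.

```text
# Hypothetical HAProxy fragment illustrating SNI inspection (not the actual config)
frontend https_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend git_https  if { req_ssl_sni -i git.htw.stura-dresden.de }
    use_backend wiki_https if { req_ssl_sni -i wiki.htw.stura-dresden.de }

backend git_https
    mode tcp
    server git 141.56.51.7:443
```

Because routing happens in TCP mode on the SNI field, TLS terminates on the backend hosts, which is why each host manages its own ACME certificates.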

Additional services managed by the proxy (not in this repository):

- stura.htw-dresden.de → Plone
- tix.htw.stura-dresden.de → Pretix
- vot.htw.stura-dresden.de → OpenSlides
- mail.htw.stura-dresden.de → Mail server

## Development

### Code Formatting

Format all Nix files using the RFC-style formatter:

```shell
nix fmt
```

### Testing Changes

Before deploying to production:

1. Test flake evaluation: `nix flake check`
2. Build configurations locally: `nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel`
3. Review the generated configurations
4. Deploy to test systems first, if available

### Adding a New Host

1. Create the host directory:

   ```shell
   mkdir hosts/newhostname
   ```

2. Create `hosts/newhostname/default.nix`:

   ```nix
   { config, lib, pkgs, modulesPath, ... }:
   {
     imports = [
       "${modulesPath}/virtualisation/proxmox-lxc.nix"  # For LXC containers
       # Or for VMs:
       # ./hardware-configuration.nix
     ];

     networking = {
       hostName = "newhostname";
       interfaces.eth0.ipv4.addresses = [{  # or ens18 for VMs
         address = "141.56.51.XXX";
         prefixLength = 24;
       }];
       defaultGateway.address = "141.56.51.254";
       firewall.allowedTCPPorts = [ 80 443 ];
     };

     # Add your services here
     services.nginx.enable = true;
     # ...

     system.stateVersion = "25.11";
   }
   ```

3. The flake automatically discovers the new host via `builtins.readDir ./hosts`.

4. If the host runs nginx, the proxy automatically adds forwarding rules (you still need to add the DNS records).

5. Deploy:

   ```shell
   nix run github:nix-community/nixos-anywhere -- --flake .#newhostname --target-host root@141.56.51.XXX
   ```

## Repository Information

### Flake Inputs

- **nixpkgs**: NixOS 25.11
- **authentik**: identity provider (nix-community/authentik-nix)
- **mailserver**: Simple NixOS Mailserver (nixos-25.11 branch)
- **sops**: secret management (Mic92/sops-nix)
- **disko**: declarative disk partitioning

## Common Patterns

### Network Configuration

All hosts follow this pattern:

```nix
networking = {
  hostName = "<name>";
  interfaces.<interface>.ipv4.addresses = [{
    address = "<ip>";
    prefixLength = 24;
  }];
  defaultGateway.address = "141.56.51.254";
};
```

- LXC containers use `eth0`
- VMs/bare metal typically use `ens18`

### Nginx + ACME Pattern

For web services:

```nix
services.nginx = {
  enable = true;
  virtualHosts."<fqdn>" = {
    forceSSL = true;
    enableACME = true;
    locations."/" = {
      # service config
    };
  };
};
```

This automatically:

- Integrates with the proxy’s ACME challenge forwarding
- Generates the HAProxy backend configuration
- Requests Let’s Encrypt certificates

### Firewall Rules

Hosts only need to allow HTTP/HTTPS traffic from the proxy:

```nix
networking.firewall.allowedTCPPorts = [ 80 443 ];
```

SSH ports vary:

- Proxy: port 1005 (admin access)
- Other hosts: port 22 (default)
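
If one wanted to restrict these ports to the proxy specifically rather than opening them network-wide, an nftables-based variant could look like the sketch below. This is an assumption, not the repository’s current configuration, and it requires the nftables firewall backend.

```nix
# Hypothetical tightening (assumption): accept 80/443 only from the proxy's IP
networking.nftables.enable = true;
networking.firewall.extraInputRules = ''
  ip saddr 141.56.51.1 tcp dport { 80, 443 } accept
'';
```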