nixos-configuration

Deployment, how does it work?

  1. You commit changes and push them to the main branch.
  2. Gitea triggers the flake-update webhook, which updates the flake inputs of the main branch and pushes the result to the flake-update branch. The flake-update branch is also updated daily.
  3. Hydra picks up the commit and starts building the configurations.
  4. When the advanceGate aggregate job succeeds, a Hydra runcommand hook:
    • advances the hydra-built branch to that commit
    • tries to deploy some hosts using deploy-rs. If an activation script fails or a host becomes unreachable after deployment, all hosts (including the failed one) are rolled back and the administrator is notified of the failure. Otherwise, the production branch advances.
  5. Some hosts (desktops/laptops) cannot be deployed this way because they aren't always running or don't have static addresses. They periodically try to update themselves to the newest version built by Hydra using autoupdate.
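To see how far a change has made it through this pipeline, you can inspect the branches it advances; a quick look from any machine (branch names as above):

git ls-remote https://git.hrnz.li/Ulli/nixos main flake-update hydra-built production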

Also, all hosts can update themselves on demand using sudo update-from-hydra. Using a local checkout, hosts can also be deployed (temporarily) with deploy .#hostname -s; this configuration won't persist, as automatic updates continue.
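For example, a one-off deployment from a checkout might look like this (banana is the example host name used in the installation notes below, and the checkout path is the one mentioned there):

cd /data/ulli/repos/nixos
deploy '.#banana' -s # temporary; the host reverts to Hydra's output on its next automatic update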

Installation

desktop

  1. Boot the latest NixOS installation image
  2. Create the desired disk setup:
sudo -i
export disk=/dev/nvme0n1
export hostname=banana
export encryption=yes

nix-shell -p nixUnstable # this is still necessary as of 2022-04-06
cd /tmp
git clone https://git.hrnz.li/Ulli/nixos
cd nixos

sgdisk -Z $disk
sgdisk -n 0:0:+2G -t 0:ef00 $disk # efi system partition
sgdisk -n 0:0:+10G -t 0:8200 $disk # swap
sgdisk -n 0:0:0 -t 0:8300 $disk # zfs

mkfs.vfat ${disk}p1
mkswap ${disk}p2
swapon ${disk}p2
if [ "x$encryption" = "xyes" ]; then
    zpool create -o ashift=12 -o autotrim=on -O canmount=off -O compression=on -O dnodesize=auto -O normalization=formD -O relatime=on -O acltype=posix -O xattr=sa -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase rpool ${disk}p3
else
    zpool create -o ashift=12 -o autotrim=on -O canmount=off -O compression=on -O dnodesize=auto -O normalization=formD -O relatime=on -O acltype=posix -O xattr=sa rpool ${disk}p3
fi
zfs create -o refreservation=1G -o mountpoint=none -o canmount=off rpool/reserved
zfs create -o mountpoint=none -o canmount=off rpool/local
zfs create -o mountpoint=legacy -o canmount=on rpool/local/root
zfs create -o mountpoint=legacy -o canmount=on -o atime=off rpool/local/nix
zfs create -o mountpoint=legacy -o canmount=on rpool/data
zfs create -o mountpoint=legacy -o canmount=on rpool/persist
zfs snapshot rpool/local/root@blank
mount -t zfs rpool/local/root /mnt
mkdir /mnt/{boot,nix,data,persist}
mount ${disk}p1 /mnt/boot
mount -t zfs rpool/local/nix /mnt/nix
mount -t zfs rpool/data /mnt/data
mount -t zfs rpool/persist /mnt/persist
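Before going on, it doesn't hurt to sanity-check the layout (purely optional):

zfs list -o name,mountpoint,canmount # datasets created above
findmnt -R /mnt # boot, nix, data and persist should all show up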

Now you want to restore /persist (and possibly /data) from your backups. If this is impossible because you, donkey, have lost them all, you need to rekey using your sops master key (you did not lose all copies of that one either, right?); see the secrets section below.

If you don't want to restore from backups (bootstrapping a new host?), you will at the very least need to create some directories:

mkdir -p /mnt/{data,persist}/ulli
mkdir -p /mnt/persist/ulli/{.config,.local,.cache}
mkdir -p /mnt/persist/ssh # otherwise generation of ssh host keys will fail
mkdir -p /mnt/persist/ulli/.local/share/mozilla # needed for home-manager activation (.mozilla is a mixture of persistent state and declarative config)
chown -R 1000 /mnt/{data,persist}/ulli
  3. Install the system.

Before installing, you might want to temporarily enable the option hrnz.graphical.minimal to speed up the installation (this will exclude stuff like texlive for now).

nixos-generate-config --root /mnt --show-hardware-config > hosts/${hostname}/hardware-configuration.nix
git add -N hosts/${hostname}/hardware-configuration.nix
ulimit -n 50000
nixos-install --flake .#${hostname}

After installing, you might want to save the changes made to the configuration: either commit and push, or move the repository to the installed system (most likely to /data/ulli/repos/nixos).
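One way to do the latter from the installer, before rebooting (paths as used above):

mkdir -p /mnt/data/ulli/repos
mv /tmp/nixos /mnt/data/ulli/repos/nixos
chown -R 1000 /mnt/data/ulli/repos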

Provision a new Hetzner cloud server

Create a new Ubuntu 20.04 server, select your SSH key, and paste the following into the user data field:

#!/bin/sh
curl https://raw.githubusercontent.com/elitak/nixos-infect/master/nixos-infect | NIX_CHANNEL=nixos-21.05 bash 2>&1 | tee /tmp/infect.log

NixOS will be installed automatically and the server should boot into NixOS.

Meanwhile, copy the host configuration of another Hetzner server (pitaya), remove all the specialized profiles, and adjust the IP addresses and interface names.
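A rough sketch of that step (the new host name papaya and the default.nix layout are assumptions):

cp -r hosts/pitaya hosts/papaya
$EDITOR hosts/papaya/default.nix # drop specialized profiles, adjust IPs and interface names
git add hosts/papaya

Then deploy with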

deploy '.#pitaya' -s --auto-rollback false --hostname 162.55.56.196 --ssh-user root

(Adjust the hostname and IP address accordingly.) Then:

  • Add DNS records for the host
  • Fill out host-registry.nix

secrets

Add the output of

nix run nixos#ssh-to-age -- -i /persist/ssh/ssh_host_ed25519_key.pub
age1kaquplfm5quvz5yt9fjmjr7hjrx3tgt5ak8qn70grnze882nlchqlvgf7w

to .sops.yaml
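A minimal sketch of the whole step (the anchor name and the secret path are hypothetical; the exact layout of .sops.yaml is assumed):

# .sops.yaml: register the new host key, e.g.
# - &banana age1kaquplfm5quvz5yt9fjmjr7hjrx3tgt5ak8qn70grnze882nlchqlvgf7w
# then reference &banana in the relevant creation_rules and re-encrypt:
sops updatekeys secrets-all/example.yaml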