Development machines made easy

From laziness to awesomeness

Let’s dig into how I create and manage development machines: the why and the how.

For work, I sometimes need to create and use development machines to hack on a specific case. There are multiple reasons I would create those machines instead of working directly on my laptop or desktop. Let’s look at them, and at how I try to automate the hell out of it (because I am really lazy).

Use case

Let’s look at some use cases that are useful to me:

  • Create and/or test packages for a specific distribution — most likely RPM-based (Fedora, RHEL, …) and Debian-based (Ubuntu, Debian, …).
  • From-scratch machines,
    • to make sure documentation is complete enough for people to start hacking on a project, or using a tool.
    • to make demos, recordings and the like 👼
  • Cluster machines for Kubernetes or OpenShift.

Some requirements and nice-to-have:

  • Automate provisioning of these machines.
  • Use virtual machines for most cases (libvirt, qemu+kvm).
  • Automatic updates of the “provisioning”.

Targeted systems are, for now:

  • Fedora, RHEL
  • Debian, Ubuntu
  • NixOS

Base images

I initially wanted to use packer, and it has its uses. But I mainly need to build images for virtual machines, and it’s even simpler than that: qemu-based virtual machines (using libvirt). For those cases, there are simpler solutions:

TODO virt-builder
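
A sketch of what this can look like for the Fedora images; the image name, output path, size and SSH key are placeholder values:

  # Build a Fedora base image with virt-builder (from libguestfs-tools).
  # Image name, output path, size and key path are placeholders.
  virt-builder fedora-35 \
    --output /var/lib/libvirt/images/fedora-dev.qcow2 \
    --format qcow2 \
    --size 20G \
    --hostname fedora-dev.home \
    --ssh-inject root:file:$HOME/.ssh/ \
    --update \
    --selinux-relabel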

TODO NixOS nixos-generators
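
For NixOS, nixos-generators can produce a qcow2 image directly from a configuration; a minimal sketch, assuming a configuration.nix sits next to it:

  # Build a qcow2 image from a NixOS configuration with nixos-generators.
  nixos-generate -f qcow -c ./configuration.nix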

TODO Provisioning

Now that we have base images, we can start to play around, most likely with ansible, to easily and quickly provision setups based on them. Those setups can include (see the sketch after this list):

  • development environment
    • Nix-based (aka NixOS, or Nixpkgs on other systems)
    • Native (aka no Nix), e.g. Fedora
  • multiple node testing environment
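
A sketch of what applying such a setup could look like; the inventory and playbook names are hypothetical:

  # Apply the development-environment playbook to a freshly created machine.
  # Inventory and playbook names are made up for illustration.
  ansible-playbook -i inventory/home dev-machine.yml --limit fedora-dev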

TODO Development machine

There is currently only one: my main development machine, used to hack on container and orchestration tooling, mainly in Go.

  • It doesn’t need to be a “workstation”; it can be a server. I just need a headless Fedora on which I can install stuff and run services.
  • It needs to be able to run minikube or crc, so nested virtualization is required.
  • It needs to require the least effort to reset/recreate.
  • It needs to always have the same IP on the network (meaning the same MAC address); see the sketch after this list.
  • It needs to be “updatable” through provisioning (aka I change the provisioning part, apply, and go!)
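
Two small bits that back these requirements: verifying that nested virtualization is available on the host, and pinning the MAC address to a fixed IP in the libvirt network. The MAC and IP below are placeholder values:

  # Check that nested virtualization is enabled on the host
  # (use kvm_amd for AMD CPUs); "Y" or "1" means it is on.
  cat /sys/module/kvm_intel/parameters/nested

  # Pin a fixed IP to the VM's MAC address in the libvirt default network.
  virsh net-update default add ip-dhcp-host \
    "<host mac='52:54:00:aa:bb:cc' name='fedora-dev' ip=''/>" \
    --live --config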

TODO Create the virtual machine

I want to re-use a disk for /home (or at least $HOME/src) so that I can just re-attach it later on. We are assuming we have a working base image here; see virt-builder above.
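
Creating the machine could then look like the following sketch; disk paths, sizing and the MAC address are placeholder values chosen to match the requirements above:

  # Import the base image and attach a separate, reusable disk for /home.
  # Paths, sizes and the MAC address are placeholders.
  virt-install \
    --connect qemu+tcp://wakasu.home:16509/system \
    --name fedora-dev \
    --memory 8192 --vcpus 4 \
    --disk /var/lib/libvirt/images/fedora-dev.qcow2 \
    --disk /var/lib/libvirt/images/fedora-dev-home.qcow2 \
    --network network=default,mac=52:54:00:aa:bb:cc \
    --os-variant fedora35 \
    --import --noautoconsole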

Once created, the machine shows up on the remote libvirt host:

  $ virsh --connect=qemu+tcp://wakasu.home:16509/system list
   Id   Name         State
  -----------------------------
   2    fedora-dev   running

TODO Base system

  • zsh
  • bash
  • exa
  • htop
  • ssh/sshd
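
On Fedora, getting these in place can be as simple as:

  # Install the base tools; openssh-server provides sshd.
  sudo dnf install -y zsh bash exa htop openssh-server
  sudo systemctl enable --now sshd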

TODO Virtualization tooling

  • nested virt => @virtualization, libvirt-devel
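
Which, on Fedora, translates to something like:

  # Install the virtualization package group and the libvirt headers.
  sudo dnf install -y @virtualization libvirt-devel
  sudo systemctl enable --now libvirtd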

TODO Containers tooling

  • podman-docker
  • buildah
  • skopeo
  • ko
  • kubectl
  • google-cloud-sdk
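
The first three are packaged in Fedora; ko is distributed as a Go module, while kubectl and google-cloud-sdk come from their own repositories. A sketch for the first part:

  # Container tooling from the Fedora repositories.
  sudo dnf install -y podman-docker buildah skopeo
  # ko, assuming Go is already installed on the machine.
  go install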

TODO Developer tooling

TODO Nix setup ?
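
For the non-NixOS machines, Nix can be installed on top of Fedora with the upstream multi-user installer:

  # Multi-user Nix installation (sets up the nix daemon and build users).
  sh <(curl -L --daemon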

TODO Kubernetes cluster
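
For a single-node cluster on this machine, minikube with the kvm2 driver is the likely starting point; resource sizes below are placeholders:

  # Start a local Kubernetes cluster with the kvm2 driver, relying on the
  # nested virtualization enabled earlier.
  minikube start --driver=kvm2 --cpus=4 --memory=8g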

For the NixOS images, let’s use packer with qemu, and let’s create a repository where we’re going to write the development machine recipes: vdemeester/machines.

NixOS recipes

The initial source of packer recipes comes from nix-community/nixbox, but I’m tailoring them to my needs.

  {
    "builders": [
      {
        "type": "qemu",
        "boot_wait": "40s",
        "boot_command": [
          "echo http://{{ .HTTPIP }}:{{ .HTTPPort }} > .packer_http<enter>",
          "mkdir -m 0700 .ssh<enter>",
          "curl $(cat .packer_http)/ > .ssh/authorized_keys<enter>",
          "systemctl start sshd<enter>"
        ],
        "http_directory": "scripts",
        "iso_url": "",
        "iso_checksum": "cc7c399c5fe4672383fe54cb1d648854a0d6732765fe1a61bb38b3fe3b7c6d2f",
        "iso_checksum_type": "sha256",
        "shutdown_command": "shutdown -h now",
        "ssh_private_key_file": "./scripts/install_rsa",
        "ssh_port": 22,
        "ssh_username": "root",
        "disk_interface": "virtio-scsi",
        "qemuargs": []
      }
    ],
    "provisioners": [
      {
        "type": "shell",
        "script": "./scripts/"
      }
    ]
  }

Let’s look at the provisioning script. We don’t want to create a full specific configuration for these images as we will use ansible for the final provisioning.

  • scripts/

    #!/bin/sh -e
    packer_http=$(cat .packer_http)

    # Partition disk: one primary partition spanning the whole disk
    # (answers to fdisk: new partition, defaults, then write)
    cat <<FDISK | fdisk /dev/sda
    n
    p
    1


    w
    FDISK

    # Create filesystem
    mkfs.ext4 -j -L nixos /dev/sda1

    # Mount filesystem
    mount LABEL=nixos /mnt

    # Setup system
    nixos-generate-config --root /mnt
    curl -sf "$packer_http/machine.nix" > /mnt/etc/nixos/machine.nix
    curl -sf "$packer_http/builders/$PACKER_BUILDER_TYPE.nix" > /mnt/etc/nixos/hardware-builder.nix
    curl -sf "$packer_http/configuration.nix" > /mnt/etc/nixos/configuration.nix
    curl -sf "$packer_http/custom-configuration.nix" > /mnt/etc/nixos/custom-configuration.nix

    ### Install ###
    nixos-install

    ### Cleanup ###
    # fetch and run the cleanup steps
    curl "$packer_http/" | sh
  • scripts/

    #!/bin/sh -e

    # Make sure we are totally up to date
    nix-channel --add nixos
    nix-channel --update
    nixos-rebuild switch --upgrade

    # Cleanup any previous generations and delete old packages that can be
    # pruned.
    for x in $(seq 0 2); do
      nix-env --delete-generations old
      nix-collect-garbage -d
    done

    # Remove install ssh key
    rm -rf /root/.ssh /root/.packer_http

    # Zero out the disk (for better compression); dd exits non-zero when the
    # disk is full, so don't let that abort the script.
    dd if=/dev/zero of=/EMPTY bs=1M || true
    rm -f /EMPTY
  • scripts/machine.nix

    # This file is overwritten by the vagrant-nixos plugin
    { config, pkgs, ... }:
    {
      networking.hostName = "nixos-machine";
    }
  • scripts/configuration.nix

    { config, pkgs, ... }:
    {
      imports =
        [ # Include the results of the hardware scan.
          ./hardware-configuration.nix
          ./hardware-builder.nix
          ./machine.nix
          ./custom-configuration.nix
        ];

      # Use the GRUB 2 boot loader.
      boot.loader.grub.enable = true;
      boot.loader.grub.version = 2;
      boot.loader.grub.device = "/dev/sda";

      # Remove the fsck that runs at startup. It will always fail to run,
      # stopping your boot until you press *.
      boot.initrd.checkJournalingFS = false;

      # Services to enable:
      # Enable the OpenSSH daemon.
      services.openssh.enable = true;
      # Enable DBus
      services.dbus.enable = true;
      # Replace ntpd by timesyncd
      services.timesyncd.enable = true;

      # Packages for Vagrant
      environment.systemPackages = with pkgs; [ ];

      # Creates a "vincent" user with password-less sudo access
      users = {
        extraGroups = [ { name = "vincent"; } ];
        extraUsers = [
          # Try to avoid asking for a password
          { name = "root"; password = "vincent"; }
          {
            description     = "Vincent User";
            name            = "vincent";
            group           = "vincent";
            extraGroups     = [ "users" "wheel" ];
            password        = "vincent";
            home            = "/home/vincent";
            createHome      = true;
            useDefaultShell = true;
            openssh.authorizedKeys.keys = [
              "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDO1sx5h44xnK/k0ODnQ3aQR8+nr7HC7u94fS3OhwQ6AvjqDGLnI6EP4sr4Yh2eXf8lHX+lkg8iZ6Z+y9dVnnzwveZfqbfOyh6t8Hg+M1nl26rwdYv+guU8khvh+Kzl9Vdb5dexf/hWQ/LcWvsuPO+tBmqajNTLYbGinqrMm3Bw2jJS/+DitgoT8hiuSTU1smY1CGzggHEdsx4+oDMuDMvRYwOBBHrUF00lZLx3zB3nGl1VFYD2St3vzlmzoZNrW7Rx8TRg02BTVAwd4qPHOMz8Kg+JmDhVig9yeqHo4FCwXxQ8+jk54Cd2el6TjfaA5HD2+e4FYLP6bMSLIabLTfLP vincent@wakasu"
            ];
          }
        ];
      };

      security.sudo.configFile =
        ''
          Defaults:root,%wheel env_keep+=LOCALE_ARCHIVE
          Defaults:root,%wheel env_keep+=NIX_PATH
          Defaults:root,%wheel env_keep+=TERMINFO_DIRS
          Defaults env_keep+=SSH_AUTH_SOCK
          Defaults lecture = never
          root   ALL=(ALL) SETENV: ALL
          %wheel ALL=(ALL) NOPASSWD: ALL, SETENV: ALL
        '';
    }
  • scripts/custom-configuration.nix

    { config, pkgs, ... }:
    # Place here any custom configuration specific to your organisation (locale, ...)
    # if you want it to be part of the packer base image to be used with vagrant.
    {
    }
  • scripts/builders/qemu.nix

    { modulesPath, ... }:
    {
      imports = [
        "${toString modulesPath}/profiles/qemu-guest.nix"
      ];
    }

And to build this image, all that is required is a simple packer build nixos.json.