Proxmox



 ███████████  ███████████      ███████    █████ █████ ██████   ██████    ███████    █████ █████
░░███░░░░░███░░███░░░░░███   ███░░░░░███ ░░███ ░░███ ░░██████ ██████   ███░░░░░███ ░░███ ░░███ 
 ░███    ░███ ░███    ░███  ███     ░░███ ░░███ ███   ░███░█████░███  ███     ░░███ ░░███ ███  
 ░██████████  ░██████████  ░███      ░███  ░░█████    ░███░░███ ░███ ░███      ░███  ░░█████   
 ░███░░░░░░   ░███░░░░░███ ░███      ░███   ███░███   ░███ ░░░  ░███ ░███      ░███   ███░███  
 ░███         ░███    ░███ ░░███     ███   ███ ░░███  ░███      ░███ ░░███     ███   ███ ░░███ 
 █████        █████   █████ ░░░███████░   █████ █████ █████     █████ ░░░███████░   █████ █████
░░░░░        ░░░░░   ░░░░░    ░░░░░░░    ░░░░░ ░░░░░ ░░░░░     ░░░░░    ░░░░░░░    ░░░░░ ░░░░░ 

Small Beginnings

This past year I have been working on my home-lab, and a few months ago I deployed my Proxmox Virtual Environment (PVE). During my long and challenging journey of getting PCIe passthrough via IOMMU to work correctly, I wanted to explore Proxmox qm to start automating my virtual machine development environment. qm, the QEMU/KVM Virtual Machine Manager, is the main CLI tool for interacting with your PVE outside of the GUI. Paired with some basic Bash scripting, it becomes a powerful automation tool that ensures all my VMs are deployed in the same state. They all get the same tricky PCIe passthrough configuration and my most common packages, so they are ready for me right on startup. This was my first step into basic IaC, and I wanted to share my journey so far, as well as my future goals for what I want to learn moving forward.

Why qm and Bash?

I chose qm and Bash because qm is a first-party tool designed for Proxmox that comes preinstalled once your PVE is configured, and because Bash is simple, well suited to chaining CLI commands, and likewise preinstalled (no extra packages like Python). Less setup work, and a great chance to learn more about Bash.

The Script

Note

This post just goes over the general script workflow. I am still learning the best ways to approach certain tasks, so this script might evolve and change over time; it is more of a snapshot of the MVP I was able to put together.

.env

First, we create a .env file to set our parameters; this is a temporary file that can be removed after the VM is created.

# .env Example:
#
# DEV_VM_CIUSER=
# DEV_VM_CIPASSWORD=
#
# DEV_VM_ID=
# DEV_VM_NAME=
#
# DEV_VM_UNAME_R=
# DEV_VM_PCI_GPU=
#
# NVIDIA_CONTAINER_TOOLKIT_VERSION=
# TAILSCALE_AUTH_KEY=
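
With the variables defined, the script just needs to load them. Here is a minimal sketch (assuming the .env file sits next to the script) that uses set -a so everything the file defines is exported to the commands the script runs:

#!/usr/bin/env bash
set -euo pipefail

# Load the .env file and export every variable it defines
set -a
source ./.env
set +a

echo "Provisioning VM ${DEV_VM_ID} (${DEV_VM_NAME})"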

Long-term, I would like all the secrets to be stored in a secrets manager (such as HashiCorp Vault) and fetched at runtime as my home-lab evolves.

qm set commands

We then use a series of qm commands to create the VM and set all of its modifiers:

Category            Setting / Change
------------------  -----------------------------------------------------------------------------
VM Creation         Create VM $DEV_VM_ID, network virtio,bridge=vmbr0
Disk / Cloud Image  Import Debian 12 cloud image → local-lvm:vm-$DEV_VM_ID-disk-0 (scsi0)
Attach Drive        Add Cloud-Init drive on local-lvm:cloudinit (scsi1)
Add Storage         Resize scsi0 by +6G
Boot Order          Boot from scsi0 (boot order c)
Firmware/UEFI       OVMF (UEFI), EFI disk 4M, Secure Boot disabled (pre-enrolled-keys=0)
CPU                 1 socket × 2 cores, --cpu host
Memory              8192 MB RAM, ballooning disabled
Chipset/BIOS        Machine type q35, BIOS ovmf
GPU                 PCIe passthrough of GPU $DEV_VM_PCI_GPU (hostpci0)
Integration         QEMU guest agent enabled
Console             Serial console on serial0=socket
Networking          VirtIO NIC bridged to vmbr0, IP via DHCP (Cloud-Init)
Provisioning        Cloud-Init drive handles credentials & config (Proxmox ciuser/cipassword skipped)
Optional            Commented-out steps: convert to template, clone, destroy VM

Here, it’s good to note that I switched from traditional Linux ISOs to cloud-init images. These images are the industry standard for public cloud platforms, and they make the configuration process much easier by setting parameters such as the username/password and hostname before the VM is even deployed.
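
For illustration, here is a rough sketch of the qm commands the table summarizes. It is an approximation of my script, not a verbatim copy, and it assumes the Debian 12 cloud image has already been downloaded to the host as debian-12-genericcloud-amd64.qcow2 (a placeholder filename):

# Create the VM shell: q35 + OVMF, host CPU, 8 GB RAM with ballooning off
qm create $DEV_VM_ID --name $DEV_VM_NAME --machine q35 --bios ovmf \
    --sockets 1 --cores 2 --cpu host --memory 8192 --balloon 0 \
    --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
    --serial0 socket --agent enabled=1
# EFI vars disk with Secure Boot keys not pre-enrolled
qm set $DEV_VM_ID --efidisk0 local-lvm:0,efitype=4m,pre-enrolled-keys=0
# Import the cloud image and attach it as scsi0, Cloud-Init drive as scsi1
qm importdisk $DEV_VM_ID debian-12-genericcloud-amd64.qcow2 local-lvm
qm set $DEV_VM_ID --scsi0 local-lvm:vm-$DEV_VM_ID-disk-0
qm set $DEV_VM_ID --scsi1 local-lvm:cloudinit
# Grow the root disk, set the boot disk, pass through the GPU, DHCP networking
qm resize $DEV_VM_ID scsi0 +6G
qm set $DEV_VM_ID --boot c --bootdisk scsi0
qm set $DEV_VM_ID --hostpci0 $DEV_VM_PCI_GPU,pcie=1
qm set $DEV_VM_ID --ipconfig0 ip=dhcp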

cloud-init script

cloud-init configuration is stored in YAML files, but we can define the details inline in our Bash script and generate the YAML file when the script runs.

This heredoc writes our cloud-init file into the Proxmox snippets directory (the EOF delimiter is unquoted, so our variables are expanded as the file is written):

# Create a cloud-init user configuration file to set up the VM on first boot
cat > /var/lib/vz/snippets/${DEV_VM_ID}-user.yaml <<EOF

The file starts with the standard cloud-config header:

#cloud-config

hostname: $DEV_VM_NAME

# Configure Host
manage_etc_hosts: true
package_update: true

We install cmatrix as a fun package-install test:

# Install additional packages
packages: [cmatrix]

We then create our user and give it passwordless sudo:

users:
  - name: ${DEV_VM_CIUSER}
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: sudo
    lock_passwd: false
    shell: /bin/bash
chpasswd:
  list: |
    ${DEV_VM_CIUSER}:${DEV_VM_CIPASSWORD}
  expire: false
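
A quick design note: chpasswd here embeds the plain-text password in the snippet file. cloud-init also accepts pre-hashed values in that list, so one hardening option (an idea, not something this script does yet) is to hash the password on the Proxmox host first and store the hash in DEV_VM_CIPASSWORD:

# Generate a SHA-512 crypt hash to store in DEV_VM_CIPASSWORD instead of plain text
openssl passwd -6 'correct horse battery staple'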

We then have to update the APT sources list:

# Update the APT sources to use the bookworm repositories. More info: https://wiki.debian.org/SourcesList
write_files:
  - path: /etc/apt/sources.list
    # Insert "contrib non-free non-free-firmware" here to match your APT file
Long story short: since we configured PCIe passthrough via IOMMU and I am using an NVIDIA GPU, installing the GPU drivers requires adding the contrib, non-free, and non-free-firmware components, because the official NVIDIA GPU drivers are proprietary and not shipped in Debian's default main repository.
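
For reference, a filled-in version of that stanza might look like the following, assuming the stock Debian 12 (bookworm) mirrors; adjust the lines to match your own sources.list:

write_files:
  - path: /etc/apt/sources.list
    content: |
      # Stock bookworm mirrors with the contrib/non-free components enabled
      deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
      deb http://deb.debian.org/debian bookworm-updates main contrib non-free non-free-firmware
      deb http://security.debian.org/debian-security bookworm-security main contrib non-free non-free-firmware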

Note: here I had to define some custom DNS servers, since I was running into name-resolution issues during provisioning.

network:
  # Insert DNS servers here
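
As an aside, Proxmox can also push DNS servers to a Cloud-Init VM directly, which avoids embedding them in the snippet (1.1.1.1 below is just a placeholder address):

qm set $DEV_VM_ID --nameserver 1.1.1.1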

We then run our install commands:

runcmd:
    # Update and upgrade the system
    # Install Tailscale and SSH
    # Update and upgrade the system
    # Install Linux headers (Required for NVIDIA drivers)
    # Install NVIDIA drivers
    # Update the system with the new drivers
    # Install Docker
    # Install NVIDIA Container Toolkit
    # Reboot the VM to apply all changes
EOF
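
To give a feel for what those commented steps expand into, here is a rough sketch of a possible runcmd list. This is my approximation rather than the exact production script, using the official Tailscale and Docker convenience installers and NVIDIA's documented apt repository setup. Note that because the heredoc delimiter is unquoted, ${TAILSCALE_AUTH_KEY} and ${DEV_VM_UNAME_R} are expanded on the Proxmox host when the file is written:

runcmd:
  # Update and upgrade the system
  - apt-get update && apt-get -y upgrade
  # Install Tailscale and SSH
  - curl -fsSL https://tailscale.com/install.sh | sh
  - tailscale up --authkey=${TAILSCALE_AUTH_KEY}
  - apt-get install -y openssh-server
  # Install Linux headers (required to build the NVIDIA kernel modules)
  - apt-get install -y linux-headers-${DEV_VM_UNAME_R}
  # Install the NVIDIA drivers from contrib/non-free
  - apt-get install -y nvidia-driver firmware-misc-nonfree
  # Install Docker via the official convenience script
  - curl -fsSL https://get.docker.com | sh
  # Add NVIDIA's apt repo and install the Container Toolkit
  - curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
  - curl -fsSL https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  - apt-get update && apt-get install -y nvidia-container-toolkit
  # Reboot the VM to apply all changes
  - reboot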

We then point the VM's Cloud-Init user data at the snippet and start the VM:

qm set $DEV_VM_ID --cicustom "user=local:snippets/${DEV_VM_ID}-user.yaml"

qm start $DEV_VM_ID
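
Once it boots, a couple of qm one-liners (my usual sanity checks, not part of the script) confirm that the VM is running and, assuming the qemu-guest-agent package ends up installed in the guest, that the agent is reachable:

# Check the VM's run state
qm status $DEV_VM_ID
# Ping the QEMU guest agent inside the VM
qm agent $DEV_VM_ID ping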

Future Works

I think this was a fun project to test and explore. Long term, for maintainability, I might switch this configuration to HashiCorp's Terraform. That would also let me learn more about cloud IaC and how it interacts with cloud-native images.

Conclusion

Overall this was a fun project to work on. I learned a lot about Proxmox's qm, Bash, and GPU PCIe passthrough. I plan to use this new VM as my development playground for new AI projects and Docker testing, and as a remote IDE for my blog/website (this right here!). It is also where I will continue to share my Proxmox journey and milestones!