
Automating VM Creation in Proxmox

“Give me six hours to chop down a tree and I will spend the first four sharpening the axe,” Abraham Lincoln

Packer

Provisioning VMs manually isn’t just tedious and time-consuming; it can also lead to inconsistencies and errors that impact your entire infrastructure. Using Packer, we can automate the VM creation process so that every image is created and configured exactly as needed, every time. Packer is a tool that lets you create identical machine images for multiple platforms from a single source template.

Install Packer in Proxmox

Install Packer following the instructions provided on its website. Do not use the package from the distribution’s repositories because it is quite outdated.

wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
apt install lsb-release # This step is necessary on Proxmox
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install packer
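
Once installed, verify that an up-to-date binary is on your PATH (the version below is only an example; yours will differ):

packer version
# Packer v1.11.2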

Install the proxmox-iso plugin

Packer uses the proxmox-iso plugin to create a new VM on Proxmox, download an ISO from a given URL and attach it to the VM, boot the VM, pass a boot command that instructs Ubuntu’s installer how/where to fetch the autoinstall data, etc. Basically, the Proxmox Packer builder is able to create Proxmox virtual machines and store them as new Proxmox Virtual Machine images.

The fastest and easiest way to install the proxmox-iso plugin is with the following command: packer plugins install github.com/hashicorp/proxmox
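
You can confirm the installation with packer plugins installed, which lists the plugin binaries Packer has downloaded (the exact path and version on your system will differ):

packer plugins installed
# /root/.config/packer/plugins/github.com/hashicorp/proxmox/packer-plugin-proxmox_v1.2.2_x5.0_linux_amd64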

Generate an API Token in Proxmox

To create an API token for Proxmox, log in to the Proxmox web interface and navigate to the Datacenter section. Under the Permissions tab, go to the API Tokens section and click Add to generate a new token.

Select a user (e.g., root@pam), give the token a name (e.g., packer), set its expiration (e.g., never), and, very importantly, disable Privilege Separation: make sure the Privilege Separation box is unchecked. The new token will inherit all permissions from your admin user, allowing you to create the VM and its resources without any issues.

Finally, a secret will be displayed; store it in a safe place, because it is shown only once. This secret is required for Packer to operate Proxmox: it authenticates your requests to the Proxmox API for automation and management tasks.
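
Before handing the token to Packer, you can test it with a quick API call (a minimal check; replace the IP address, token ID, and secret with your own values; -k skips verification of the default self-signed certificate):

curl -k -H "Authorization: PVEAPIToken=root@pam!packer=YOUR-API-TOKEN-SECRET" https://192.168.1.33:8006/api2/json/version
# A JSON response with the Proxmox VE version means the token authenticates correctly.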

Directory structure

In my home directory, I have created a homelab directory with this structure:

homelab/
├── credentials.pkr.hcl
├── files/
│   └── 99-pve.cfg
├── http/
│   ├── user-data
│   └── meta-data
└── ubuntu-server-jammy.pkr.hcl
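
You can create this skeleton in one go. meta-data is deliberately created empty: cloud-init requires the file to exist, even if blank (see the autoinstall settings below):

mkdir -p ~/homelab/files ~/homelab/http
touch ~/homelab/http/meta-data # must be present, may be empty
# credentials.pkr.hcl, ubuntu-server-jammy.pkr.hcl, user-data and 99-pve.cfg are written in the next steps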

vim credentials.pkr.hcl:

proxmox_api_url = "https://192.168.1.33:8006/api2/json"  # Your Proxmox IP Address
proxmox_api_token_id = "root@pam!packer"  # API Token ID
proxmox_api_token_secret = "YOUR-API-TOKEN-SECRET"  # The secret displayed when the token was created

vim ~/homelab/ubuntu-server-jammy.pkr.hcl:

# Ubuntu Server jammy
# It is an updated version from Create VMs on Proxmox in Seconds.
# YouTube's Channel: Christian Lempa
# Packer Template to create an Ubuntu Server (jammy) on Proxmox

# Variable Definitions
variable "proxmox_api_url" {
    type = string
    default = "https://192.168.1.33:8006/api2/json"
}
# It specifies the API URL for Proxmox. Make sure this matches your Proxmox server's address.

variable "proxmox_api_token_id" {
    type = string
    default = "root@pam"
}
# The user account for API access. Using root@pam is common, but you may want consider using a dedicated user with limited permissions for security.

variable "proxmox_api_token_secret" {
    type = string
    default = "YOUR-PREVIOUSLY-GENERATED-API-TOKEN"
    sensitive = true
}
# A sensitive token for authentication. Marked as sensitive to avoid logging.
# Ensure this token is managed and stored securely.

# Resource Definition for the VM Template
# This block defines the source for the VM image, specifying Proxmox-specific settings.
source "proxmox-iso" "ubuntu-server-jammy" {

    # Proxmox Connection Settings
    proxmox_url = "${var.proxmox_api_url}" # Uses the defined API URL.
    # username and token for authentication to the Proxmox API.
    username = "${var.proxmox_api_token_id}"
    token = "${var.proxmox_api_token_secret}"
    # (Optional) Skip TLS Verification
    insecure_skip_tls_verify = true

    # VM General Settings
    node = "myserver" # It specifies the Proxmox node where the VM will be created.
    vm_id = "107" # A unique identifier for the VM. Ensure this ID is not conflicting with existing VMs.
    vm_name = "ubuntu-server-jammy" # Name of the VM, which is descriptive and indicates the OS version.
    template_description = "Ubuntu Server jammy Image"

    # This template uses a URL to download the Ubuntu Server ISO, which is convenient for automation.
    iso_url = "https://releases.ubuntu.com/22.04/ubuntu-22.04.5-live-server-amd64.iso"
    # The template ensures the downloaded ISO is not corrupted by verifying its checksum.
    iso_checksum = "9bc6028870aef3f74f4e16b900008179e78b130e6b0b9a140635434a46aa98b0"
    iso_storage_pool = "local"
    unmount_iso = true

    # VM System Settings
    qemu_agent = true
    # Set to true to enable the QEMU Guest Agent for improved management capabilities.
    # Packer needs the VM's IP and relies on the QEMU guest agent to be running on the machine.

    # VM Hard Disk Settings
    scsi_controller = "virtio-scsi-pci"

    # Defines the disk size, format, storage pool, and type. Using virtio is optimal for performance in virtual environments.
    # Check my article about RAID, Adding SATA drives, TrueNAS
    disks {
        disk_size = "20G"
        format = "raw"
        storage_pool = "mypool"
        # Specify the storage pool where the VM's disk will be stored.
        type = "virtio"
    }

    # VM CPU and Memory Settings.
    # Allocates 1 core and 2GB of RAM, which might be sufficient for basic server tasks but could be increased based on application needs.
    cores = "1"
    memory = "2048"

    # VM Network Settings
    # Configures the network adapter model and bridge. Using virtio is recommended for performance.
    network_adapters {
        model = "virtio"
        bridge = "vmbr0"
        firewall = false
        # Set to false (firewall disabled); ensure this aligns with your security policies.
    }

    # VM Cloud-Init Settings
    # Enables cloud-init for post-deployment customization, which is critical for dynamic environments.
    # It handles a range of tasks that normally happen when a new instance is created. It’s responsible for activities like setting the hostname, configuring network interfaces, creating user accounts, and even running shell scripts.
    cloud_init = true
    cloud_init_storage_pool = "local-lvm"

    # PACKER Boot Commands
    # It defines the boot commands to initiate the autoinstall process.
    # The autoinstall option allows for unattended or hands-off installation, which is essential for automation.
    # ds=datasource, s=location
    # If the autoinstall does not work, you can open the VM's console and perform a typical manual installation.
    boot_command = [
        "<esc><wait>",
        "e<wait>",
        "<down><down><down><end>",
        "<bs><bs><bs><bs><wait>",
        "autoinstall ds=nocloud-net\\;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ---<wait>",
        # This instructs the Ubuntu “live server” to run autoinstall (It will force Subiquity to perform destructive actions without asking confirmation from the user)
        # and fetch user-data from the Packer HTTP server.
        # The --- signals the kernel to continue.
        "<f10><wait>"
    ]
    boot = "c"
    boot_wait = "5s"

    # PACKER Autoinstall Settings
    http_directory = "http"
    # Packer will start a HTTP server from the content of the http directory (with the http_directory parameter). This will allow Subiquity to fetch the cloud-init files remotely.
    # Subiquity is a new live system based on cloud-init and uses a YAML file to fully automate the installation process.
    # This means Packer will serve all files from ~/homelab/http/ at the root URL.
    # So the file user-data is available at  http://{{ .HTTPIP }}:{{ .HTTPPort }}/user-data.
    # The meta-data file can be empty but must be present, otherwise cloud-init will not start correctly.

    ssh_username = "nmaximo7"
    ssh_password = "YOUR-SSH-PASSWORD"

    # The OS install can take a while. If 20 minutes pass and the system still isn't installed or reachable over the network, the build will eventually fail.
    ssh_timeout = "20m"
}

# Build Definition to create the VM Template
# This block defines the overall build process.
build {

    name = "ubuntu-server-jammy"
    sources = ["proxmox-iso.ubuntu-server-jammy"]

    # The first provisioner waits for cloud-init to finish, removes SSH host keys, and cleans up unnecessary files, which is crucial for preparing the image for reuse.
    provisioner "shell" {
        inline = [
            "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
            "sudo rm /etc/ssh/ssh_host_*",
            "sudo truncate -s 0 /etc/machine-id",
            "sudo apt -y autoremove --purge",
            "sudo apt -y clean",
            "sudo apt -y autoclean",
            "sudo cloud-init clean",
            "sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg",
            "sudo rm -f /etc/netplan/00-installer-config.yaml",
            "sudo sync"
        ]
    }

    # The second provisioner transfers a custom configuration file (99-pve.cfg) to the VM, which is used for cloud-init configuration.
    provisioner "file" {
        source = "~/homelab/files/99-pve.cfg"
        destination = "/tmp/99-pve.cfg"
    }

    # The third provisioner copies the configuration file to the appropriate location in the VM.
    provisioner "shell" {
        inline = [ "sudo cp /tmp/99-pve.cfg /etc/cloud/cloud.cfg.d/99-pve.cfg" ]
    }
}

~/homelab/http/user-data is a cloud-init configuration file, written in YAML, that automates the setup of the Ubuntu Server during the installation process.

Running ip link show and ip a in your node’s shell (e.g., myserver) tells us that the physical NIC is named enp42s0 and that it is bridged into vmbr0. The guest VM sees only a virtual NIC provided by QEMU (e.g., virtio). That NIC typically appears as ensX, enpXsY, or eth0 inside the VM, e.g., ens18.

root@myserver:/home/nmaximo7/homelab# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff

root@myserver:/home/nmaximo7/homelab# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.33/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::da43:aeff:fec0:4ac4/64 scope link
       valid_lft forever preferred_lft forever
# vi /home/nmaximo7/homelab/http/user-data
autoinstall:
  version: 1 # It specifies the version of the autoinstall configuration format being used.
  locale: en_US # Sets the system locale to US English.
  keyboard: # Configures the keyboard layout to Spanish (es).
    layout: es
  ssh: # SSH Configuration
    install-server: true # Installs the OpenSSH server, allowing SSH access to the VM.
    allow-pw: true # Allows password authentication for SSH.
    disable_root: false # Allows root login via SSH, which is a security risk (testing environment).
    ssh_quiet_keygen: true # Suppresses output during SSH key generation.
    allow_public_ssh_keys: true # Allows the use of public SSH keys for authentication.
  packages: # Lists packages to be installed during the setup.
    - qemu-guest-agent
    # A helper daemon, which is installed in the guest.
    # It is used to exchange information between the host and guest, and to execute commands in the guest.
    - sudo
    # Ensures that the sudo package is installed for privilege escalation.
    - curl
    - net-tools
  storage:
    layout:
      name: direct
    # Indicates a direct disk layout, meaning the installer uses the entire disk with its default layout (no custom partitioning).
    swap:
      size: 0
    # Configures the swap space to 0, meaning no swap will be created.
  # NETWORK: use ONLY ONE of the two options below; a YAML mapping cannot contain two network keys.
  # OPTION 1: DHCP
  network:
    version: 2
    ethernets:
      ens18: # Replace ens18 with your actual interface name (e.g., eth0, enp0s3)
        dhcp4: true
        nameservers:
          addresses: [8.8.8.8, 8.8.4.4]
  # OPTION 2: Static IP (uncomment this block and delete Option 1 to use it)
  # network:
  #   version: 2
  #   ethernets:
  #     ens18: # Replace ens18 with your actual interface name
  #       dhcp4: false # Disable DHCP
  #       addresses:
  #         - 192.168.1.82/24 # Your desired static IP address and subnet mask in CIDR notation
  #       gateway4: 192.168.1.1 # Your gateway IP address
  #       nameservers:
  #         addresses: [8.8.8.8, 8.8.4.4] # Your DNS server addresses
  user-data:
    package_update: true
    package_upgrade: true
    # Enables automatic package update and upgrade during the initial setup.
    timezone: Europe/Madrid
    # Sets the system timezone to Madrid (Europe).
    users: # Defines user accounts to create during installation.
      - name: YOUR-USER-NAME # Creates a user named YOUR-USER-NAME.
        groups: [adm, sudo] # Adds the user to the adm and sudo groups, granting administrative privileges.
        lock-passwd: false # Allows the user to log in with a password.
        sudo: ALL=(ALL) NOPASSWD:ALL
        # Grants the user passwordless sudo access to all commands, which is convenient (test environments)
        # but obviously poses a security risk (not recommended in production environments).
        shell: /bin/bash # Sets the default shell for the user to Bash.
        passwd: $randomSalt$...
        # Contains the hashed password for the user, which should be kept secure.
        # You need to provide a hashed password, use: openssl passwd -6
        # Type in your password and it gives you a salted SHA-512 hash.
    runcmd:
      - systemctl enable qemu-guest-agent
      - systemctl start qemu-guest-agent
      - echo "Network configuration applied"
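
To generate the hashed value for the passwd field, use openssl as noted in the comments above; it prompts for a password and prints a salted SHA-512 hash that you paste into user-data:

openssl passwd -6
# Password:
# $6$randomSalt$... (copy the whole string into the passwd field)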

vim ~/homelab/files/99-pve.cfg. This configuration allows the VM to automatically configure itself based on the data sources available. When the VM boots up, cloud-init will check the specified data sources in the order listed to find the necessary information for initial setup.

datasource_list: specifies the data sources that cloud-init will use to gather configuration information during the VM’s boot process. It will first look for configuration data in a ConfigDrive (which is typically used in environments like OpenStack) and then in the NoCloud data source. The second one is commonly used for user-data scripts where configuration data can be supplied via HTTP or directly from the disk.

datasource_list: [ConfigDrive, NoCloud]

Creating templates and virtual machines

cd /home/nmaximo7/homelab
# The first command checks the Packer template file for syntax errors and validates that the configuration is correct.
packer validate -var-file='credentials.pkr.hcl' ubuntu-server-jammy.pkr.hcl
# -var-file specifies a variable file that contains sensitive information (like API tokens or credentials) used in the Packer template.
# Using a separate variable file helps keep private and sensitive data more organized and secure.
# ubuntu-server-jammy.pkr.hcl is the path to the Packer template file we want to validate.

packer build -var-file='credentials.pkr.hcl' ubuntu-server-jammy.pkr.hcl
# It executes the build process defined in the Packer template.
# It will create the VM template as specified in the template ubuntu-server-jammy.pkr.hcl...
# and provision it according to the defined settings (like installing packages, configuring users, etc.).

Once this is done, in the left-hand sidebar, navigate to the node (e.g., myserver) where your template is located, right-click on the template (ID 107, ubuntu-server-jammy), and select Clone from the context menu.

In the Clone dialog: Target Node (select the node where you want to create the new VM, e.g., myserver), VM ID (assign a unique ID to the new VM), Name (give the new VM a descriptive name), Mode (Full Clone; it creates an independent copy of the template, which is recommended for most use cases). Then, click Clone to start the process.
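
If you prefer the command line, the same clone can be performed from the node’s shell with qm (the new VM ID 200 and the name are just examples):

qm clone 107 200 --name ubuntu-server-01 --full
qm start 200

Once the clone has booted, log in and confirm that cloud-init completed its first-boot configuration (standard cloud-init commands):

sudo cloud-init status --wait # blocks until cloud-init finishes, then reports its status
sudo cloud-init status --long # extended detail, typically including the datasource that was used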

Remove the CD after installing

To remove the CD drive after installing in Proxmox, you can follow these steps:

  1. Stop the VM: Ensure the VM is not running before making changes to its configuration.

  2. Edit the VM Configuration: Use the Proxmox web interface or the pvesh command-line tool to remove the CD drive from the VM configuration: pvesh set nodes/YOUR-NODE/qemu/YOUR-VM-ID/config --delete ide2, e.g., pvesh set nodes/myserver/qemu/107/config --delete ide2
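
Alternatively, qm offers a shorter syntax for the same change (using the example template ID 107):

qm set 107 --delete ide2 # removes the ide2 CD-ROM drive from the VM configuration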

Automatizing & Scripting

#!/bin/bash
# 1. Make it executable: chmod +x build_template.sh
# 2. Customize variables: Change the TEMPLATE_ID, PACKER_TEMPLATE, and CREDENTIALS_FILE variables at the beginning of the script to match your setup.
# 3. Run the script: ./build_template.sh

# Set variables (customize these)
TEMPLATE_ID="107" # ID of the template to delete
PACKER_TEMPLATE="ubuntu-server-jammy.pkr.hcl"
CREDENTIALS_FILE="credentials.pkr.hcl"

# Go to the correct directory
cd /home/nmaximo7/homelab || exit 1  # Exit if the directory doesn't exist

# Validate Packer template
echo "Validating Packer template..."
packer validate -var-file="$CREDENTIALS_FILE" "$PACKER_TEMPLATE" || {
  echo "Packer validation failed. Exiting."
  exit 1
}
# The packer validate command is still included to catch syntax errors early. The || construct ensures that the script exits if validation fails.

echo "Checking for existing template..."
if qm status "$TEMPLATE_ID" > /dev/null 2>&1; then
  echo "Template $TEMPLATE_ID found. Stopping it in case it is running..."
  qm stop "$TEMPLATE_ID" > /dev/null 2>&1 || true  # Templates are never running; ignore errors here.
  echo "Deleting template $TEMPLATE_ID..."
  qm destroy "$TEMPLATE_ID" --purge || {
    echo "Failed to delete template $TEMPLATE_ID. Exiting."
    exit 1
  }
  echo "Template $TEMPLATE_ID deleted."
else
  echo "Template $TEMPLATE_ID not found. Skipping deletion."
fi

# Build Packer image
echo "Building Packer image..."
packer build -var-file="$CREDENTIALS_FILE" "$PACKER_TEMPLATE" || {
  echo "Packer build failed. Exiting."
  exit 1
}

echo "Packer build complete."

Troubleshooting

  1. After creating the VM, check if the router is issuing a lease (a lease is an IP address that the router assigns to our VM for a specified period of time): sudo dhclient -v ens18. You should get something like: DHCPOFFER of 192.168.1.56 (IP offered by the router) from 192.168.1.1 (router’s IP).
  2. Run cat /etc/netplan/*.yaml. If you see no YAML file or the file is missing a dhcp4: true line, the final system has no persistent config.
  3. Check your logs: sudo journalctl -u cloud-init -b or sudo tail -n100 /var/log/cloud-init.log to see if cloud-init is overwriting your configuration on your VM first boot.
  4. Create your own file, e.g. /etc/netplan/50-cloud-init.yaml, then run sudo netplan apply and check ip a again:
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: true
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4