
Using Terraform to Manage Proxmox VMs

"The problem is not the problem. The problem is your attitude about the problem," Captain Jack Sparrow


Terraform is an infrastructure-as-code tool that lets you build, change, and version your entire infrastructure safely and efficiently. Users describe their desired state in declarative files (HCL or JSON), and Terraform figures out how to create or update resources. This includes compute instances, storage, networking, DNS records, and more.

This guide shows how to use Terraform to clone a Proxmox VM template into running VMs, fully automated and reproducible.

Post-Installation Steps for Ubuntu VM in Proxmox

After installing Ubuntu on your VM, follow these steps to finalize the setup:

  1. Install the qemu-guest-agent and cloud-init.

    qemu-guest-agent allows the host to perform tasks like shutting down the VM cleanly, syncing the system clock, or freezing the file system before taking snapshots. Cloud-init is a powerful tool that automates the initial setup of a virtual machine during its first boot. You can use it to: set the hostname and time zone; create users and set passwords; configure network interfaces and DNS; install packages and run custom scripts, and inject SSH keys for secure access.

  2. Run virt-customize or manually adjust the operating system settings as needed to prepare it for deployment.

    virt-customize is a command-line tool used to modify virtual machine disk images before booting them. It can install packages into the image; edit configuration files, such as setting up networking or SSH; inject SSH keys or set passwords; run scripts or commands on first boot; and change hostnames, timezones, and more. (A minimal command sketch covering steps 1-4 follows this list.)

  3. Convert the VM to a Template. Power down the VM. In the Proxmox web UI, right-click on your VM and select Convert to template.
  4. Check BIOS Settings. Go to the Options tab of the original VM in Proxmox. If the VM used OVMF (UEFI), ensure that your Terraform configuration specifies bios = "ovmf". If it used SeaBIOS, adjust your Terraform configuration to use bios = "seabios".
    Always verify the BIOS type to avoid issues when deploying new VMs from the template.
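
Here is a minimal command sketch of steps 1-4; the VM ID <vmid> and the image path are placeholders for your own values:

# Step 1, inside the running Ubuntu VM:
sudo apt update && sudo apt install -y qemu-guest-agent cloud-init
sudo systemctl enable --now qemu-guest-agent

# Step 2 (alternative), on the Proxmox host, against the powered-off VM's disk image:
virt-customize -a /path/to/ubuntu.img \
    --install qemu-guest-agent,cloud-init \
    --run-command 'systemctl enable qemu-guest-agent'

# Step 3, CLI equivalent of "Convert to template":
qm stop <vmid>
qm template <vmid>

# Step 4, check which BIOS the template uses:
qm config <vmid> | grep -i bios
# No output means the Proxmox default (SeaBIOS).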

Install Terraform

On your Proxmox host:

root@myserver:~# mkdir terraform
root@myserver:~# cd terraform/
root@myserver:~/terraform# wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
root@myserver:~/terraform# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
root@myserver:~/terraform# sudo apt update && sudo apt install terraform
root@myserver:~/terraform# terraform --version
Terraform v1.12.2
on linux_amd64
+ provider registry.terraform.io/telmate/proxmox v3.0.1-rc6

Directory Layout

.
└── terraform
    ├── credentials.auto.tfvars   # sensitive variables (Terraform auto-loads *.auto.tfvars files)
    ├── docker-compose.yml        # optional: run Terraform inside Docker
    ├── provider.tf               # declares the Proxmox provider and input variables
    ├── production.tf             # your VM resource definitions
    ├── proxmox-terraform.sh
    ├── config
    │   └── script.sh
    └── .ssh
        ├── id_rsa                # SSH private key
        └── id_rsa.pub            # corresponding public key
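
The key pair under .ssh/ is the one cloud-init will inject into the VM. If you do not have one yet, you can generate it in place (the trailing comment string is just a label):

mkdir -p .ssh
ssh-keygen -t rsa -b 4096 -f ./.ssh/id_rsa -N "" -C "nmaximo7@nixos"

id_rsa stays on this machine; the contents of id_rsa.pub go into the ssh_key variable in credentials.auto.tfvars.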

Managing Credentials

We will create a file named credentials.auto.tfvars to store all sensitive information that Terraform requires to connect to Proxmox.

Creating an API Token for Proxmox

To create an API token for Proxmox, log in to the Proxmox web interface and navigate to the Datacenter section. Under the Permissions tab, go to the API Tokens section and click Add to generate a new token.

Select a user (e.g., root@pam), give the token a descriptive name (e.g., packer), set the expiration (e.g., never), and, very importantly, uncheck the Privilege Separation checkbox. With Privilege Separation disabled, the token inherits all permissions from your admin user, allowing you to create the VM and manage its resources without any issues.

After creating the token, a secret will be displayed. Store this secret in a safe place. This secret is required to operate Proxmox with Packer and Terraform. It allows you to authenticate requests to the Proxmox API for automation and management tasks.
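
Alternatively, the same token can be created from the Proxmox host's shell with pveum; --privsep 0 corresponds to unchecking the Privilege Separation box:

pveum user token add root@pam packer --privsep 0
# Prints the full token ID (root@pam!packer) and its secret.
# Copy the secret now; it is only displayed once.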

Example credentials.auto.tfvars File

    # Proxmox API URL + /api2/json
    proxmox_api_url          = "https://192.168.1.33:8006/api2/json"

    # User and API token
    proxmox_api_token_id     = "root@pam!packer"
    proxmox_api_token_secret = "0c37b9f5-a427-45e7-b61d-472a586fb4d7"

    # SSH Key Configuration
    private_key_path         = "./.ssh/id_rsa"
    ssh_key                  = "ssh-rsa AAAA[...]uZQ== nmaximo7@nixos"
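
Since credentials.auto.tfvars holds a live API token and .ssh/ holds a private key, keep both out of version control if this directory is a Git repository (a common convention, not something Terraform enforces):

echo "*.auto.tfvars" >> .gitignore
echo ".ssh/" >> .gitignore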

Configure Terraform Proxmox provider

Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs. Most providers configure a specific infrastructure platform (either cloud or self-hosted). A Terraform provider is responsible for understanding API interactions and exposing resources.

The Telmate provider, the most widely used community-maintained option, lets you define and manage Proxmox virtual machines using infrastructure-as-code principles.

This Terraform configuration is set up to manage resources in a Proxmox environment using the specified provider. vim provider.tf:

terraform {
  required_providers { # It tells Terraform which plugins to fetch
    proxmox = { # It defines the provider
      source = "Telmate/proxmox" # It indicates where to find the provider
      version = "3.0.1-rc6" # It specifies which version of the provider to use
    }
  }
}

# These blocks define input variables for the Terraform configuration.
# This variable will hold the URL for the Proxmox API.
variable "proxmox_api_url" {
  type        = string
  description = "The URL of the Proxmox API."
}

# This variable will hold the token ID for authenticating with the Proxmox API.
variable "proxmox_api_token_id" {
  type        = string
  description = "The token ID for the Proxmox API."
}

# This variable will hold the actual token used for API authentication.
variable "proxmox_api_token_secret" {
  type        = string
  description = "The token secret for the Proxmox API."
  sensitive   = true
}

variable "ssh_key" {
  type        = string
  description = "Your public SSH key."
  sensitive   = true
}

variable "private_key_path" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  # Configuration options for the Proxmox provider.
  # It passes credentials to the plugin.
  pm_api_url = var.proxmox_api_url
  # It sets the API URL for connecting to the Proxmox server, using the corresponding variable.
  pm_api_token_id = var.proxmox_api_token_id
  # It sets the token ID for API access.
  pm_api_token_secret = var.proxmox_api_token_secret
  # It sets the token for authentication.
  pm_tls_insecure = true
  # It allows Terraform to skip TLS verification.
}
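
Before moving on, you can let Terraform check the file itself. terraform fmt rewrites the configuration into canonical style, and terraform validate reports syntax and type errors (validate needs an initialized working directory, so run it after the terraform init step shown below):

root@myserver:~/terraform# terraform fmt
root@myserver:~/terraform# terraform validate
Success! The configuration is valid.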

Develop Terraform plan

This template provides a comprehensive way to define a VM in Proxmox using Terraform, allowing for automation and reproducibility in your infrastructure management. nvim production.tf:

# The resource type is proxmox_vm_qemu, which is used to manage QEMU virtual machines in Proxmox.
resource "proxmox_vm_qemu" "myvmserver" {
    name                = "myvmserver" # Name of the virtual machine
    vmid                = 207 # Unique ID for the VM (must be unique within the Proxmox cluster)
    target_node         = "myserver" # Target Proxmox node where the VM will be created
    clone               = "ubuntu-server-plucky" # Base template to clone from
    full_clone          = true # It is set to true, so it creates a full clone of the template
    os_type             = "ubuntu" # Operating system type
    bios                = "seabios"
    # Check the original VM's (where the template is coming from) Hardware tab, BIOS in Proxmox.
    # If it used OVMF (UEFI), then bios = "ovmf".
    # If it used SeaBIOS, then bios = "seabios" in Terraform.
    scsihw              = "virtio-scsi-pci" # It ensures a proper SCSI controller for the disk.
    # Boot order configuration for the VM
    # This specifies the order in which the VM will attempt to boot.
    # 'virtio0' refers to the virtual disk, and 'net0' refers to the network interface.
    boot = "order=virtio0;net0" # If the VM cannot boot from the disk, it will then attempt to boot from the network.
    # By default, Proxmox tries to boot from your first disk, but sometimes you
    # need to specify the boot order explicitly. Check the template
    # (ubuntu-server-plucky) in the Proxmox UI: under Hardware, the root disk
    # (in my case, Hard Disk (virtio0) on mypool), and under Options, the
    # Boot Order (e.g., virtio0, net0):
    #   1. virtio0: the root disk on mypool storage. Make sure it is enabled.
    #   2. net0: the network boot device.
    agent               = 1 # Enables the QEMU guest agent
    sockets             = 1 # Number of CPU sockets
    cores               = 2 # Number of CPU cores per socket
    memory              = 8192 # Amount of memory allocated (in MB); it should match the template's size.

    disk { # Disk configuration block
    # A virtio disk in mypool, raw format.
        slot            = "virtio0" # Disk slot for the VM
        # Must match one of the allowed strings: scsi0, virtio0, ide0, etc.
        size            = "80G" # Size of the disk; must be at least as large as the template's disk.
        type            = "disk" # Type of the disk (disk, cdrom or cloudinit)
        # Use disk if this is a normal hard disk
        storage         = "mypool" # Storage pool where the disk will be located.
        # I have a storage "mypool" (type ZFS) on node "myserver".
        format          = "raw" # Disk format (typically qcow2, raw, etc.)
        # I'm using the ZFS pool for storing my VM disks and containers.
        # By default, ZFS-based storage in Proxmox only supports raw format.
    }

    network { # Network configuration block
        id        = 0 # Network interface ID
        model     = "virtio" # Network interface model (virtio is optimized for KVM)
        bridge    = "vmbr0" # Bridge to connect the VM's network interface (Datacenter, myserver, localnetwork (myserver))
        firewall  = false # Disable firewall for this interface
        link_down = false # Keep the link up (set to true to bring it down)
    }

    # Cloud-init settings. They ensure network and SSH are configured at first boot.
    # Static IP assignment (use "ip=dhcp" instead to get the address via DHCP)
    ipconfig0 = "ip=192.168.1.76/24,gw=192.168.1.1"
    nameserver = "8.8.8.8"
    # Cloud-init configuration
    ciuser = "nmaximo7"   # Cloud-init user for the VM
    sshkeys = var.ssh_key
}
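
Optionally, you can add an output so that terraform apply prints the VM's address. This is a minimal sketch relying on the provider's computed default_ipv4_address attribute, which is only populated once the QEMU guest agent is running (hence agent = 1 above):

# Prints the guest's IPv4 address, as reported by the QEMU guest agent.
output "vm_ip" {
  value = proxmox_vm_qemu.myvmserver.default_ipv4_address
}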

Initialize, Plan & Apply

To create a new VM, run terraform init to initialize the provider, then terraform plan to view the intended changes, and finally terraform apply to create the VM.

root@myserver:~/terraform# terraform init
# The command terraform init is used to initialize a Terraform working directory.
# It downloads the necessary provider plugins specified in your configuration files.
# More specifically, it will fetch the Proxmox provider, which allows Terraform to interact with your Proxmox environment.
Initializing the backend...
Initializing provider plugins...
- Finding telmate/proxmox versions matching "3.0.1-rc6"...
- Installing telmate/proxmox v3.0.1-rc6...
- Installed telmate/proxmox v3.0.1-rc6 (self-signed, key ID A9EBBE091B35AFCE)
Partner and community providers are signed by their developers.
[...]
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.
root@myserver:~/terraform# terraform plan
# It tells Terraform to evaluate the current state of the resources defined in your configuration files and compare it against the existing infrastructure.
# It reads the .tf files in the current directory to understand the desired state of your infrastructure.
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.myvmserver will be created
  + resource "proxmox_vm_qemu" "myvmserver" {
      + additional_wait           = 5
      [...]

root@myserver:~/terraform# terraform apply
# Apply the changes required to reach the desired state of your infrastructure as defined in your Terraform configuration files.
# It executes the previous plan and makes the necessary changes to your infrastructure.
root@myserver:~/terraform# terraform destroy -auto-approve
# Destroy the VM
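
After apply completes, you can confirm what Terraform is now tracking in its state file:

root@myserver:~/terraform# terraform state list
proxmox_vm_qemu.myvmserver
root@myserver:~/terraform# terraform show
# Prints the attributes of the VM as recorded in the state.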

Troubleshooting

  1. Verify Proxmox Template Configuration. Locate the ubuntu-server-plucky VM (template) under Datacenter. Check Hardware Tab: BIOS (SeaBIOS), Hard Disk (virtio0), and Network device (net0). Check Options Tab: Boot Order (virtio0, net0).
# Check VM config in Proxmox
qm config 207 | grep -E 'boot:|bootdisk:|scsi|virtio'
boot: order=virtio0;net0
net0: virtio=BC:24:11:71:1C:8C,bridge=vmbr0
onboot: 0
scsihw: virtio-scsi-pci
virtio0: mypool:vm-207-disk-1,replicate=0,size=32G

# Inspect attached drives: note that "qm disk list 207" does not work.
# qm disk only supports import, move, rescan, resize, and unlink;
# use "qm config 207" (above) or "pvesm list mypool" to see the VM's disks.
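
If the clone succeeds but you cannot reach the VM, a few quick checks, assuming the IP and user from the configuration above:

# On the Proxmox host: confirm the guest agent answers
qm agent 207 ping

# From your workstation: log in with the cloud-init user and key
ssh -i ./.ssh/id_rsa nmaximo7@192.168.1.76

# Inside the guest: confirm cloud-init finished without errors
cloud-init status --long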
