"The problem is not the problem. The problem is your attitude about the problem." (Captain Jack Sparrow)
Terraform is an infrastructure-as-code tool that lets you build, change, and version infrastructure safely and efficiently. Users define and provision data center infrastructure using a declarative configuration language such as HashiCorp Configuration Language (HCL) or JSON. This covers low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries and SaaS features.
In Proxmox, after installing Ubuntu on a VM, you typically:
root@myserver:~# mkdir terraform
root@myserver:~# cd terraform/
root@myserver:~/terraform# wget https://releases.hashicorp.com/terraform/1.10.3/terraform_1.10.3_linux_amd64.zip
root@myserver:~/terraform# command -v unzip
root@myserver:~/terraform# apt install unzip
root@myserver:~/terraform# unzip terraform_1.10.3_linux_amd64.zip
root@myserver:~/terraform# ls
LICENSE.txt terraform terraform_1.10.3_linux_amd64.zip
root@myserver:~/terraform# ./terraform --version
Terraform v1.10.3
root@myserver:~/terraform# vi main.tf
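main.tf must at least declare the provider so that terraform init knows what to download. A minimal sketch (the resource definitions come later; leaving the version unpinned simply lets init fetch the latest telmate/proxmox release, as the output below shows):

```hcl
terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox" # init will fetch this provider from the registry
    }
  }
}
```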
root@myserver:~/terraform# ./terraform init
# The command ./terraform init is used to initialize a Terraform working directory.
# It downloads the necessary provider plugins specified in your configuration files.
# More specifically, it will fetch the Proxmox provider, which allows Terraform to interact with your Proxmox environment.
Initializing the backend...
Initializing provider plugins...
- Finding latest version of telmate/proxmox...
- Installing telmate/proxmox v2.9.14...
- Installed telmate/proxmox v2.9.14 (self-signed, key ID A9EBBE091B35AFCE)
Partner and community providers are signed by their developers.
[...]
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.
root@myserver:~/terraform# ./terraform plan
# It tells Terraform to evaluate the current state of the resources defined in your configuration files and compare it against the existing infrastructure.
# It reads the .tf files in the current directory to understand the desired state of your infrastructure.
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# proxmox_vm_qemu.vm-instance will be created
+ resource "proxmox_vm_qemu" "vm-instance" {
+ additional_wait = 5
[...]
root@myserver:~/terraform# ./terraform apply
# Apply the changes required to reach the desired state of your infrastructure as defined in your Terraform configuration files.
# It executes the previous plan and makes the necessary changes to your infrastructure.
.
└── terraform
    ├── credentials.auto.tfvars
    ├── docker-compose.yml
    ├── production.tf
    ├── provider.tf
    ├── proxmox-terraform.sh
    ├── config
    │   └── script.sh
    └── .ssh
        └── id_rsa (SSH private key)
Install Docker. Navigate to the Docker docs (docs.docker.com), Install, Debian, Install Docker Engine on Debian.
# Uninstall old versions
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
# Install using the apt repository
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install the Docker packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Verify that the installation is successful by running the hello-world image.
sudo docker run hello-world
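Optionally, you can let your non-root user run docker commands without sudo. A sketch following Docker's post-installation steps (the docker group may already exist on your system):

```shell
sudo groupadd docker          # create the docker group (may already exist)
sudo usermod -aG docker $USER # add your user to it
newgrp docker                 # activate the new group membership in the current shell
docker run hello-world        # should now work without sudo
```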
Inside the project, add a new file docker-compose.yml with the contents (each line is explained below):
services:
  terraform: # The name of the service that will run Terraform.
    image: hashicorp/terraform
    # It sets the Docker image we want to use for our service.
    volumes:
      - .:/terraform
    # Docker containers use virtualization to run in an environment that's isolated from the
    # host machine. As a result, containers don't have access to the host's file system by default.
    # To get around this problem, - .:/terraform creates a volume mapping from . (shorthand for
    # the current project directory, where docker-compose.yml is located) to /terraform inside
    # the Docker container.
    working_dir: /terraform
    # It tells the Docker container to work from the /terraform directory (working directory),
    # which we are mapping our code project to.
    # If they don't match, the container will not know where to look for the configuration files.
    network_mode: host
    # The key is network_mode (not network); host networking lets the container communicate
    # with the Proxmox server without any additional network setup.
    entrypoint: ["terraform"]
    command: ["plan", "-var-file=credentials.auto.tfvars"]
    # The docker compose command does not directly support Terraform's -var or -var-file flags.
    # Instead, we pass these arguments to the terraform command inside the container.
We will create a file named credentials.auto.tfvars to store all sensitive information that Terraform requires to connect to Proxmox.
To create an API token for Proxmox, log in to the Proxmox web interface and navigate to the Datacenter section. Under the Permissions tab, go to the API Tokens section and click Add to generate a new token.
Select a user (e.g., root@pam), give the token a name (e.g., packer), set its expiration (e.g., never), and, very importantly, disable Privilege Separation: make sure the Privilege Separation box is not selected. The token will then inherit all permissions from your admin user, allowing you to create the VM and its resources without any issues.
Finally, a secret will be displayed; it must be stored in a safe place. This secret is required to authenticate Terraform's requests to the Proxmox API for automation and management tasks.
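Before wiring the token into Terraform, you can optionally verify it against the Proxmox API directly. A sketch, assuming the server IP used throughout this article (-k skips TLS verification for the self-signed certificate; substitute your real token secret):

```shell
# Query the API version with the new token; a JSON response means the token works.
curl -k -H "Authorization: PVEAPIToken=root@pam!packer=YOUR-TOKEN-SECRET" \
  https://192.168.1.33:8006/api2/json/version
```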
When you run docker compose -f docker-compose.yml run --rm terraform …, Docker creates a temporary container from the hashicorp/terraform image. Unless you explicitly mount or copy files into the container, it won't see them.
# Note: the names here must match the variable blocks declared in provider.tf.
# pm_tls_insecure is hardcoded in the provider block, so it does not belong here.
# Proxmox's server IP + /api2/json
proxmox_api_url = "https://192.168.1.33:8006/api2/json"
# User and API token
proxmox_api_token_id = "root@pam!packer"
proxmox_api_token_secret = "0c37b9f5-a427-45e7-b61d-472a586fb4d7"
private_key_path = "./.ssh/id_rsa"
ssh_key = "ssh-rsa AAA···YOUR-PUBLIC-SSH-KEY···ZQ== nmaximo7@nixos"
In my docker-compose.yml, I mounted the current directory (., the project folder) to /terraform inside the container:
volumes:
- .:/terraform
Therefore, anything inside my project directory (terraform) on the host is available under /terraform in the container. If id_rsa is in ./.ssh/id_rsa on my host (/root/terraform/.ssh/id_rsa), then in the container it’ll be at /terraform/.ssh/id_rsa.
With private_key = file(var.private_key_path), Terraform will look for /terraform/.ssh/id_rsa inside the container, which actually maps to ./.ssh/id_rsa (/root/terraform/.ssh/id_rsa) on my host.
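For reference, a typical place where private_key_path is consumed is an SSH connection block inside a provisioner. A hypothetical sketch (the user and host are assumptions based on the cloud-init settings used later in this article):

```hcl
connection {
  type        = "ssh"
  user        = "nmaximo7"                 # assumed: the cloud-init user
  private_key = file(var.private_key_path) # resolves to /terraform/.ssh/id_rsa in the container
  host        = "192.168.1.76"             # assumed: the VM's static IP
}
```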
Terraform relies on plugins called providers to interact with cloud providers, SaaS providers, and other APIs. Most providers configure a specific infrastructure platform (either cloud or self-hosted). A Terraform provider is responsible for understanding API interactions and exposing resources. The Proxmox provider uses the Proxmox API.
This Terraform configuration is set up to manage resources in a Proxmox environment using the specified provider. vim provider.tf:
terraform {
  required_providers {
    proxmox = {                   # It defines the provider
      source  = "Telmate/proxmox" # It indicates where to find the provider
      version = "3.0.1-rc6"       # It specifies which version of the provider to use
    }
  }
}

# These blocks define input variables for the Terraform configuration.

# This variable will hold the URL for the Proxmox API.
variable "proxmox_api_url" {
  type        = string
  description = "The URL of the Proxmox API."
}

# This variable will hold the token ID for authenticating with the Proxmox API.
variable "proxmox_api_token_id" {
  type        = string
  description = "The token ID for the Proxmox API."
}

# This variable will hold the actual token used for API authentication.
variable "proxmox_api_token_secret" {
  type        = string
  description = "The token secret for the Proxmox API."
  sensitive   = true
}

variable "ssh_key" {
  type        = string
  description = "Your public SSH key."
  sensitive   = true
}

variable "private_key_path" {
  type      = string
  sensitive = true
}
provider "proxmox" {
  # Configuration options for the Proxmox provider
  pm_api_url = var.proxmox_api_url
  # It sets the API URL for connecting to the Proxmox server, using the corresponding variable.
  pm_api_token_id = var.proxmox_api_token_id
  # It sets the token ID for API access.
  pm_api_token_secret = var.proxmox_api_token_secret
  # It sets the token secret for authentication.
  pm_tls_insecure = true
  # It allows Terraform to skip TLS verification (useful with Proxmox's self-signed certificate).
}
root@myserver:~/terraform# docker compose -f docker-compose.yml run --rm terraform init --upgrade
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.
[...]
root@myserver:~/terraform# docker compose -f docker-compose.yml run --rm terraform plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your configuration and the real resources that already exist. As a result, no actions need to be performed.
This template provides a comprehensive way to define a VM in Proxmox using Terraform, allowing for automation and reproducibility in your infrastructure management. nvim production.tf:
# The resource type is proxmox_vm_qemu, which is used to manage QEMU virtual machines in Proxmox.
resource "proxmox_vm_qemu" "myvmserver" {
  name        = "myvmserver"          # Name of the virtual machine
  vmid        = 207                   # Unique ID for the VM (must be unique within the Proxmox cluster)
  target_node = "myserver"            # Target Proxmox node where the VM will be created
  clone       = "ubuntu-server-jammy" # Base template to clone from
  full_clone  = true                  # Create a full clone of the template, not a linked clone
  os_type     = "ubuntu"              # Operating system type
  bios        = "seabios"
  # Check the Options tab of the original VM (the one the template comes from) in Proxmox.
  # If it used OVMF (UEFI), then bios = "ovmf".
  # If it used SeaBIOS, then bios = "seabios" in Terraform.
  scsihw   = "virtio-scsi-pci" # It ensures a proper SCSI controller for the disk.
  bootdisk = "virtio0"         # Ensures the VM boots from the hard disk.
  agent    = 1                 # Enable the QEMU guest agent
By default, Proxmox tries to boot from your first disk, but sometimes you need to specify the boot order. In the Proxmox UI, check your template: VM, Hardware. In my particular case, Hard Disk (virtio0) mypool:base-107···. Besides, go to the VM's Options tab, Boot Order, and make sure the hard disk (here virtio0) comes first.
  boot    = "c"  # Boot from the first hard disk
  sockets = 1    # Number of CPU sockets
  cores   = 2    # Number of CPU cores per socket
  memory  = 2048 # Amount of memory allocated (in MB)
  disk { # Disk configuration block
    # A 32 GB virtio disk in mypool, with raw format.
    slot = "virtio0" # Disk slot for the VM
    # Must match one of the allowed strings: scsi0, virtio0, ide0, etc.
    size = "32G"  # Size of the disk
    type = "disk" # Type of the entry (disk, cdrom, or cloudinit)
    # Use disk if this is a normal hard disk
    storage = "mypool" # Storage pool where the disk will be located.
    # I have a storage "mypool" (type ZFS) on node "myserver".
    format = "raw" # Disk format (typically qcow2, raw, etc.)
    # I'm using the ZFS pool for storing my VM disks and containers.
    # By default, ZFS-based storage in Proxmox only supports the raw format.
  }
  network { # Network configuration block
    id        = 0        # Network interface ID
    model     = "virtio" # Network interface model (virtio is optimized for KVM)
    bridge    = "vmbr0"  # Bridge to connect the VM's network interface (Datacenter, myserver, localnetwork (myserver))
    firewall  = false    # Disable the firewall for this interface
    link_down = false    # Keep the link up (set to true to bring it down)
  }
  # Cloud-init settings
  # Assign a static IP address and gateway (use "ip=dhcp" here for DHCP instead)
  ipconfig0  = "ip=192.168.1.76/24,gw=192.168.1.1"
  nameserver = "8.8.8.8"
  ciuser     = "nmaximo7" # Cloud-init user for the VM
  sshkeys    = <<-EOF
    ${var.ssh_key}
  EOF
}
To create a new VM, run terraform init to initialize the provider, then terraform plan to view the intended changes, and finally terraform apply to create the VM.
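Since Terraform runs inside a container here, the same three steps can be issued through Docker Compose; the arguments after the service name override the default command from docker-compose.yml:

```shell
docker compose -f docker-compose.yml run --rm terraform init
docker compose -f docker-compose.yml run --rm terraform plan -var-file=credentials.auto.tfvars
docker compose -f docker-compose.yml run --rm terraform apply -var-file=credentials.auto.tfvars
```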
This script automates the whole process: it removes the VM if it already exists, then runs terraform init, plan, and apply through Docker Compose, and finally tears the infrastructure down again. vim proxmox-terraform.sh:
#!/bin/bash
# Make the script executable: chmod +x proxmox-terraform.sh
# Run the script: ./proxmox-terraform.sh
# Variables
VM_ID=207
DOCKER_COMPOSE_FILE="docker-compose.yml"
TERRAFORM_DIR="/root/terraform" # Update this path
# Function to check if a VM exists
vm_exists() {
qm status $VM_ID > /dev/null 2>&1
return $?
}
# Remove the previously created VM if it exists.
# Normally, you would stop it from Proxmox's GUI: VM, Shutdown, Stop.
# Sometimes, you need to stop a broken VM from the CLI: qm stop 207
if vm_exists; then
echo "VM $VM_ID exists. Removing it..."
qm stop $VM_ID --force > /dev/null 2>&1
qm destroy $VM_ID --purge > /dev/null 2>&1
echo "VM $VM_ID removed."
else
echo "VM $VM_ID does not exist. Skipping removal."
fi
# Change to the Terraform directory
cd "$TERRAFORM_DIR" || { echo "Failed to change to directory $TERRAFORM_DIR"; exit 1; }
# Run Docker Compose commands
# When you are done editing the configuration files.
# Use Docker Compose to run Terraform commands in a containerized environment.
# The -f flag specifies which Docker Compose file to use, in this case, docker-compose.yml.
# The --rm flag ensures that the container is removed after the command completes.
# Initializes the Terraform environment.
echo "Running 'docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform init'..."
docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform init || { echo "Terraform init failed"; exit 1; }
# Generates and shows the execution plan, allowing you to see what actions Terraform will take.
echo "Running 'docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform plan'..."
docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform plan -out k8s-master.tfplan || { echo "Terraform plan failed"; exit 1; }
# Applies the changes to create the VM
echo "Running 'docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform apply'..."
# Applying a saved plan never asks for confirmation, so -auto-approve is not needed.
docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform apply k8s-master.tfplan || { echo "Terraform apply failed"; exit 1; }
# Cleanup: destroy the previously-created infrastructure.
# Remove these two lines if you want to keep the VM running.
docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform plan -destroy -out k8s-master.destroy.tfplan
docker compose -f $DOCKER_COMPOSE_FILE run --rm terraform apply k8s-master.destroy.tfplan
echo "Script completed successfully."
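The cmd || { echo "..."; exit 1; } construct used throughout the script is the standard shell error-handling idiom: if the command on the left exits with a non-zero status, the block on the right prints a message and aborts. A minimal self-contained sketch, using false as a stand-in for a failing command such as terraform plan:

```shell
#!/bin/bash
run_step() {
  # "false" always fails, standing in for a real command like terraform plan
  false || { echo "step failed"; return 1; }
}
run_step
echo "exit code: $?"   # prints "exit code: 1"
```

Inside the real script, exit 1 is used instead of return 1 so that a failed step aborts the whole run rather than just the function.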
Once the VM is up, you can optionally install a desktop environment on it with the sudo apt install ubuntu-gnome-desktop command.