"Give me six hours to chop down a tree and I will spend the first four sharpening the axe." (attributed to Abraham Lincoln)

Provisioning VMs manually isn't just tedious and time-consuming; it leads to inconsistencies and errors that can impact your entire infrastructure, and it is hard to reproduce at scale. By using Packer, we can fully automate VM image builds, so every machine is created and configured exactly as needed, every time. Packer lets you define a single template and then generate consistent images for Proxmox (and many other platforms) with one command.
Install Packer following the instructions on its website. Do not use your distribution's packer package, which tends to be badly outdated.
# 1. Import HashiCorp GPG signing key
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
# 2. Install lsb-release (needed on Proxmox to detect the codename)
sudo apt install lsb-release
# 3. Add the HashiCorp repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
# 4. Update and install Packer
sudo apt update && sudo apt install packer
# 5. Verify installation
packer --version
Packer needs a builder plugin to talk to Proxmox. The proxmox-iso builder lets Packer create VMs on Proxmox, configure them, and convert them into first-class templates.
The fastest and easiest way to install it is with the following command: packer plugins install github.com/hashicorp/proxmox
This downloads and installs the latest version of the Proxmox plugin (which provides the proxmox-iso builder) into your local Packer plugin directory.
To create an API token for Proxmox, log in to the Proxmox web interface and navigate to the Datacenter section. Under the Permissions tab, go to the API Tokens section and click Add to generate a new token.
Select a user (e.g., root@pam), give the token a name (e.g., packer), set Expire to never, and, very importantly, uncheck Privilege Separation. With Privilege Separation disabled, the token inherits the user's full admin rights, which this workflow needs to create and manage VMs without permission issues.
Finally, a secret will be displayed; store it in a safe place. This secret is required to operate Proxmox with Packer: it authenticates your requests to the Proxmox API for automation and management tasks.
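If you want to sanity-check the token before handing it to Packer, you can call the Proxmox API directly. The snippet below assembles the documented PVEAPIToken authorization header; the token ID, secret, and host are the placeholders used in this article, so substitute your own values.

```shell
# Proxmox API tokens authenticate with a header of the form:
#   Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET
TOKEN_ID='root@pam!packer'
TOKEN_SECRET='YOUR-API-TOKEN-SECRET'
AUTH_HEADER="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"
echo "$AUTH_HEADER"

# Then query a harmless endpoint; -k skips TLS verification for self-signed certs:
# curl -k -H "$AUTH_HEADER" https://192.168.1.33:8006/api2/json/version
```

A successful call to /version returns a small JSON document with the Proxmox VE release, which confirms the token works before any Packer run.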
In my home directory, I have created a ~/homelab directory with this structure:
homelab/
├── credentials.pkr.hcl            # API URL, token ID & secret
├── files/
│   ├── 99-pve.cfg                 # cloud-init datasource configuration
│   └── 01-netcfg.yaml             # A fallback netplan configuration
├── http/
│   ├── user-data                  # cloud-init user-data (YAML)
│   └── meta-data                  # cloud-init meta-data (often empty)
└── ubuntu-server-plucky.pkr.hcl   # Packer template for Ubuntu plucky
# ~/homelab/credentials.pkr.hcl
proxmox_api_url          = "https://192.168.1.33:8006/api2/json" # Your Proxmox IP address
proxmox_api_token_id     = "root@pam!packer"                     # API token ID
proxmox_api_token_secret = "YOUR-API-TOKEN-SECRET"
Be cautious with your API token. Ensure that this file is not publicly accessible and consider using environment variables or other secure methods to manage sensitive information.
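One way to keep the secret out of files entirely: Packer reads any environment variable named PKR_VAR_<name> as the value of the corresponding input variable. A minimal sketch, using the placeholder secret from above:

```shell
# Export the sensitive variable instead of writing it into credentials.pkr.hcl.
# Packer maps PKR_VAR_proxmox_api_token_secret to var.proxmox_api_token_secret.
export PKR_VAR_proxmox_api_token_secret='YOUR-API-TOKEN-SECRET'

# The build then no longer needs the secret inside any file:
# packer build -var-file='credentials.pkr.hcl' ubuntu-server-plucky.pkr.hcl
echo "secret is set: ${PKR_VAR_proxmox_api_token_secret:+yes}"
```

With this approach, credentials.pkr.hcl can keep only the non-sensitive URL and token ID, and the secret never lands on disk.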
This configuration file enables automatic setup of the VM with cloud-init. When the VM boots for the first time, cloud-init checks the listed data sources, in order, for the information it needs for initial setup: setting the hostname, creating users, injecting SSH keys, etc.
datasource_list: specifies the data sources that cloud-init will use to gather configuration information during the VM’s boot process.
This effectively tells cloud-init to look for configuration from:
# /home/nmaximo7/homelab/files/99-pve.cfg
# This configuration file ensures the QEMU Guest Agent is enabled
# and that cloud-init performs a clean boot suitable for templates.
datasource_list: [ NoCloud, Ec2 ]
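If you are following along, you can create the file non-interactively with a heredoc; the path matches the directory layout shown earlier:

```shell
# Create the cloud-init datasource configuration under ~/homelab/files/.
mkdir -p ~/homelab/files
cat > ~/homelab/files/99-pve.cfg <<'EOF'
datasource_list: [ NoCloud, Ec2 ]
EOF

# Verify the result:
cat ~/homelab/files/99-pve.cfg
```

The quoted 'EOF' delimiter prevents any shell expansion inside the heredoc, so the file is written exactly as shown.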
Instead of relying solely on cloud-init, create a fallback netplan configuration. It ensures that your VM retains a reliable network setup, even if cloud-init fails or does not provide the necessary configuration.
Netplan is a network configuration tool used in modern Linux distributions, particularly Ubuntu. It simplifies the management of network settings by using YAML files, allowing users to define their network interfaces and other settings in a clear and structured way. Netplan translates these YAML configurations into backend configurations for either systemd-networkd or NetworkManager, depending on the specified renderer.
# Create this as /home/nmaximo7/homelab/files/01-netcfg.yaml
network:
  version: 2          # Version of the network configuration format.
  renderer: networkd  # systemd-networkd will handle the configuration (a lightweight network management daemon).
  ethernets:          # Settings for Ethernet interfaces.
    ens18:            # The interface name; ensure this matches your actual NIC name (check with: ip a).
      dhcp4: true     # Enable DHCP for IPv4, so the interface automatically obtains an IP address.
      dhcp4-overrides:
        use-hostname: false # Do not use the hostname for the DHCP lease.
      dhcp6: false    # Disable DHCP for IPv6.
      optional: true  # The system can finish booting even if this interface is not up.
vim ~/homelab/ubuntu-server-plucky.pkr.hcl:
# Packer Template to create an Ubuntu Server (plucky) VM template on Proxmox.
# This template is designed for unattended installation using cloud-init.
packer {
  required_plugins {
    proxmox = {
      version = ">= 0.1.0"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}
# Variable Definitions
# These variables are pulled from 'credentials.pkr.hcl' or use default values.
# Ensure 'credentials.pkr.hcl' is secure and not publicly accessible.

# The API URL for Proxmox. Make sure this matches your Proxmox server's address.
variable "proxmox_api_url" {
  type    = string
  default = "https://192.168.1.33:8006/api2/json"
}

# API Token ID for Proxmox (user@realm!tokenname). While root@pam!packer is common for
# automation, consider creating a dedicated user with minimal permissions for production use.
variable "proxmox_api_token_id" {
  type    = string
  default = "root@pam!packer"
}

# Sensitive authentication token. Marked as sensitive to prevent logging.
# Manage this token securely (e.g., via environment variables or a secrets management system).
variable "proxmox_api_token_secret" {
  type      = string
  default   = "YOUR-PREVIOUSLY-GENERATED-API-TOKEN-SECRET"
  sensitive = true
}
# Source Definition for the VM Template: Proxmox ISO builder
# This block defines the source for the VM image along with Proxmox-specific settings.
source "proxmox-iso" "ubuntu-server-plucky" {
  # Proxmox Connection Settings
  proxmox_url = var.proxmox_api_url
  # Token ID and secret for authentication to the Proxmox API.
  username = var.proxmox_api_token_id
  token    = var.proxmox_api_token_secret
  # (Optional) Skip TLS verification.
  # Set to 'true' if your Proxmox server uses a self-signed certificate and you
  # wish to bypass TLS verification. For production environments, configure proper
  # TLS certificates to ensure secure communications.
  insecure_skip_tls_verify = true
  # VM General Settings
  node                 = "myserver"             # The Proxmox node where the VM will be created.
  vm_id                = "108"                  # A unique identifier; make sure it does not conflict with existing VMs.
  vm_name              = "ubuntu-server-plucky" # Descriptive name of the VM, indicating the OS version.
  template_description = "Unattended Ubuntu Server plucky Image"

  # VM OS Settings
  # The ISO is expected to already exist in Proxmox's 'local' storage;
  # upload ubuntu-25.04-live-server-amd64.iso there first if you have not.
  iso_file         = "local:iso/ubuntu-25.04-live-server-amd64.iso" # Local ISO file used for installation.
  iso_storage_pool = "local" # Storage pool holding the ISO.
  unmount_iso      = true    # Unmount the ISO after the build completes to free up resources.
  # VM System Settings
  # Enable the QEMU Guest Agent for improved management capabilities and IP detection:
  # Packer needs the VM's IP and relies on the guest agent running inside the machine.
  qemu_agent = true
  cores      = "2"    # CPU cores allocated to the VM. Adjust based on application needs.
  memory     = "8192" # RAM in MiB (8 GB).

  # VM Hard Disk Settings
  scsi_controller = "virtio-scsi-pci" # Recommended for performance in virtual environments.
  disks {
    disk_size    = "80G"    # Disk size.
    format       = "raw"    # Disk format.
    storage_pool = "mypool" # Storage pool for the VM's disk.
    type         = "virtio" # Disk type; virtio is optimal for performance.
  }
  # VM Network Settings
  network_adapters {
    model    = "virtio" # Network adapter model; virtio is recommended for performance.
    bridge   = "vmbr0"  # Network bridge the VM connects to.
    firewall = "false"  # Disable the Proxmox firewall for this VM; ensure this aligns with your security policies.
  }

  # VM Cloud-Init Settings
  # Enables cloud-init for post-deployment customization, which is critical for dynamic environments.
  # Cloud-init handles the tasks that normally happen when a new instance is created: setting the
  # hostname, configuring network interfaces, creating user accounts, and even running shell scripts.
  cloud_init              = true
  cloud_init_storage_pool = "local-lvm" # Storage pool for the cloud-init data disk.
  # Packer Boot Commands
  # These commands initiate the autoinstall process. The autoinstall option allows for an
  # unattended, hands-off installation, which is essential for automation.
  # ds=datasource, s=location of the seed files.
  # If the autoinstall does not work, you can open the Console and do a manual installation.
  # Reference: www.virtualizationhowto.com, Proxmox Packer Template for Ubuntu 24.04
  boot_command = [
    "<esc><wait>",             # Exit the boot menu to access the boot options for editing.
    "e<wait>",                 # Enter edit mode for the boot command line.
    "<down><down><down><end>", # Navigate to the end of the kernel command line parameters.
    "<bs><bs><bs><bs><wait>",  # Backspace to remove existing options (e.g., 'quiet' or 'splash').
    # The actual parameters for the autoinstall process:
    "autoinstall ds=nocloud-net\\;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ ---<wait>",
    # This instructs the Ubuntu live server to start the autoinstall process. It forces
    # Subiquity to perform destructive actions without asking the user for confirmation
    # and to fetch user-data from the Packer HTTP server. The '---' at the end signals
    # the kernel to continue booting after processing the autoinstall configuration.
    "<f10><wait>" # Execute the modified boot command and start the installation.
  ]
  boot = "order=virtio0;ide2;net0"
  # Boot devices: virtio0 (root disk), ide2 (where Packer attaches the ISO/cloud-init CD-ROM drive),
  # and net0 (network boot). After creating the template, check Options > Boot Order: virtio0 and net0, both enabled.
  boot_wait = "5s" # Wait time before sending the boot commands.
  # Packer Autoinstall Settings
  http_directory = "http"
  # Packer starts an HTTP server serving the contents of the 'http' directory (relative to the
  # Packer template file), which allows Subiquity to fetch the cloud-init files remotely.
  # Subiquity is the Ubuntu Server installer; with autoinstall it uses a YAML file to fully
  # automate the installation process.
  # Packer serves all files from ~/homelab/http/ at the root URL, so user-data is available
  # at http://{{ .HTTPIP }}:{{ .HTTPPort }}/user-data.
  # The meta-data file can be empty but must be present, otherwise cloud-init will not start correctly.

  ssh_username = "nmaximo7"
  # IMPORTANT: This password must match the plaintext password corresponding to
  # the hashed password in your 'http/user-data' file.
  ssh_password = "YOUR-SSH-PASSWORD"
  # The OS install can take a while; this option gives it up to 20 minutes before Packer
  # gives up waiting for SSH (e.g., if the install stalls or networking never comes up).
  ssh_timeout = "20m"
}
# Build Definition to create the VM Template
# This block defines the overall build and the provisioning steps executed inside the VM.
build {
  name    = "ubuntu-server-plucky"
  sources = ["proxmox-iso.ubuntu-server-plucky"]

  # Shell Provisioner for Cleanup and Setup
  # Waits for cloud-init to finish, removes SSH host keys, and cleans up
  # unnecessary files; crucial for preparing the image for reuse.
  provisioner "shell" {
    inline = [
      "echo 'Autoinstall presumably done. Checking QEMU Guest Agent...'",
      "systemctl status qemu-guest-agent || true", # Check whether the QEMU Guest Agent is running.
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done", # Wait for cloud-init to complete.
      "sudo rm /etc/ssh/ssh_host_*",        # Remove SSH host keys so each clone regenerates its own.
      "sudo truncate -s 0 /etc/machine-id", # Reset the machine ID so each clone gets a new one on first boot.
      "sudo apt -y autoremove --purge",     # Remove unnecessary packages.
      "sudo apt -y clean",                  # Clean up the local package cache.
      "sudo apt -y autoclean",              # Remove obsolete packages from the cache.
      "sudo cloud-init clean",              # Clean cloud-init state so it runs again on the next boot.
      "sudo rm -f /etc/netplan/50-cloud-init.yaml", # Remove the default netplan config to avoid conflicts.
      "sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg", # Re-enable cloud-init networking.
      # ---------------- OPTIONAL ----------------
      "sudo apt update",        # Update package lists.
      "sudo apt upgrade -y",    # Upgrade installed packages.
      "sudo apt install -y jq", # Install jq for JSON processing.
      # ---------------- END OPTIONAL ----------------
      "sudo sync" # Ensure all changes are written to disk.
    ]
  }
  # Provisioning the VM Template for Cloud-Init Integration in Proxmox
  # This provisioner transfers the custom cloud-init configuration file (99-pve.cfg) to the VM.
  provisioner "file" {
    source      = "~/homelab/files/99-pve.cfg"
    destination = "/tmp/99-pve.cfg"
  }

  # This provisioner copies the configuration file to its final location in the VM.
  provisioner "shell" {
    inline = ["sudo cp /tmp/99-pve.cfg /etc/cloud/cloud.cfg.d/99-pve.cfg"]
  }

  # Instead of relying solely on cloud-init, we create a fallback netplan configuration.
  # This provisioner transfers the netplan configuration file (01-netcfg.yaml) to the VM.
  provisioner "file" {
    source      = "/home/nmaximo7/homelab/files/01-netcfg.yaml"
    destination = "/tmp/01-netcfg.yaml"
  }

  # This provisioner moves the netplan file into place and generates the configuration.
  provisioner "shell" {
    inline = [
      "sudo cp /tmp/01-netcfg.yaml /etc/netplan/01-netcfg.yaml",
      "sudo chmod 600 /etc/netplan/01-netcfg.yaml", # Restrict permissions; netplan warns about world-readable files.
      "sudo netplan generate"                       # Generate backend configuration from the YAML.
    ]
  }
}
~/homelab/http/user-data is the cloud-init configuration file that automates the setup of an Ubuntu Server during installation. It is written in YAML and consumed by Subiquity (via cloud-init) to configure the server automatically.
Running ip link show and ip a in your node's shell (e.g., myserver) shows that the physical NIC is named enp42s0 and is bridged into vmbr0. The guest VM sees only a virtual NIC provided by QEMU (virtio), which typically appears as ensX, enpXsY, or eth0 inside the VM, e.g., ens18.
root@myserver:/home/nmaximo7/homelab# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
root@myserver:/home/nmaximo7/homelab# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp42s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d8:43:ae:c0:4a:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.33/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::da43:aeff:fec0:4ac4/64 scope link
       valid_lft forever preferred_lft forever
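To confirm the NIC name from inside the guest (the fallback netplan file above assumes ens18), you do not even need iproute2; the kernel exposes every interface under sysfs:

```shell
# Every network interface the kernel knows about appears as an entry
# under /sys/class/net, even in minimal images without iproute2:
ls /sys/class/net

# Show only the predictable 'en*' names that netplan would match:
ls /sys/class/net | grep '^en' || echo "no en* interface found"
```

If the name differs from ens18 (e.g., enp6s18 on some machine types), adjust 01-netcfg.yaml accordingly, or rely on a wildcard match as the user-data file below does.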
# vi /home/nmaximo7/homelab/http/user-data
#cloud-config
# The '#cloud-config' header is crucial for cloud-init to recognize this as a valid cloud configuration.
autoinstall:
  version: 1 # Version of the autoinstall configuration format.
  identity:
    hostname: ubuntu-server-plucky # Sets the hostname for the VM.
    username: nmaximo7             # Username for the primary user.
    password: "$randomSalt$..."    # Hashed password for the 'nmaximo7' user.
    # IMPORTANT: Ensure this hash corresponds to the plaintext password used as
    # Packer's ssh_password in ubuntu-server-plucky.pkr.hcl.
    # Generate it with: mkpasswd --method=SHA-512 (or: openssl passwd -6)
  locale: en_US # Sets the system locale to US English.
  keyboard:     # Configures the keyboard to the American layout.
    layout: us
  ssh: # SSH configuration during installation.
    install-server: true        # Installs the OpenSSH server for remote access.
    allow-pw: true              # Allows password authentication for SSH logins.
    disable_root: false         # Permits root login via SSH (a security risk; use only for testing).
    ssh_quiet_keygen: true      # Suppresses output during SSH key generation for cleaner logs.
    allow_public_ssh_keys: true # Allows the use of public SSH keys for authentication.
  packages: # Packages to install during setup.
    - qemu-guest-agent # Essential for Packer to obtain the VM's IP and for Proxmox integration;
                       # it exchanges information between host and guest and can execute commands in the guest.
    - openssh-server   # Installs the OpenSSH server.
    - sudo             # Ensures sudo is available for privilege escalation.
    - curl             # Command-line tool for transferring data with URLs.
    - jq               # Utility for processing JSON data; matches the Packer provisioner.
    - net-tools        # Networking utilities like ifconfig and netstat.
  storage:
    layout:
      name: direct # Direct disk layout: use the entire disk without custom partitioning.
    swap:
      size: 0 # No swap will be created.
  # Network settings for the VM. Ensure the interface name matches the actual NIC name inside the VM.
  network:
    version: 2 # Network configuration format version.
    ethernets:
      en:
        match:
          name: en* # Dynamically matches any interface whose name starts with 'en'.
        dhcp4: true # Enable DHCP for automatic IP assignment.
  # Cloud-init user-data (this block runs AFTER the initial installation).
  # It is nested under 'autoinstall:' as required by Subiquity.
  user-data:
    package_update: true    # Update package lists during initial setup.
    package_upgrade: true   # Upgrade installed packages during initial setup.
    timezone: Europe/Madrid # Sets the system timezone to Madrid (Europe).
    disable_root: false     # If true, root login is disabled; if false, root login is allowed.
    users: # User accounts to create during installation.
      - name: YOUR-USER-NAME # Creates a user named YOUR-USER-NAME.
        groups: [adm, sudo]  # Adds the user to the adm and sudo groups, granting administrative privileges.
        lock-passwd: false   # Allows the user to log in with a password.
        sudo: ALL=(ALL) NOPASSWD:ALL # Passwordless sudo for all commands (convenient for testing; a security risk in production).
        shell: /bin/bash     # Sets the default shell for the user to Bash.
        passwd: $randomSalt$...
        # The hashed password for the user; keep it secure.
        # Generate a salted SHA-512 hash with: openssl passwd -6
        # Ensure this matches the password hash for 'nmaximo7' in the 'identity' section above.
        # Optional: uncomment and add your SSH public key for key-based authentication (recommended for production).
        # ssh_authorized_keys:
        #   - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ... your-ssh-key-here
    runcmd:
      # Commands to run after cloud-init completes its initial setup. They ensure the
      # QEMU Guest Agent is enabled and that the VM is properly configured.
      - systemctl enable qemu-guest-agent # Ensure the QEMU Guest Agent starts at boot; essential for VM management.
      - systemctl start qemu-guest-agent  # Start the agent immediately for real-time communication with the host.
      - echo "Network configuration applied and cloud-init finished." # Log message indicating completion.
      # Re-running the cloud-init stages (init, config, final) can help in edge cases where the
      # initial boot missed something. Otherwise, you can safely remove the next three lines.
      - cloud-init init --local          # Re-initialize cloud-init locally to process any additional local configuration.
      - cloud-init modules --mode=config # Apply the configuration modules from user data and other sources.
      - cloud-init modules --mode=final  # Run the final modules: last-minute configuration and cleanup tasks.
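Both password fields above must be hashes of the same plaintext you gave Packer as ssh_password. With a fixed salt, SHA-512 crypt hashing is deterministic, which makes the format easy to inspect; 'example' and 'saltsalt' below are obviously placeholders for your own password and a random salt:

```shell
# Generate a SHA-512 crypt hash with an explicit salt (normally you omit
# -salt and let openssl pick a random one):
HASH=$(openssl passwd -6 -salt saltsalt 'example')
echo "$HASH"

# The result always has the form $6$<salt>$<digest>; the leading $6$ marks SHA-512.
case "$HASH" in
  '$6$saltsalt$'*) echo "hash format OK" ;;
  *) echo "unexpected hash format" ;;
esac
```

Paste the full $6$... string into both the identity password and the user-data passwd fields, quoting it as the YAML examples above do.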
cd /home/nmaximo7/homelab
# Change directory to the homelab folder where the Packer template files are located.
packer init ubuntu-server-plucky.pkr.hcl
# 'packer init' downloads and installs the plugins listed in the template's
# required_plugins block. Run it once before validating or building.
packer validate -var-file='credentials.pkr.hcl' ubuntu-server-plucky.pkr.hcl
# The -var-file option specifies a variable file that contains sensitive information
# (like API tokens or credentials) used in the Packer template.
# Using a separate variable file helps keep private and sensitive data more organized and secure.
# ubuntu-server-plucky.pkr.hcl is the path to the Packer template file we want to validate,
# ensuring that all required variables are correctly defined and formatted.
packer build -var-file='credentials.pkr.hcl' ubuntu-server-plucky.pkr.hcl
# It executes the build process defined in the Packer template.
# It will create the VM template as specified in the template ubuntu-server-plucky.pkr.hcl...
# and provision it according to the defined settings, such as installing packages, configuring users, and applying network settings.
# This step may take quite some time as it involves downloading images and setting up the VM.
Once the Packer build is complete, in the left-hand sidebar of the Proxmox interface, navigate to the node (e.g., myserver) where your template is located, right-click on the template (ID 108, ubuntu-server-plucky), and select Clone from the context menu.
In the Clone dialog, set the Target Node (the node where the new VM should live, e.g., myserver), a unique VM ID, a descriptive Name, and the Mode (Full Clone creates an independent copy of the template and is recommended for most use cases). Then click Clone to start the process. The new VM will be created from your template, and you can customize it further as needed.
To rebuild the template repeatedly, you can wrap the whole delete-and-rebuild cycle in a small script, build_template.sh:
#!/bin/bash
# 1. Make it executable: chmod +x build_template.sh
# 2. Customize variables: change TEMPLATE_ID, PACKER_TEMPLATE, and CREDENTIALS_FILE at the top of the script to match your setup.
# 3. Run the script: ./build_template.sh
# Set variables (customize these as needed)
TEMPLATE_ID="108" # ID of the template to delete
PACKER_TEMPLATE="ubuntu-server-plucky.pkr.hcl" # Packer template file name
CREDENTIALS_FILE="credentials.pkr.hcl" # File containing sensitive credentials
# Go to the correct directory
cd /home/nmaximo7/homelab || exit 1 # Exit if the directory doesn't exist
# Validate Packer template
echo "Validating Packer template..."
packer validate -var-file="$CREDENTIALS_FILE" "$PACKER_TEMPLATE" || {
  echo "Packer validation failed. Exiting."
  exit 1 # Exit if validation fails
}
# The packer validate command is still included to catch syntax errors early. The || construct ensures that the script exits if validation fails.
echo "Checking for existing template..."
# 'qm' is Proxmox's CLI for QEMU VMs; 'qm status' fails if the VM ID does not exist.
if qm status "$TEMPLATE_ID" > /dev/null 2>&1; then
  echo "Template $TEMPLATE_ID found. Stopping it in case it is running..."
  qm stop "$TEMPLATE_ID" > /dev/null 2>&1 || true # A template is never running; ignore errors here.
  echo "Deleting template $TEMPLATE_ID..."
  qm destroy "$TEMPLATE_ID" --purge || {
    echo "Failed to delete template $TEMPLATE_ID. Exiting."
    exit 1 # Exit if deletion fails
  }
  echo "Template $TEMPLATE_ID deleted."
else
  echo "Template $TEMPLATE_ID not found. Skipping deletion."
fi
# Build the new Packer image
echo "Building Packer image..."
packer build -var-file="$CREDENTIALS_FILE" "$PACKER_TEMPLATE" || {
  echo "Packer build failed. Exiting."
  exit 1 # Exit if the build fails
}
echo "Packer build complete."
# Check the network interface name (run these inside the VM)
ip a # Verify whether the interface is indeed ens18 or has a different name.
# Check Netplan Configuration
ls -l /etc/netplan/ # List the Netplan configuration files.
cat /etc/netplan/*.yaml # Display the contents of the Netplan configuration files.
# If you see no YAML file or the file is missing a dhcp4: true line,
# the final system lacks a persistent network configuration.
sudo dhclient -v ens18
# You should see something like: DHCPOFFER of 192.168.1.56 (the IP offered by the router) from 192.168.1.1 (the router's IP).
# To check whether cloud-init is overwriting your configuration on the VM's first boot:
sudo journalctl -u cloud-init -b
# or:
sudo tail -n 100 /var/log/cloud-init.log