Proxmox is an open-source, Debian-based server platform for enterprise virtualization that uses a modified Ubuntu LTS kernel to deploy and manage multiple virtualized environments on a single bare-metal server.
Proxmox Virtual Environment (Proxmox VE) is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor (Kernel-based Virtual Machine, an open-source hypervisor that runs multiple virtual machines on a Linux system) and Linux Containers (LXC), providing a robust, scalable, and flexible environment for your virtual machines and containers: comprehensive backup and restore options for both VMs and containers, advanced networking features, built-in high-availability clustering, and more.
Docker is a software platform designed to make it easier to create, deploy, and run applications by using containers. It offers OS-level virtualization to deliver software in packages called containers.
To delete a VM using Proxmox’s GUI, log in to the Proxmox web console and select the VM. The VM has to be shut down or stopped first; if it is running, click the Shutdown button. Finally, select More > Remove, enter the VM ID, and click Remove.
Select both options, Purge from job configurations and Destroy unreferenced disks owned by guest. This ensures that backup/replication jobs referencing the VM and any disks it still owns are deleted as well.
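The same two options are available from the command line; a minimal sketch, assuming a VM with ID 104 and the boolean flags accepted by recent qm versions:
# Destroy VM 104, purge it from job configurations, and remove unreferenced disks
qm destroy 104 --purge 1 --destroy-unreferenced-disks 1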
To delete a VM disk via the GUI: select the virtual machine, open the Hardware tab, click the disk, and use the Remove button to instruct Proxmox to delete it. To delete a VM disk stored on ZFS from the command line, run zfs list
(it displays the names of the datasets and the values of their used, available, referenced, and mounted properties) to find the disk’s dataset, then zfs destroy -f [dataset_name]
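For example, a minimal sketch assuming a pool named mypool and Proxmox’s usual vm-<vmid>-disk-<n> dataset naming (adjust to whatever zfs list actually shows):
# List datasets; VM disks usually appear as <pool>/vm-<vmid>-disk-<n>
zfs list
# Destroy the dataset backing the disk (hypothetical name)
zfs destroy -f mypool/vm-104-disk-0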
Delete a virtual machine by ID. Find the VMID (virtual machine ID) in the VM list by running the command: cat /etc/pve/.vmlist
root@myserver:~# cat /etc/pve/.vmlist
{
"version": 87,
"ids": {
"104": { "node": "myserver", "type": "qemu", "version": 2 },
"201": { "node": "myserver", "type": "lxc", "version": 7 },
"103": { "node": "myserver", "type": "qemu", "version": 4 },
"102": { "node": "myserver", "type": "qemu", "version": 5 },
"101": { "node": "myserver", "type": "qemu", "version": 1 },
"107": { "node": "myserver", "type": "lxc", "version": 84 },
"100": { "node": "myserver", "type": "qemu", "version": 3 },
"106": { "node": "myserver", "type": "lxc", "version": 6 },
"105": { "node": "myserver", "type": "lxc", "version": 9 }}
}
The output lists information about the VMs created, including their IDs.
Shut down, stop, or destroy the VM by running qm shutdown [vmid], qm stop [vmid], or qm destroy [vmid]. For LXC containers, use pct instead of qm.
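For example, to remove the VM with ID 103 from the listing above (qm manages QEMU VMs; use pct for LXC containers):
qm shutdown 103 # ask the guest OS to shut down gracefully
qm stop 103 # force-stop it if the shutdown hangs
qm destroy 103 # delete the VM and its disks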
Sometimes you will also need to delete the guest’s configuration file manually: rm /etc/pve/qemu-server/107.conf (for a VM) or rm /etc/pve/lxc/107.conf (for an LXC container).
List all Docker containers: docker ps -a. Remove a container: docker rm ID. Run a container and remove it automatically when it exits: docker run --rm IMAGE. Remove all exited containers: docker rm $(docker ps -a -f status=exited -q).
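Alternatively, newer Docker releases ship a dedicated command for that last task:
docker container prune # removes all stopped containers after a confirmation prompt (add -f to skip it)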
To create an LXC container with a custom configuration script, first SSH into your Proxmox server. Then, I strongly recommend creating some kind of directory structure, e.g., /home/your-user/homelab/dockers. Finally, create the script with vi or nano: /home/your-user/homelab/dockers/ubuntu-desktop.sh:
#!/bin/bash
# Variables
CTID=301 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="YOUR-PASSWORD" # The root password for the container
HOSTNAME="ubuntu-desktop" # The hostname for the container
MEMORY=4096 # The amount of RAM allocated to the container
CORES=2 # The number of CPU cores assigned to the container
BRIDGE="vmbr0" # The network bridge for the container
IPADDRESS="192.168.1.50" # Desired static IP
GATEWAY="192.168.1.1" # LAN's gateway
CIDR="/24" # Adjust if not 255.255.255.0
# 1. Checks if a container with the specified CTID already exists;
# if it does, it stops and deletes it
if pct status $CTID &>/dev/null; then
echo "Container $CTID exists. Stopping the container."
pct stop $CTID
sleep 2 # Giving it a moment to stop gracefully
echo "Proceeding with deletion of container $CTID."
pct destroy $CTID
else
echo "Container $CTID does not exist."
fi
# 2. Checks if the specified Ubuntu template is already downloaded.
# If not, it downloads the template from the specified storage.
# pveam is the Proxmox VE Appliance Manager; pveam list $TEMPLATE_STORAGE returns a list of all templates on that storage
if ! pveam list $TEMPLATE_STORAGE | grep -q $OSTEMPLATE; then
echo "Downloading Ubuntu template..."
pveam download $TEMPLATE_STORAGE $OSTEMPLATE
else
echo "Ubuntu template already exists."
fi
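# Tip (an addition, not in the original script): to discover valid template names,
# refresh the index and list the available system templates:
# pveam update
# pveam available --section system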
# 3. Create a privileged Ubuntu container
# Create the container with 80 GB disk on mypool
pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
--storage $CONTAINER_STORAGE \
--rootfs $CONTAINER_STORAGE:$DISK_SIZE \
--password $PASSWORD \
--hostname $HOSTNAME \
--memory $MEMORY \
--cores $CORES \
--net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
--unprivileged 0 # Set to 1 for unprivileged containers
pct set $CTID --onboot 1
# We want our LXC container to start automatically on boot.
# 4. Configure network interface with DHCP
# pct set $CTID --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0
# pct set $CTID --nameserver 8.8.8.8
# 5. Enable the "keyctl" feature (required for Docker) and Docker nesting features (if needed)
pct set $CTID --features nesting=1,keyctl=1,mknod=1
# pct set modifies the configuration of an existing LXC container in Proxmox, identified by its container ID ($CTID).
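# Optional sanity check (an addition, not in the original script):
# print the resulting configuration to verify the features were applied
# pct config $CTID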
# Disable AppArmor for our container
# AppArmor is a Linux security module that restricts programs' capabilities with per-program profiles
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
echo "lxc.apparmor.profile: unconfined" >> /etc/pve/lxc/$CTID.conf
# 6. Start container
pct start $CTID
# Wait a moment for container to boot
sleep 5
# 7. Install Docker inside the Ubuntu container
pct exec $CTID -- apt-get update
pct exec $CTID -- apt-get install -y docker.io
# Allow passwordless sudo inside the container (the redirection must run inside the container, hence bash -c)
pct exec $CTID -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec $CTID -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
# Enable and start Docker
pct exec $CTID -- systemctl enable docker
pct exec $CTID -- systemctl start docker
# Test Docker
echo "Test Docker"
pct exec $CTID -- docker run hello-world
# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a
# If you see an IP, test connectivity
pct exec $CTID -- ping -c 3 google.com
# 8. Finally, run Kasm
# Create a directory /etc/local.d in the container for our script
pct exec $CTID -- mkdir -p /etc/local.d
# This block creates a systemd service file named startup-script.service in the /etc/systemd/system directory of the container.
echo "Creating the systemd service (startup-script.service)"
pct exec $CTID -- bash -c "cat > /etc/systemd/system/startup-script.service << 'EOF'
[Unit]
Description=Startup Script to run Kasm Docker container
After=network.target docker.service
# Specifies that this service should start after the network is available and after the Docker service is running.
[Service]
Type=simple
# Indicates that the service will run as a simple process.
ExecStart=/etc/local.d/docker-startup.sh
# Specifies the command to execute to start the service, which is the startup script created in the next step.
RemainAfterExit=true
# This tells systemd to consider the service active even after the ExecStart command has completed. This is useful if the script runs a container and exits while Docker continues to run.
[Install]
WantedBy=multi-user.target
# Indicates the target under which this service should be started, in this case, during the multi-user run level.
EOF"
# This block creates a script named docker-startup.sh in the /etc/local.d directory.
pct exec $CTID -- bash -c "cat > /etc/local.d/docker-startup.sh << 'EOF'
#!/bin/bash
# Ensure Docker is running.
# It checks if the Docker service is active. If not, it waits and checks again.
while ! systemctl is-active --quiet docker; do
echo 'Waiting for Docker...'
sleep 1
done
# If container is already running, stop/remove it (optional)
docker rm -f kasm_container 2>/dev/null || true
# docker rm -f kasm_container attempts to forcefully remove any existing container named kasm_container, if it exists.
# The output is redirected to /dev/null to suppress any error messages if the container doesn’t exist.
# Run container in detached mode (in the background, allowing you to continue using your terminal or performing other tasks) with the specified options:
# shared memory size, port mapping, and environment variable for the VNC password.
docker run -d --name kasm_container \
--shm-size=512m \
-p 6901:6901 \
-e VNC_PW=password \
kasmweb/ubuntu-focal-desktop:1.16.0
EOF"
Kasm sets up a VNC password inside the container. Then, upon connecting to https://Your-Host-IP:6901/, you are prompted for that password. To avoid authenticating as kasm_user with password as its password, you may want to replace the line -e VNC_PW=password \ with -e VNCOPTIONS=-disableBasicAuth \ instead.
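With that substitution, the docker run block in docker-startup.sh would read (same image, port, and shared-memory settings as above):
docker run -d --name kasm_container \
--shm-size=512m \
-p 6901:6901 \
-e VNCOPTIONS=-disableBasicAuth \
kasmweb/ubuntu-focal-desktop:1.16.0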
# Makes the docker-startup.sh script executable.
pct exec $CTID -- chmod +x /etc/local.d/docker-startup.sh
# It reloads the systemd manager configuration to recognize the new service created.
pct exec $CTID -- systemctl daemon-reload
# Enable (to start automatically at boot) and start the service
pct exec $CTID -- systemctl enable startup-script.service
pct exec $CTID -- systemctl start startup-script.service
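# Optional check (an addition, not in the original script): confirm the service is active
pct exec $CTID -- systemctl status startup-script.service --no-pager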
# Output success message
echo "Ubuntu-based LXC container $CTID created with Docker installed."
echo "Kasm Ubuntu Focal Desktop is running on port 6901. User : kasm_user, Password: password. https://192.168.1.50:6901/."
pct exec $CTID -- ip a