The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded. (Stephen Hawking)
People say that if you play Microsoft CDs backwards, you hear satanic things; but that’s nothing, because if you play them forwards, they install Windows.
Proxmox is a powerful, complete, open-source server platform for enterprise virtualization that lets you deploy and manage multiple virtualized environments on a single bare-metal server (under one unified roof). It is great for home labs: it is open source and free, and it tightly integrates the KVM hypervisor (Kernel-based Virtual Machine, an open-source hypervisor that allows you to run multiple virtual machines on a Linux system) with Linux Containers (LXC), providing a robust, scalable, and flexible environment for your virtual machines and containers. Highlights include comprehensive backup and restore options for both VMs and containers, advanced storage and networking features, built-in high-availability clustering, and web-based management.
Each LXC container runs a full Linux distribution but shares the host’s Linux kernel. This makes them very lightweight and efficient, therefore ideal for tasks that need a full Linux environment without the overhead of hardware emulation.
Kasm Workspaces is a container-based workspace streaming platform that provides virtual desktops (both Windows and Linux environments), browsers, and applications, delivered via Docker containers and accessible through any web browser.
It allows users to access their customized desktop environments and applications through a web browser from any location on any device. For example, you might run a Windows or Linux desktop session in a container and connect to it from any device’s browser. It uses container technology to create isolated environments, meaning that each workspace is independent, providing a secure and consistent experience.
Using Kasm is like throwing your laptop away after each use and using a different internet connection each time. (Kasm Workspaces)
In this article, we show how to cleanly remove VMs and containers, automate LXC creation with a Bash script, and set up Docker and deploy an image of Kasm that contains a browser-accessible Ubuntu Focal Desktop with various productivity and development apps installed. This image was designed to run natively within Kasm Workspaces, but it can also be deployed stand-alone and accessed through a web browser.
When destroying a guest from the web UI, two checkboxes appear: Purge from job configurations, which removes associated backup/replication jobs in /etc/pve/jobs.cfg, and Destroy unreferenced disks, which deletes orphaned storage owned by the guest. Select both options to remove the config-referenced disks (listed in the Hardware tab) as well as the unreferenced ones (orphaned storage requires ☑ Destroy unreferenced disks). This ensures that all jobs and disks owned by the virtual machine are deleted.
# Stop and delete VM 105 with purge
qm [stop | shutdown] 105 && qm destroy 105 --purge=true --destroy-unreferenced-disks=true
# stop: Immediately halts the VM, shutdown: Attempts a graceful shutdown.
# --purge=true: It specifies that the VM's configuration should be removed along with the VM itself.
# --destroy-unreferenced-disks=true: It ensures that any disks associated with the VM that are not referenced by other VMs (orphaned storage) are also deleted.
# Stop and delete CT 205 with purge
pct stop 205 && pct destroy 205 --purge=true --destroy-unreferenced-disks=true
# Manual disk removal (VM 105, disk scsi0)
qm set 105 --delete scsi0
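Before destroying a guest, it can help to see which disks on a storage actually belong to it. The snippet below is a sketch: it filters a pvesm list dump for volumes owned by VMID 105. The sample output is illustrative, not from a real system; on a Proxmox node you would pipe the real command instead.

```shell
# Sample 'pvesm list mypool' output (illustrative only); the 4th column is the owning VMID.
sample="mypool:vm-105-disk-0 raw 85899345920 105
mypool:vm-106-disk-0 raw 34359738368 106"

# Print only the volumes owned by VMID 105.
echo "$sample" | awk '$4 == 105 {print $1}'

# On a real node you would run: pvesm list mypool | awk '$4 == 105 {print $1}'
```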
Usage Instructions:
To create a CT with a custom configuration script, first SSH into your Proxmox server. I strongly recommend creating some kind of directory structure, e.g., /home/your-user/homelab/mydockers. Finally, create the script with vi or nano: /home/your-user/homelab/mydockers/ubuntu-desktop.sh:
#!/bin/bash
# PROXMOX LXC AUTOMATION SCRIPT (UBUNTU + DOCKER + KASM)
# Run the script, cd /home/nmaximo7/homelab/mydockers/
# chmod +x ubuntu-desktop.sh, ./ubuntu-desktop.sh
# echo "Access Kasm via: https://$IPADDRESS:6901/"
# echo "User: kasm_user, Password: password."
# --- Global Script Settings for Robustness ---
set -euo pipefail # Exit immediately if a command fails (-e), if an unset variable is referenced (-u), or if any command in a pipeline fails (pipefail).
# ========================
# CONFIGURATION SECTION
# ========================
CTID=301 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template (usually 'local')
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk (ZFS)
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="YOUR-PASSWORD" # The root password for the container
HOSTNAME="ubuntu-desktop" # A descriptive hostname for the container
MEMORY=4096 # The amount of RAM allocated to the container (4096 = 4GB)
CORES=2 # The number of CPU cores assigned to the container (2-4 for typical workloads)
BRIDGE="vmbr0" # The network bridge for the container (vmbr0 = default)
IPADDRESS="192.168.1.52" # Static IP/CIDR (use /24 for class C)
GATEWAY="192.168.1.1" # Default LAN gateway (router IP)
CIDR="/24" # Adjust if not 255.255.255.0
DNS_SERVERS="1.1.1.1 8.8.8.8" # Cloudflare + Google DNS
# ========================
# CONTAINER MANAGEMENT
# ========================
# Checks if a container with the specified CTID already exists;
# if it is running, it stops and deletes it
if pct status "$CTID" >/dev/null 2>&1; then
echo "Container $CTID exists."
# Check if the container is running
if [ "$(pct status "$CTID" | awk '{print $2}')" = "running" ]; then
echo "Stopping running container $CTID."
if ! pct stop "$CTID"; then
echo "Error: Failed to stop container $CTID. Please check manually."
exit 1
fi
sleep 5 # Give it a moment to stop gracefully
else
echo "Container $CTID is not running. No need to stop."
fi
echo "Proceeding with deletion of container $CTID."
if ! pct destroy "$CTID"; then
echo "Error: Failed to destroy container $CTID. Please check manually."
exit 1
fi
else
echo "Container $CTID does not exist. Proceeding with creation."
fi
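The `set -euo pipefail` line at the top of the script deserves a quick illustration, since pipefail is the least obvious of the three options. The demo below is self-contained and safe to run anywhere:

```shell
# Without pipefail, a pipeline's exit status is that of its LAST command,
# so a failure early in the pipeline is silently masked.
status_without=$(set +o pipefail; false | true; echo $?)
echo "without pipefail: $status_without"   # prints 0

# With pipefail, the pipeline fails if ANY stage fails.
status_with=$(set -o pipefail; false | true; echo $?)
echo "with pipefail: $status_with"         # prints 1
```

In our script this means a failure anywhere in, e.g., `pveam list ... | grep ...` is not swallowed by a successful grep stage.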
The line if pct status "$CTID" >/dev/null 2>&1; then requires a little more information: pct status exits with a non-zero status when the container does not exist, and the redirection discards its output, so only the exit code decides which branch runs.
This is an alternative that avoids hardcoding sensitive credentials within the script:
# --- Security-Sensitive Variables (DO NOT HARDCODE) ---
# Prompt for the root password for the LXC container.
# This is a critical security improvement to prevent plaintext exposure. (Recommendation from 2.1, 3.1)
read -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD
echo # Newline after password input
# Prompt for the Kasm VNC password.
# This avoids hardcoding sensitive credentials within the script. (Recommendation from 2.6)
read -s -p "Enter Kasm VNC password: " KASM_VNC_PASSWORD
echo # Newline after password input
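As a small extra safeguard (my own addition, not part of the original script), you can reject an empty password right after each prompt; validate_password is a hypothetical helper name:

```shell
# Hypothetical helper: fail fast when a prompted password is empty.
validate_password() {
  if [ -z "$1" ]; then
    echo "Error: password must not be empty." >&2
    return 1
  fi
}

validate_password "s3cret" && echo "password accepted"
validate_password "" || echo "empty password rejected"
```

Call it right after each read -s prompt, e.g., validate_password "$LXC_ROOT_PASSWORD" || exit 1.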
Let’s continue with our script:
# ========================
# TEMPLATE HANDLING
# ========================
# Checks if the specified Ubuntu template has already been downloaded to $TEMPLATE_STORAGE.
# pveam is the Proxmox VE Appliance Manager; 'pveam list $TEMPLATE_STORAGE' returns a list of all templates on that storage.
# If the template is missing, it downloads $OSTEMPLATE to $TEMPLATE_STORAGE.
if ! pveam list "$TEMPLATE_STORAGE" | grep -q "$OSTEMPLATE"; then
echo "Downloading Ubuntu template '$OSTEMPLATE'..."
if ! pveam download "$TEMPLATE_STORAGE" "$OSTEMPLATE"; then
echo "Error: Failed to download template '$OSTEMPLATE'. Please check template name and storage."
exit 1
fi
else
echo "Ubuntu template '$OSTEMPLATE' already exists."
fi
# ========================
# CONTAINER CREATION
# ========================
# Create a privileged Ubuntu container (--unprivileged 0)
# with an 80 GB disk on mypool
echo "Creating privileged container $CTID with hostname $HOSTNAME..."
if ! pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
--storage $CONTAINER_STORAGE \
--rootfs $CONTAINER_STORAGE:$DISK_SIZE \
--password $PASSWORD \
--hostname $HOSTNAME \
--memory $MEMORY \
--cores $CORES \
--net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
--nameserver "$DNS_SERVERS" \
    --unprivileged 0; then # Set --unprivileged to 1 for an unprivileged container
echo "Error: Failed to create LXC container $CTID. Check parameters and Proxmox logs."
exit 1
fi
# Post-configuration
pct set $CTID --onboot 1 # Start the container automatically at boot
pct set $CTID --features nesting=1,keyctl=1,mknod=1 # Features Docker needs inside LXC
# Note: pct set has no --start option; the container is started further below, after AppArmor is configured.
Use --password "$LXC_ROOT_PASSWORD" instead of the hardcoded $PASSWORD variable to keep the credential out of the script. pct set is used to modify the configuration of an existing LXC container in Proxmox, identified by its container ID ($CTID); the analogous command for a virtual machine would be qm set $VMID --ipconfig0 ip=$IPADDRESS$CIDR,gw=$GATEWAY (qm applies to VMs, pct to containers). For example, to configure the network interface with DHCP instead of a static address: pct set $CTID --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0. To change the DNS servers, quote the list so both addresses are passed as a single value: pct set $CTID --nameserver "1.1.1.1 8.8.8.8".
# Disable AppArmor for our container
# AppArmor is a Linux security module that enforces Mandatory Access Control (MAC) by restricting what programs can do at the kernel level
# The explicit disabling of AppArmor removes a vital kernel-level security layer, compounding the risk, especially in a privileged container (home lab).
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
touch /etc/pve/lxc/$CTID.conf
echo "lxc.apparmor.profile: unconfined" >> /etc/pve/lxc/$CTID.conf
# Start container
pct start $CTID
# Wait for the container to boot and for network connectivity.
# Instead of using an unreliable 'sleep 5', we use a robust ping loop.
echo "Waiting for container $CTID to boot and acquire network..."
MAX_ATTEMPTS=30 # Maximum number of attempts to check network connectivity
ATTEMPT=0 # Initialize attempt counter
# Loop to check network connectivity
while ! pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY" >/dev/null 2>&1; do
# This starts a loop that will continue executing as long as the condition following it evaluates to true.
# ! The loop will continue as long as the command fails, pct exec "$CTID" -- executes a command inside the LXC container with ID $CTID
# ping -c 1 -W 1 "$GATEWAY": sends a single ping (-c 1) to the specified gateway ("$GATEWAY") and sets a timeout of 1 second for the ping response (-W 1)
# >/dev/null 2>&1: it redirects both standard output (stdout) and standard error (stderr) to /dev/null, effectively discarding any output or error messages from the command.
if [[ $ATTEMPT -ge $MAX_ATTEMPTS ]]; then
echo "Error: Container $CTID did not acquire network connectivity within expected time."
exit 1 # Exit with an error if no connectivity
fi
echo "Network not ready, retrying in 2 seconds..."
sleep 2 # Wait for 2 seconds before retrying
ATTEMPT=$((ATTEMPT + 1)) # Increment attempt counter
done
echo "Container $CTID is online and network is active."
# ========================================
# INSTALL DOCKER INSIDE THE CONTAINER
# ========================================
# Based on the Docker docs: "Install Docker Engine on Ubuntu", using the apt repository method.
# Install prerequisite packages for Docker.
echo "--- Install Docker Engine ---"
echo "Updating container packages and installing prerequisites..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed."; exit 1; fi
# ! The block executes (prints an error message and exits with a status code of 1, indicating an error) if the command fails (returns a non-zero exit status).
# pct exec "$CTID" -- apt-get update: executes the apt-get update command inside the LXC container
if ! pct exec "$CTID" -- apt-get install -y ca-certificates curl gnupg lsb-release; then
echo "Error: Failed to install Docker prerequisites."
exit 1
fi
# Add Docker GPG key and apt repository inside the LXC
pct exec "$CTID" -- bash -c '
# Enables strict error handling
set -euo pipefail
# Creates the directory for keyrings if it doesn’t exist, ensuring the path is ready for the GPG key.
mkdir -p /etc/apt/keyrings
# Fetch and install Docker’s GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg
# Sets permissions to allow all users to read the GPG key file.
chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repo. $(lsb_release -cs) inserts the running release's codename;
# if Docker does not serve that release, substitute an LTS codename such as "noble" or "jammy" for broad support.
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
> /etc/apt/sources.list.d/docker.list
# Updates the package index to include the new Docker repository, making Docker packages available for installation.
apt-get update
'
You can verify the resulting repository entry from inside the container:
root@ubuntu-desktop:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu oracular stable
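Because Docker publishes apt repositories per release codename, $(lsb_release -cs) can occasionally point at a brand-new release that Docker does not serve yet. A small guard, sketched below with a hypothetical pick_repo_codename helper, falls back to an LTS codename (the LTS list and the "noble" fallback are assumptions; check download.docker.com for the current set):

```shell
# Map the running release's codename to one Docker's apt repo is known to serve.
pick_repo_codename() {
  case "$1" in
    noble|jammy|focal) echo "$1" ;;  # LTS releases with Docker repos (assumed list)
    *) echo "noble" ;;               # fall back to a recent LTS
  esac
}

pick_repo_codename jammy     # prints: jammy
pick_repo_codename oracular  # prints: noble
```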
# Install Docker packages (docker-ce, docker-ce-cli, containerd.io).
echo "Installing Docker CE and related packages..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed after adding Docker repo."; exit 1; fi
if ! pct exec "$CTID" -- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin; then
echo "Error: Failed to install Docker CE packages."
exit 1
fi
# Enable and start Docker service.
echo "Enabling and starting Docker service..."
if ! pct exec "$CTID" -- systemctl enable docker; then echo "Error: Failed to enable Docker."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start docker; then echo "Error: Failed to start Docker."; exit 1; fi
# Test Docker installation.
echo "Testing Docker installation with 'hello-world'..."
if ! pct exec "$CTID" -- docker run hello-world; then
echo "Error: Docker 'hello-world' test failed. Docker might not be fully functional."
exit 1
fi
echo "Docker installed and tested successfully."
# Passwordless sudo
# Passwordless sudo is a serious security risk in production, but it is generally
# acceptable for an isolated home-lab container.
# This configuration grants passwordless sudo access to the root user and any user within the sudo group.
# Note: a plain 'pct exec ... >> /etc/sudoers' would perform the redirection on the
# Proxmox HOST, not inside the container, so the append must run in a shell inside it:
pct exec "$CTID" -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec "$CTID" -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a
# Test connectivity.
echo "Testing external connectivity from container $CTID..."
if ! pct exec "$CTID" -- ping -c 3 google.com; then
echo "Warning: Container $CTID cannot reach google.com. Network connectivity issues may exist."
fi
# ========================================
# DEPLOY KASM WORKSPACES
# ========================================
echo "--- Deploy Kasm Workspaces ---"
# Create a directory /etc/local.d in the container for our script
echo "Creating /etc/local.d directory in container $CTID..."
if ! pct exec "$CTID" -- mkdir -p /etc/local.d; then echo "Error: Failed to create /etc/local.d."; exit 1; fi
# This block creates a systemd service file named startup-script.service in the /etc/systemd/system directory of the container.
echo "Creating the systemd service (startup-script.service)"
if ! pct exec "$CTID" -- bash -c "cat > /etc/systemd/system/startup-script.service << EOF_INNER
[Unit]
Description=Startup Script to run Kasm Docker container
After=network.target docker.service
# Specifies that this service should start after the network is available and after the Docker service is running.
[Service]
# Type=simple runs the service as a simple foreground process.
# (systemd does not allow inline comments after a value, so the comment goes on its own line.)
Type=simple
ExecStart=/etc/local.d/docker-startup.sh
# Specifies the command to execute to start the service, which is the startup script created in the next step.
RemainAfterExit=true
# This tells systemd to consider the service active even after the ExecStart command has completed. This is useful if the script runs a container and exits while Docker continues to run.
[Install]
WantedBy=multi-user.target
# Indicates the target under which this service should be started, in this case, during the multi-user run level.
EOF_INNER"; then
echo "Error: Failed to create systemd service file."
exit 1
fi
# This block creates a script named docker-startup.sh in the /etc/local.d directory.
echo "Creating docker-startup.sh script in container $CTID..."
if ! pct exec $CTID -- bash -c "cat > /etc/local.d/docker-startup.sh << EOF_INNER
#!/bin/bash
# Ensure Docker is running.
# It checks if the Docker service is active. If not, it waits and checks again.
while ! systemctl is-active --quiet docker; do
echo 'Waiting for Docker...'
sleep 1
done
# If container is already running, stop/remove it (optional)
docker rm -f kasm_container 2>/dev/null || true
# rm -f kasm_container attempts to forcefully remove any existing container named kasm_container, if it exists.
# The output is redirected to /dev/null to suppress any error messages if the container doesn’t exist.
# It sets up a new Docker container named kasm_container running an Ubuntu desktop environment with VNC access.
docker run -d --name kasm_container \
--shm-size=512m \
-p 6901:6901 \
-e VNC_PW=password \
kasmweb/ubuntu-focal-desktop:1.16.0
EOF_INNER"; then
echo "Error: Failed to create docker-startup.sh script."
exit 1
fi
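If you prefer declarative management over the systemd startup script, the same container can be described with Docker Compose. This is my own sketch, not part of the original setup; with restart: unless-stopped, Docker itself restarts the container at boot, making the systemd unit unnecessary:

```yaml
# Hypothetical docker-compose.yml equivalent of the docker run command above.
services:
  kasm:
    image: kasmweb/ubuntu-focal-desktop:1.16.0
    container_name: kasm_container
    shm_size: "512m"
    ports:
      - "6901:6901"
    environment:
      - VNC_PW=password
    restart: unless-stopped
```

Start it with docker compose up -d from the file's directory inside the container.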
The command systemctl is-active --quiet docker returns a success status if Docker is running (the flag --quiet suppresses output).
# Make the docker-startup.sh script executable.
echo "Making docker-startup.sh executable..."
if ! pct exec "$CTID" -- chmod +x /etc/local.d/docker-startup.sh; then echo "Error: Failed to set script executable."; exit 1; fi
# It reloads the systemd manager configuration to recognize the new service created.
echo "Reloading systemd daemon..."
if ! pct exec "$CTID" -- systemctl daemon-reload; then echo "Error: Failed to reload systemd daemon."; exit 1; fi
# Enable (to start automatically at boot) and start the service
if ! pct exec "$CTID" -- systemctl enable startup-script.service; then echo "Error: Failed to enable startup-script.service."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start startup-script.service; then echo "Error: Failed to start startup-script.service."; exit 1; fi
# ========================================
# CONFIGURE SSH ACCESS
# ========================================
echo "--- Configuring SSH Server ---"
# Install OpenSSH Server
if ! pct exec "$CTID" -- apt-get install -y openssh-server; then
echo "Error: Failed to install OpenSSH server."
exit 1
fi
# Configure SSH to allow root login with password (temporary, for key setup)
pct exec "$CTID" -- sed -i \
-e 's/^#PermitRootLogin.*/PermitRootLogin yes/' \
-e 's/^#PasswordAuthentication.*/PasswordAuthentication yes/' \
/etc/ssh/sshd_config
# Ensure SSH service starts on boot
pct exec "$CTID" -- systemctl enable ssh
# Restart SSH to apply changes
pct exec "$CTID" -- systemctl restart ssh
# Allow SSH port (22) in container firewall (if UFW is active)
pct exec "$CTID" -- bash -c ' \
if command -v ufw >/dev/null; then \
ufw allow 22; \
fi'
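Once your key is installed with ssh-copy-id (see the end of the article), it is good practice to turn password authentication back off. The snippet below demonstrates the sed edit on a temporary copy, so it is safe to run anywhere; on the real container you would apply it to /etc/ssh/sshd_config and restart ssh:

```shell
# Demonstrate the hardening edit on a temp file (safe to run anywhere).
cfg=$(mktemp)
printf 'PermitRootLogin yes\nPasswordAuthentication yes\n' > "$cfg"

# Disable password logins; key-based authentication keeps working.
sed -i -e 's/^PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"

grep '^PasswordAuthentication' "$cfg"   # prints: PasswordAuthentication no
rm -f "$cfg"
```

Inside the container: pct exec "$CTID" -- sed -i 's/^PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config followed by pct exec "$CTID" -- systemctl restart ssh.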
# Output success message
echo "--- Deployment Complete ---"
echo "Ubuntu-based LXC container $CTID created with Docker and Kasm installed."
echo "Kasm Ubuntu Focal Desktop should be running on port 6901."
echo "Access Kasm via: https://$IPADDRESS:6901/"
echo "Access VS Code: http://$IPADDRESS:8443/"
echo "User: kasm_user, Password: password."
pct exec "$CTID" -- ip a
echo "Script finished successfully."
Verification and troubleshooting: open https://192.168.1.52:6901/ in a browser and log in with kasm_user and the VNC password. Use pct enter ContainerID to open a shell inside the container, docker ps -a to check the container status, and docker logs kasm_container to check the container logs; netstat -tulpn | grep 6901 verifies that Kasm is listening on port 6901. Run ssh-copy-id root@192.168.1.52 to install your SSH key, then try logging into the machine with ssh root@192.168.1.52. If the hostname resolves (e.g., via an /etc/hosts entry), ssh root@myubuntuct also works; for convenience, create an alias: alias myubuntuct="ssh root@192.168.1.52".