
Automating Proxmox LXC Container Creation with Docker and Ubuntu Desktop

The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded, Stephen Hawking

People say that if you play Microsoft CD’s backwards, you hear satanic things, but that’s nothing, because if you play them forwards, they install Windows.


Proxmox is a powerful, complete, open-source server platform for enterprise virtualization that lets you deploy and manage multiple virtualized environments on a single bare-metal server, under one unified roof. It is great for home labs: it is free and open source, and it tightly integrates the KVM hypervisor (Kernel-based Virtual Machine, an open-source hypervisor that allows you to run multiple virtual machines on a Linux system) with Linux Containers (LXC), providing a robust, scalable, and flexible environment for your virtual machines and containers, e.g., comprehensive backup and restore options for both VMs and containers, advanced storage and networking features, built-in high-availability clustering, and web-based management.

Each LXC container runs a full Linux distribution but shares the host’s Linux kernel. This makes them very lightweight and efficient, therefore ideal for tasks that need a full Linux environment without the overhead of hardware emulation.
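
You can observe this kernel sharing directly: the kernel version reported inside any container matches the host's. A quick hedged check (CT 301 is the container ID this article uses later; any existing CT works):

uname -r                  # on the Proxmox host
pct exec 301 -- uname -r  # inside the container: prints the same kernel version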


Kasm Workspaces is a container-based workspace streaming platform that provides virtual desktops (supporting both Windows and Linux environments), browsers, and applications, delivered via Docker containers and accessible through any web browser.

It allows users to access their customized desktop environments and applications through a web browser from any location on any device. For example, you might run a Windows or Linux desktop session in a container and connect to it from any device’s browser. It uses container technology to create isolated environments, meaning that each workspace is independent, providing a secure and consistent experience.

Using Kasm is like throwing your laptop away after each use and using a different internet connection each time, KASM WORKSPACES.

In this article, we show how to cleanly remove VMs and containers, automate LXC creation with a Bash script, set up Docker, and deploy a Kasm image that contains a browser-accessible Ubuntu Focal desktop with various productivity and development apps installed. This image was designed to run natively within Kasm Workspaces, but it can also be deployed stand-alone and accessed through a web browser.

Deleting Proxmox VE Virtual Machines (VMs) & LXC Containers (CTs)

  1. To delete a VM or LXC container using Proxmox's web GUI, log in to the Proxmox web console (https://<proxmox-ip>:8006), expand your Proxmox cluster, and select the relevant node and VM/CT in the navigation pane on the left. If the resource is running, first stop it by clicking Shutdown (graceful shutdown) or Stop (force stop if unresponsive). Then click More (upper toolbar), Remove, and confirm by entering the ID. Proxmox will remove the configuration and data; two checkboxes control the cleanup: Purge from job configurations (removes associated backup/replication jobs in /etc/pve/jobs.cfg) and Destroy unreferenced disks.

    Select both options to purge the guest from job configurations and to destroy both config-referenced disks (listed in the Hardware tab) and unreferenced disks (orphaned storage, which requires ☑ Destroy unreferenced disks). This ensures that the jobs and disks owned by the virtual machine are deleted.

  2. By default, the Proxmox web GUI remove/destroy action deletes the VM/CT's disks as well. Manual disk deletion (delete a specific disk without removing the entire VM/CT): navigate to the VM/CT, open the Hardware tab, select the target disk (e.g., scsi0, virtio1), and click Remove to instruct Proxmox to delete the disk.
  3. Using the CLI. On the command line, you can delete VMs and containers with qm (for QEMU VMs) and pct (for LXC CTs). First shut down or stop the instance, then issue the destroy command.
# Stop and delete VM 105 with purge
qm stop 105 && qm destroy 105 --purge=true --destroy-unreferenced-disks=true
# stop: immediately halts the VM; use 'qm shutdown 105' instead for a graceful shutdown.
# --purge=true: It specifies that the VM's configuration should be removed along with the VM itself.
# --destroy-unreferenced-disks=true: It ensures that any disks associated with the VM that are not referenced by other VMs (orphaned storage) are also deleted.

# Stop and delete CT 205 with purge
pct stop 205 && pct destroy 205 --purge=true --destroy-unreferenced-disks=true

# Manual disk removal (VM 105, disk scsi0): detaches the disk, which then shows up
# as 'unusedN'; remove the unused entry (Hardware tab) to actually delete the data
qm set 105 --delete scsi0
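
If you regularly tear down disposable guests, the same commands can be wrapped in a loop. A minimal, hedged sketch for containers (the IDs are placeholders; double-check them before running, and use qm for VMs):

# Batch-delete a set of test containers (example IDs, verify before running!)
for id in 201 202 203; do
    pct stop "$id" 2>/dev/null || true   # ignore the error if it is already stopped
    pct destroy "$id" --purge=true --destroy-unreferenced-disks=true
done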

Automating LXC Container Creation with Bash

Usage Instructions:

  1. Save the script as ubuntu-desktop.sh
  2. Make it executable: chmod +x ubuntu-desktop.sh
  3. Run it: ./ubuntu-desktop.sh
  4. Access Kasm at: https://192.168.1.52:6901/ (the static IP and port configured in the script)

To create a CT with a custom configuration script, first SSH into your Proxmox server. I strongly recommend creating some kind of directory structure, e.g., /home/your-user/homelab/mydockers. Finally, create the script with vi or nano: /home/your-user/homelab/mydockers/ubuntu-desktop.sh:

#!/bin/bash
# PROXMOX LXC AUTOMATION SCRIPT (UBUNTU + DOCKER + KASM)
# Run the script, cd /home/nmaximo7/homelab/mydockers/
# chmod +x ubuntu-desktop.sh, ./ubuntu-desktop.sh
# echo "Access Kasm via: https://$IPADDRESS:6901/"
# echo "User: kasm_user, Password: password."
# --- Global Script Settings for Robustness ---
set -euo pipefail # Exit immediately if a command exits with a non-zero status, or if a variable is unset, or if a command in a pipeline fails.

# ========================
# CONFIGURATION SECTION
# ========================
CTID=301 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template (usually 'local')
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk (ZFS)
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="YOUR-PASSWORD" # The root password for the container
HOSTNAME="ubuntu-desktop" # A descriptive hostname for the container
MEMORY=4096 # The amount of RAM allocated to the container (4096 = 4GB)
CORES=2 # The number of CPU cores assigned to the container (2-4 for typical workloads)
BRIDGE="vmbr0" # The network bridge for the container (vmbr0 = default)
IPADDRESS="192.168.1.52" # Static IP/CIDR (use /24 for class C)
GATEWAY="192.168.1.1" # Default LAN gateway (router IP)
CIDR="/24" # Adjust if not 255.255.255.0
DNS_SERVERS="1.1.1.1 8.8.8.8" # Cloudflare + Google DNS

# ========================
# CONTAINER MANAGEMENT
# ========================
# Checks if a container with the specified CTID already exists;
# if it is running, it stops and deletes it
if pct status "$CTID" >/dev/null 2>&1; then
    echo "Container $CTID exists."
    # Check if the container is running
    if [ "$(pct status "$CTID" | awk '{print $2}')" = "running" ]; then
        echo "Stopping running container $CTID."
        if ! pct stop "$CTID"; then
            echo "Error: Failed to stop container $CTID. Please check manually."
            exit 1
        fi
        sleep 5 # Give it a moment to stop gracefully
    else
        echo "Container $CTID is not running. No need to stop."
    fi

    echo "Proceeding with deletion of container $CTID."
    if ! pct destroy "$CTID"; then
        echo "Error: Failed to destroy container $CTID. Please check manually."
        exit 1
    fi
else
    echo "Container $CTID does not exist. Proceeding with creation."
fi

The line if pct status "$CTID" >/dev/null 2>&1; then deserves a little more explanation:

  1. pct status "$CTID": checks the status of the container with ID $CTID. It reports whether the container is running or stopped, and fails if the container does not exist (a sample of its output follows this list).
  2. >/dev/null 2>&1: redirects both the standard output (stdout) and standard error (stderr) of the command to /dev/null, which effectively discards any output, so we won't see any messages or errors from the pct status command.
  3. if … then: checks whether the container exists without displaying any output. If it does exist, the script proceeds to stop and delete it.
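
For reference, here is roughly what pct status prints (a hedged example; the wording can vary slightly between Proxmox versions):

pct status 301
# status: running          (or "status: stopped")
# A non-existent CTID prints a "Configuration file ... does not exist" error
# and returns a non-zero exit code, which is exactly what the 'if' relies on.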

Here is an alternative that avoids hardcoding sensitive credentials within the script:

# --- Security-Sensitive Variables (DO NOT HARDCODE) ---
# Prompt for the root password for the LXC container.
# This is a critical security improvement to prevent plaintext exposure.
read -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD
echo # Newline after password input

# Prompt for the Kasm VNC password.
# This avoids hardcoding sensitive credentials within the script.
read -s -p "Enter Kasm VNC password: " KASM_VNC_PASSWORD
echo # Newline after password input
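
To guard against typos in these hidden prompts, you could extend the idea with a confirmation loop; a minimal sketch:

# Sketch: re-prompt until the password is non-empty and both entries match
while true; do
    read -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD; echo
    read -s -p "Confirm password: " CONFIRM; echo
    [ -n "$LXC_ROOT_PASSWORD" ] && [ "$LXC_ROOT_PASSWORD" = "$CONFIRM" ] && break
    echo "Passwords were empty or did not match; please try again."
done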

Let’s continue with our script:

# ========================
# TEMPLATE HANDLING
# ========================
# Checks if the specified Ubuntu template has already been downloaded to $TEMPLATE_STORAGE.
# pveam is the Proxmox VE Appliance Manager; 'pveam list $TEMPLATE_STORAGE' returns all templates on that storage.
# If the template is missing, download $OSTEMPLATE to $TEMPLATE_STORAGE.
if ! pveam list "$TEMPLATE_STORAGE" | grep -q "$OSTEMPLATE"; then
    echo "Downloading Ubuntu template '$OSTEMPLATE'..."
    if ! pveam download "$TEMPLATE_STORAGE" "$OSTEMPLATE"; then
        echo "Error: Failed to download template '$OSTEMPLATE'. Please check template name and storage."
        exit 1
    fi
else
    echo "Ubuntu template '$OSTEMPLATE' already exists."
fi
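
If the download fails because the template name has rotated to a newer build, refresh the catalog and see what is actually available; a hedged helper (template names change over time, so check the output):

pveam update                                    # refresh the appliance template catalog
pveam available --section system | grep ubuntu  # list downloadable Ubuntu templates
pveam list local                                # templates already present on 'local' storage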

# ========================
# CONTAINER CREATION
# ========================
# Create a privileged Ubuntu container (--unprivileged 0); Docker nesting is simpler in a privileged CT, at some security cost
# Create the container with an 80 GB disk on mypool
echo "Creating privileged container $CTID with hostname $HOSTNAME..."

if ! pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
    --storage $CONTAINER_STORAGE \
    --rootfs $CONTAINER_STORAGE:$DISK_SIZE \
    --password $PASSWORD \
    --hostname $HOSTNAME \
    --memory $MEMORY \
    --cores $CORES \
    --net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
    --nameserver "$DNS_SERVERS" \
    --unprivileged 0; then # Set to 1 for unprivileged containers (more secure)
    echo "Error: Failed to create LXC container $CTID. Check parameters and Proxmox logs."
    exit 1
fi

# Post-configuration
pct set $CTID --onboot 1
pct set $CTID --features nesting=1,keyctl=1,mknod=1
pct set $CTID --start 1
  1. pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE: Creates a new container with the specified ID ($CTID) using the template located at $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE.
  2. --storage $CONTAINER_STORAGE: Specifies the storage location for the container.
  3. --rootfs $CONTAINER_STORAGE:$DISK_SIZE: Sets the root filesystem location and size for the container.
  4. --password $PASSWORD: Sets the root password for the container.

    Use --password "$LXC_ROOT_PASSWORD" to avoid hardcoding sensitive credentials within the script.

  5. --hostname $HOSTNAME: Assigns a hostname to the container for easy identification and management.
  6. --memory $MEMORY: Allocates the specified amount of RAM to the container.
  7. --cores $CORES: Specifies the number of CPU cores to allocate to the container.
  8. --net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY: Configures the network interface with: name=eth0 (interface name), bridge=$BRIDGE (network bridge to connect to), and ip=$IPADDRESS$CIDR (IP address in CIDR notation for the container). To change it later, use pct set $CTID --net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY (note: --ipconfig0 belongs to qm and cloud-init VMs, not to containers).

    Configure the network interface with DHCP instead: pct set $CTID --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0. pct set is used to modify the configuration of an existing LXC container in Proxmox, identified by its container ID ($CTID).

  9. --nameserver "$DNS_SERVERS": Sets the DNS servers for the container. Alternatively, pct set $CTID --nameserver "1.1.1.1 8.8.8.8" (quote the list so both servers are passed as one argument).
  10. --unprivileged 0: Specifies whether the container is unprivileged. Set to 1 for unprivileged containers (more secure).
  11. pct set $CTID --features nesting=1,keyctl=1,mknod=1: Enables specific features: nesting=1 enables nesting, allowing you to run containers (like Docker containers) inside this LXC container; keyctl=1 enables the keyctl feature, allowing use of the kernel key management facility, which can be necessary for applications that rely on keyring functionality and environments that need secure credential storage; mknod=1 allows the use of the mknod command within the container, which is used to create special files (like device files) in the filesystem.
  12. pct set $CTID --onboot 1: Configures the container to start automatically on boot.
  13. --start 1: Starts the container immediately after creation. A quick verification snippet follows this list.
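Once the container is created and configured, it is worth a quick sanity check; hedged sample output (the exact keys depend on your settings):

pct config 301
# arch: amd64
# cores: 2
# features: keyctl=1,mknod=1,nesting=1
# hostname: ubuntu-desktop
# memory: 4096
# net0: name=eth0,bridge=vmbr0,gw=192.168.1.1,ip=192.168.1.52/24,...
# onboot: 1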
# Disable AppArmor for our container
# AppArmor is a Linux security module that enforces Mandatory Access Control (MAC) by restricting what programs can do at the kernel level
# The explicit disabling of AppArmor removes a vital kernel-level security layer, compounding the risk, especially in a privileged container (home lab).
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
touch /etc/pve/lxc/$CTID.conf
echo "lxc.apparmor.profile: unconfined" >> /etc/pve/lxc/$CTID.conf

# Start container
pct start $CTID

# Wait for the container to boot and for network connectivity.
# Instead of using an unreliable 'sleep 5', we use a robust ping loop.
echo "Waiting for container $CTID to boot and acquire network..."
MAX_ATTEMPTS=30 # Maximum number of attempts to check network connectivity
ATTEMPT=0  # Initialize attempt counter

# Loop to check network connectivity
while ! pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY" >/dev/null 2>&1; do
# This starts a loop that will continue executing as long as the condition following it evaluates to true.
# ! The loop will continue as long as the command fails, pct exec "$CTID" -- executes a command inside the LXC container with ID $CTID
# ping -c 1 -W 1 "$GATEWAY": sends a single ping (-c 1) to the specified gateway ("$GATEWAY") and sets a timeout of 1 second for the ping response (-W 1)
# >/dev/null 2>&1: it redirects both standard output (stdout) and standard error (stderr) to /dev/null, effectively discarding any output or error messages from the command.
    if [[ $ATTEMPT -ge $MAX_ATTEMPTS ]]; then
        echo "Error: Container $CTID did not acquire network connectivity within expected time."
        exit 1 # Exit with an error if no connectivity
    fi
    echo "Network not ready, retrying in 2 seconds..."
    sleep 2 # Wait for 2 seconds before retrying
    ATTEMPT=$((ATTEMPT + 1)) # Increment attempt counter
done
echo "Container $CTID is online and network is active."

# ========================================
# INSTALL DOCKER INSIDE THE CONTAINER
# ========================================
# Based on the Docker docs, "Install Docker Engine on Ubuntu", using the apt repository method
# Install prerequisite packages for Docker.
echo "--- Install Docker Engine ---"
echo "Updating container packages and installing prerequisites..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed."; exit 1; fi
# ! The block executes (prints an error message and exits with status code 1) if the command fails (returns a non-zero exit status).
# pct exec "$CTID" -- apt-get update: executes the apt-get update command inside the LXC container
if ! pct exec "$CTID" -- apt-get install -y ca-certificates curl gnupg lsb-release; then
    echo "Error: Failed to install Docker prerequisites."
    exit 1
fi
# Add Docker GPG key and apt repository inside the LXC
pct exec "$CTID" -- bash -c '
  # Enables strict error handling
  set -euo pipefail

  # Creates the directory for keyrings if it doesn’t exist, ensuring the path is ready for the GPG key.
  mkdir -p /etc/apt/keyrings

  # Fetch and install Docker’s GPG key
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
    | gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg

  # Sets permissions to allow all users to read the GPG key file.
  chmod a+r /etc/apt/keyrings/docker.gpg

  # Add the Docker repo; $(lsb_release -cs) resolves the release codename automatically.
  # If Docker does not publish packages for your (non-LTS) release, hardcode a supported codename such as "jammy" (22.04 LTS).
  echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/etc/apt/keyrings/docker.gpg] \
    https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
    > /etc/apt/sources.list.d/docker.list

  # Updates the package index to include the new Docker repository, making Docker packages available for installation.
  apt-get update
'
  1. if !: the block executes (prints an error message if the key download or conversion fails and exits the script with status code 1) if the command fails (returns a non-zero exit status).
  2. pct exec "$CTID" -- bash -c '…': executes the enclosed commands inside the LXC container identified by $CTID using a new Bash shell.
  3. curl -fsSL https://download.docker.com/linux/ubuntu/gpg: fetches the Docker GPG key from the specified URL.
  4. |: takes the output of the curl command and passes it as input to the next command (gpg --dearmor).
  5. gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg: --dearmor converts the ASCII-armored key (a printable-character format intended for easy sharing via text media and readable by humans) to binary format (a more efficient way to store the key). -o /etc/apt/keyrings/docker.gpg specifies the output file where the converted key will be saved. --batch runs in batch mode, with no interactive prompts. --yes assumes "yes" to all prompts.
  6. echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list: constructs the line to be added to the APT source list.
    deb: Indicates a Debian package repository.
    [arch=$(dpkg --print-architecture): Specifies the architecture (e.g., amd64) dynamically.
    signed-by=/etc/apt/keyrings/docker.gpg]: Points to the GPG key for package verification.
    https://download.docker.com/linux/ubuntu: The base URL for the Docker repository.
    $(lsb_release -cs): Dynamically retrieves the distribution codename of the Ubuntu release (e.g., focal, jammy).
    stable: Specifies the repository's release channel.
    > /etc/apt/sources.list.d/docker.list: Redirects the output to create a new file that contains the Docker repository configuration.

After the block runs, the repository file inside the container should look similar to this (the codename oracular corresponds to Ubuntu 24.10):

root@ubuntu-desktop:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu oracular stable
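
You can also confirm that APT now resolves Docker packages from the new repository; a hedged check (version strings will differ on your system):

pct exec 301 -- apt-cache policy docker-ce
# docker-ce:
#   Installed: (none)
#   Candidate: 5:28.x... (varies)
#   Version table should list https://download.docker.com/linux/ubuntu oracular/stable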

# Install Docker packages (docker-ce, docker-ce-cli, containerd.io).
echo "Installing Docker CE and related packages..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed after adding Docker repo."; exit 1; fi
if ! pct exec "$CTID" -- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin; then
    echo "Error: Failed to install Docker CE packages."
    exit 1
fi

# Enable and start Docker service.
echo "Enabling and starting Docker service..."
if ! pct exec "$CTID" -- systemctl enable docker; then echo "Error: Failed to enable Docker."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start docker; then echo "Error: Failed to start Docker."; exit 1; fi

# Test Docker installation.
echo "Testing Docker installation with 'hello-world'..."
if ! pct exec "$CTID" -- docker run hello-world; then
    echo "Error: Docker 'hello-world' test failed. Docker might not be fully functional."
    exit 1
fi
echo "Docker installed and tested successfully."

# Passwordless sudo
# Warning: passwordless sudo is a serious security risk on production systems,
# but it is generally acceptable in an isolated home-lab LXC container.
# This configuration grants passwordless sudo to the root user and any user in the sudo group.
# Note: the redirection must run INSIDE the container (via bash -c); otherwise the
# host shell would interpret '>>' and append to the Proxmox host's /etc/sudoers instead.
pct exec $CTID -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec $CTID -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a

# Test connectivity.
echo "Testing external connectivity from container $CTID..."
if ! pct exec "$CTID" -- ping -c 3 google.com; then
    echo "Warning: Container $CTID cannot reach google.com. Network connectivity issues may exist."
fi

# ========================================
# DEPLOY KASM WORKSPACES
# ========================================
echo "--- Deploy Kasm Workspaces ---"

# Create a directory /etc/local.d in the container for our script
echo "Creating /etc/local.d directory in container $CTID..."
if ! pct exec "$CTID" -- mkdir -p /etc/local.d; then echo "Error: Failed to create /etc/local.d."; exit 1; fi

# This block creates a systemd service file named startup-script.service in the /etc/systemd/system directory of the container.
echo "Creating the systemd service (startup-script.service)"
if ! pct exec "$CTID" -- bash -c "cat > /etc/systemd/system/startup-script.service << EOF_INNER
[Unit]
Description=Startup Script to run Kasm Docker container
After=network.target docker.service
# Specifies that this service should start after the network is available and after the Docker service is running.

[Service]
Type=simple
# Indicates that the service will run as a simple (foreground) process.
# Note: systemd does not allow inline comments after a directive; comments must be on their own lines.
ExecStart=/etc/local.d/docker-startup.sh
# Specifies the command to execute to start the service, which is the startup script created in the next step.
RemainAfterExit=true
# This tells systemd to consider the service active even after the ExecStart command has completed. This is useful if the script runs a container and exits while Docker continues to run.

[Install]
WantedBy=multi-user.target
# Indicates the target under which this service should be started, in this case, during the multi-user run level.
EOF_INNER"; then
    echo "Error: Failed to create systemd service file."
    exit 1
fi

# This block creates a script named docker-startup.sh in the /etc/local.d directory.
echo "Creating docker-startup.sh script in container $CTID..."
if ! pct exec "$CTID" -- bash -c "cat > /etc/local.d/docker-startup.sh << EOF_INNER
#!/bin/bash
# Ensure Docker is running.
# It checks if the Docker service is active. If not, it waits and checks again.
while ! systemctl is-active --quiet docker; do
    echo 'Waiting for Docker...'
    sleep 1
done

# If container is already running, stop/remove it (optional)
docker rm -f kasm_container 2>/dev/null || true
# rm -f kasm_container attempts to forcefully remove any existing container named kasm_container, if it exists.
# The output is redirected to /dev/null to suppress any error messages if the container doesn’t exist.

# It sets up a new Docker container named kasm_container running an Ubuntu desktop environment with VNC access.
docker run -d --name kasm_container \
  --shm-size=512m \
  -p 6901:6901 \
  -e VNC_PW=password \
  kasmweb/ubuntu-focal-desktop:1.16.0

EOF_INNER"; then
    echo "Error: Failed to create docker-startup.sh script."
    exit 1
fi
  1. cat > /etc/local.d/docker-startup.sh: Redirects the following input to create a new file named docker-startup.sh in the /etc/local.d/ directory.
  2. << EOF_INNER: Starts a here-document that allows multi-line input until the EOF_INNER marker is encountered.
  3. while ! systemctl is-active --quiet docker; do: This loop checks whether the Docker service is active; systemctl is-active --quiet docker returns a success status if Docker is running (the --quiet flag suppresses output).
  4. docker run: creates and starts a new container from a specified image.
  5. -d: Runs the container in detached mode, meaning it will run in the background.
  6. --name kasm_container: assigns a name (kasm_container) to the container.
  7. --shm-size=512m: sets the size of the container's shared memory to 512 megabytes.
  8. -p 6901:6901: Maps port 6901 of the host to port 6901 of the container. This allows external access to the service (VNC) running on that port inside the container.
  9. -e VNC_PW=password: sets the VNC password inside the container. Upon connecting to https://<Your-Host-IP>:6901/, you are prompted for that password (user: kasm_user). To avoid entering kasm_user as the user and password as its password, you may want to replace the line -e VNC_PW=password \ with -e VNCOPTIONS=-disableBasicAuth \ instead. A variant that uses the prompted password follows this list.
  10. kasmweb/ubuntu-focal-desktop:1.16.0: specifies the Docker image to use for the container.
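If you used the read -s prompts shown earlier, you can inject the prompted value instead of a hardcoded password. A hedged variant of the docker run command (because the surrounding heredoc sits inside double quotes, $KASM_VNC_PASSWORD is expanded on the host when the file is written; the /home/kasm-user mount for a persistent profile is an assumption based on the image's default user, so verify it against the kasmweb documentation):

docker run -d --name kasm_container \
  --shm-size=512m \
  -p 6901:6901 \
  -e VNC_PW=$KASM_VNC_PASSWORD \
  -v /opt/kasm-profile:/home/kasm-user \
  kasmweb/ubuntu-focal-desktop:1.16.0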
# Makes the docker-startup.sh script executable.
echo "Making docker-startup.sh executable..."
if ! pct exec "$CTID" -- chmod +x /etc/local.d/docker-startup.sh; then echo "Error: Failed to set script executable."; exit 1; fi

# It reloads the systemd manager configuration to recognize the new service created.
echo "Reloading systemd daemon..."
if ! pct exec "$CTID" -- systemctl daemon-reload; then echo "Error: Failed to reload systemd daemon."; exit 1; fi

# Enable (to start automatically at boot) and start the service
if ! pct exec "$CTID" -- systemctl enable startup-script.service; then echo "Error: Failed to enable startup-script.service."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start startup-script.service; then echo "Error: Failed to start startup-script.service."; exit 1; fi

# ========================================
# CONFIGURE SSH ACCESS
# ========================================
echo "--- Configuring SSH Server ---"

# Install OpenSSH Server
if ! pct exec "$CTID" -- apt-get install -y openssh-server; then
    echo "Error: Failed to install OpenSSH server."
    exit 1
fi

# Configure SSH to allow root login with password (temporary, for key setup)
pct exec "$CTID" -- sed -i \
  -e 's/^#PermitRootLogin.*/PermitRootLogin yes/' \
  -e 's/^#PasswordAuthentication.*/PasswordAuthentication yes/' \
  /etc/ssh/sshd_config

# Ensure SSH service starts on boot
pct exec "$CTID" -- systemctl enable ssh

# Restart SSH to apply changes
pct exec "$CTID" -- systemctl restart ssh

# Allow SSH port (22) in container firewall (if UFW is active)
pct exec "$CTID" -- bash -c ' \
  if command -v ufw >/dev/null; then \
    ufw allow 22; \
  fi'

# Output success message
echo "--- Deployment Complete ---"
echo "Ubuntu-based LXC container $CTID created with Docker and Kasm installed."
echo "Kasm Ubuntu Focal Desktop should be running on port 6901."
echo "Access Kasm via: https://$IPADDRESS:6901/"
echo "Access VS Code: http://$IPADDRESS:8443/"
echo "User: kasm_user, Password: password."
pct exec "$CTID" -- ip a

echo "Script finished successfully."


Troubleshooting & Best Practices

  1. If the HTTPS URL fails, try plain HTTP first, e.g., http://192.168.1.52:6901/.
  2. Some checks: pct enter ContainerID, docker ps -a (check container status), docker logs kasm_container (check container logs).
  3. Check whether the service is actually listening: netstat -tulpn | grep 6901 (or ss -tulpn | grep 6901).
  4. Once the container is fully provisioned, run this from your local machine: ssh-copy-id root@192.168.1.52. Then try logging in with: ssh root@192.168.1.52 (a hardening sketch follows this list).
  5. Log into Pi-hole's web interface (e.g., http://192.168.1.43/admin), go to Settings, Local DNS Records, and add myubuntuct (domain) mapped to 192.168.1.52. Then ssh root@myubuntuct, and create an alias: alias myubuntuct="ssh root@myubuntuct".
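
Once key-based login works, you may want to close the password door the script opened for convenience; a hedged hardening sketch (run inside the container, and test your key login first so you do not lock yourself out):

# Sketch: disable password logins after ssh-copy-id has succeeded
sed -i \
  -e 's/^PermitRootLogin.*/PermitRootLogin prohibit-password/' \
  -e 's/^PasswordAuthentication.*/PasswordAuthentication no/' \
  /etc/ssh/sshd_config
systemctl restart ssh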