"Do, or do not. There is no try." (Yoda)

Kasm Workspaces is a container-native workspace streaming platform that delivers virtual desktops (both Windows and Linux environments), web browsers, and applications as Docker containers, all accessible through any web browser.
Users can reach their customized desktop environments and applications from any location, on any device: for example, you might run a Windows or Linux desktop session in a container and connect to it from any device's browser. Because each session is an isolated, disposable container, it can be accessed securely from anywhere and discarded when you are done.
"Using Kasm is like throwing your laptop away after each use and using a different internet connection each time." (Kasm Workspaces)
The default Kasm Workspaces installation requires 4 GB of memory, 2 CPU cores, and 50 GB of (ideally SSD) storage. Kasm Workspaces can be installed on a virtual machine or directly on bare metal. In my particular case, I installed it in a VM on my Proxmox host running Ubuntu Server 22.04 with 4 processors and 16 GB of memory.
The Kasm Workspaces Community Edition allows users to self-host Kasm on their own servers or computers completely free of charge. Note that the Community Edition is limited to five concurrent sessions, unlike the Professional and Enterprise plans.
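Before installing, it is worth confirming that the host meets those minimums. A minimal sketch using standard Linux tools (nothing Kasm-specific):
nproc      # CPU cores: at least 2
free -h    # memory: at least 4 GB
df -h /    # disk: at least 50 GB free on the install volume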
cd /tmp
# Download the latest release of Kasm Workspaces to /tmp
# (see kasmweb.com, Documentation, Installation process).
# Replace kasm_release_LATEST_RELEASE.tar.gz with the actual latest release,
# e.g., kasm_release_1.17.0.7f020d.tar.gz
curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_LATEST_RELEASE.tar.gz
# Extract the package
tar -xf kasm_release_LATEST_RELEASE.tar.gz
cd kasm_release/
# Launch the installation script.
# During installation, you will be prompted to create a swap partition if none is present.
# The default usernames are admin@kasm.local and user@kasm.local;
# their passwords are randomly generated and displayed at the end of the install.
sudo ./install.sh
Installation Complete
Kasm UI Login Credentials
------------------------------------
username: admin@kasm.local
password: AL24Yigghvihn
------------------------------------
username: user@kasm.local
password: Zur3rWzHYoOfF
------------------------------------
[...]
To access the Kasm admin interface, open your web browser and navigate to https://WEBAPP_SERVER_IP:443 and log in with the admin credentials generated by the installer. On the dashboard control panel you will see useful statistics such as max active users, successful/failed logins, and current sessions.
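If the login page does not load, a quick reachability test from another machine helps rule out network problems (a minimal sketch; -k is needed because the installer generates a self-signed certificate):
curl -k -I https://WEBAPP_SERVER_IP:443/
# An HTTP status line in the response confirms the proxy is answering.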
Kasm Workspaces allows administrators to define customizable workspaces for users, enabling access to desktop environments, applications, or webpages directly through their web browsers. This solution is designed for remote access and virtualization, allowing users to utilize their workspaces without relying on traditional remote desktop software.
Each workspace and application runs in its own container, ensuring isolation and security. This setup allows you to launch a variety of applications, links, tools, utilities, and even full operating system distributions from a containerized environment. To install an application in Kasm Workspaces, navigate to the Workspaces section in the admin interface. Here, you can edit existing workspaces or add new ones as needed.

Kasm supports four workspace types: Container, Server, Server Pool, and Link.
One key feature is browser isolation, which protects you from web-based threats such as malware, ransomware, phishing attacks, and drive-by downloads. By isolating your browsing activity within a secure container, Kasm ensures that it cannot affect your devices or network.

One of the cool features of Kasm is the wide variety of workspaces available out of the box, e.g., browsers (Chrome, Brave, Edge, etc.), Linux distributions (AlmaLinux, Fedora, Alpine, CentOS, etc.), office suites (LibreOffice, OnlyOffice), utilities (FileZilla), image editors (GIMP, Pinta), etc. Using what Kasm calls the registry, you can choose from a catalog of Docker containerized services that can serve as the foundation for a workspace.
Additionally, you can add extra registries to Kasm. Under Workspaces, Registry, Registries, type the registry URL (e.g., https://kasmregistry.linuxserver.io/) and then click Add new.
Later on, you can select a workspace and edit various settings, most of which are self-explanatory:
Example: Docker Exec Config (JSON) for the Terminal workspace: Workspaces, Workspaces, Terminal, ➡️, ✏️(Edit), scroll down to the Docker Exec Config (JSON) field and type (the field must contain valid JSON, so comments are not allowed):
{
    "first_launch": {
        "user": "root",
        "cmd": "bash -c 'apt update && apt install -y neovim'",
        "environment": {
            "TERMINAL_ARGS": "--fullscreen --hide-borders --hide-menubar --zoom=-1 --hide-scrollbar"
        }
    }
}
first_launch defines what is executed the first time the terminal workspace is launched. user specifies the account under which the commands run; root grants full administrative privileges. cmd runs when the workspace starts: it updates the package list and installs neovim. environment sets environment variables for the terminal session; TERMINAL_ARGS holds arguments that customize the terminal's appearance and behavior, e.g., start in full-screen mode, remove the window borders, hide the menu bar and scrollbar, and set the zoom level.
These commands run asynchronously as the terminal spins up, so if you try to use the newly installed tools too quickly, they may not be found yet; just wait a few seconds.
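Once the session has settled, you can confirm the first-launch command finished (a trivial check for the neovim example above):
command -v nvim && nvim --version | head -n 1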
To set up web filtering, go to Settings and select Web Filter from the left sidebar. Click on Add Policy to create a new web filtering policy.
Give the policy a descriptive name. If you want a strict policy that blocks all websites by default, set it to Deny by Default. This will block all websites except for those you explicitly allow.
To allow specific websites, enter their URLs in the Domain Whitelist field; only the listed sites can then be accessed.
To block certain websites, enter their URLs in the Domain Blacklist field. Also activate the Safe Search option so that search engines filter out sites that may contain malicious content.
Once you’ve set up your web filtering policy, navigate to Workspaces on the left sidebar. Select the Docker image (e.g., Chrome or Brave) you want to apply the policy to, click ➡️, ✏️(Edit), scroll down to the Web Filter Policy field, and select your previously created policy.
When you click on the Workspaces button at the top, next to the Admin button, you will be taken to the Available Workspaces dashboard. From here, simply click on one of the available workspaces. You will then be prompted to choose how you want to open the session: in the current tab, in a new tab, or in a new window. Enjoy your experience!
At any time, you can click on the right arrow to access various options, including Resume, Pause, Stop, and Delete.
AppArmor is a security mechanism; disabling it removes an important layer of protection and is generally not recommended. However, if you need to prioritize functionality over security (e.g., in a homelab environment), you can disable it on the Proxmox host.
To completely disable AppArmor, instruct the kernel not to load the AppArmor module at boot by adding the apparmor=0 parameter to your kernel's boot configuration:
nvim /etc/default/grub
[...]
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash apparmor=0"
Update GRUB (sudo update-grub) and reboot (sudo reboot).
Once the system comes back up, confirm that AppArmor is truly inactive: cat /sys/module/apparmor/parameters/enabled. If AppArmor is disabled, this file will either not exist or show N.
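If the apparmor-utils package is installed, aa-status gives a more readable confirmation (assuming that package is present; install it with apt if needed):
sudo aa-status
# With apparmor=0 it reports that the AppArmor module is not loaded.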
#!/bin/bash
# PROXMOX LXC AUTOMATION SCRIPT (UBUNTU + DOCKER + KASM)
# Usage: cd /home/nmaximo7/homelab/mydockers/
#        chmod +x ubuntu-desktop.sh && ./ubuntu-desktop.sh
# Full install: access Kasm via https://$IPADDRESS:443/
# Single-container alternative (commented out below): https://$IPADDRESS:6901/
# with user kasm_user, password "password".
# --- Global Script Settings for Robustness ---
set -euo pipefail
# -e: exit immediately if any command fails (non-zero return code).
# -u: treat unset variables as errors (avoid typos).
# -o pipefail: if any stage in a pipeline fails, the entire pipeline fails.
# ========================
# CONFIGURATION SECTION
# ========================
CTID=307 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template (usually 'local')
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk (ZFS)
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="Anawim" # The root password for the container
HOSTNAME="kasm" # A descriptive hostname for the container
MEMORY=16384 # The amount of RAM allocated to the container (16384 = 16GB)
CORES=4 # The number of CPU cores assigned to the container (2-4 for typical workloads)
BRIDGE="vmbr0" # The network bridge for the container (vmbr0 = default)
IPADDRESS="192.168.1.88" # Static IP/CIDR (use /24 for class C)
GATEWAY="192.168.1.1" # Default LAN gateway (router IP)
CIDR="/24" # Adjust if not 255.255.255.0
DNS_SERVERS="1.1.1.1 8.8.8.8" # Cloudflare + Google DNS
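# --- Optional pre-flight check (an addition, not part of the original flow) ---
# The static IP should not already be in use on the LAN; a single ping catches
# collisions early (a previous run of this container will also answer here).
if ping -c 1 -W 1 "$IPADDRESS" >/dev/null 2>&1; then
echo "Warning: $IPADDRESS already answers on the network."
fi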
# ========================
# CONTAINER MANAGEMENT
# ========================
# Checks if a container with the specified CTID already exists;
# if it is running, it stops and deletes it
# pct status "$CTID" tries to get the status of an LXC with that ID.
# If it exists, the command returns 0; else returns non-zero.
if pct status "$CTID" >/dev/null 2>&1; then
echo "Container $CTID exists."
# Check if the container is running
if [ "$(pct status "$CTID" | awk '{print $2}')" = "running" ]; then
echo "Stopping running container $CTID."
# Run pct status "$CTID" | awk '{print $2}' to fetch the status string, e.g. “running” or “stopped.”
# If it’s running, we pct stop it.
if ! pct stop "$CTID"; then
echo "Error: Failed to stop container $CTID. Please check manually."
exit 1
fi
sleep 5 # Give it a moment to stop gracefully
else
echo "Container $CTID is not running. No need to stop."
fi
echo "Proceeding with deletion of container $CTID."
if ! pct destroy "$CTID"; then
echo "Error: Failed to destroy container $CTID. Please check manually."
exit 1
fi
else
echo "Container $CTID does not exist. Proceeding with creation."
fi
# ========================
# TEMPLATE HANDLING
# ========================
# Check whether the specified Ubuntu template has already been downloaded to $TEMPLATE_STORAGE.
# pveam is the Proxmox VE Appliance Manager; 'pveam list $TEMPLATE_STORAGE' lists all templates on that storage.
# If the template is missing, download $OSTEMPLATE to $TEMPLATE_STORAGE.
# We grep -q for the exact $OSTEMPLATE name.
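# Optional (an addition to the original flow): refresh the appliance index so
# 'pveam download' can find newly published templates; 'pveam update' is a
# standard pveam subcommand.
pveam update >/dev/null 2>&1 || echo "Warning: could not refresh the template index."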
if ! pveam list "$TEMPLATE_STORAGE" | grep -q "$OSTEMPLATE"; then
echo "Downloading Ubuntu template '$OSTEMPLATE'..."
if ! pveam download "$TEMPLATE_STORAGE" "$OSTEMPLATE"; then
echo "Error: Failed to download template '$OSTEMPLATE'. Please check template name and storage."
exit 1
fi
else
echo "Ubuntu template '$OSTEMPLATE' already exists."
fi
# ========================
# CONTAINER CREATION
# ========================
# Create a privileged Ubuntu container (--unprivileged 0 below)
# with an 80 GB disk on mypool.
echo "Creating privileged container $CTID with hostname $HOSTNAME..."
# pct create CTID storage :vztmpl/template is how we instantiate a new LXC.
if ! pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
--storage $CONTAINER_STORAGE \
--rootfs $CONTAINER_STORAGE:$DISK_SIZE \
--password $PASSWORD \
--hostname $HOSTNAME \
--memory $MEMORY \
--cores $CORES \
--net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
--nameserver "$DNS_SERVERS" \
--unprivileged 0 \
; then
echo "Error: Failed to create LXC container $CTID. Check parameters and Proxmox logs."
exit 1
fi
# Post-configuration
# Configure the container to start automatically when the Proxmox host reboots.
pct set $CTID --onboot 1
# Enable nested virtualization (nesting=1), allow keyctl (needed for some key management inside Docker), and mknod for device creation.
pct set $CTID --features nesting=1,keyctl=1,mknod=1,fuse=1
# (Note: 'pct set' has no --start option; the container is started explicitly with 'pct start' below.)
# --- Add TUN device support for VPN features ---
echo "Adding /dev/net/tun passthrough to LXC config for VPN support"
echo "lxc.cgroup2.devices.allow: c 10:200 rwm" >> /etc/pve/lxc/${CTID}.conf
echo "lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file" >> /etc/pve/lxc/${CTID}.conf
# Disable AppArmor for our container
# AppArmor is a Linux security module that enforces Mandatory Access Control (MAC) by restricting what programs can do at the kernel level
# The explicit disabling of AppArmor removes a vital kernel-level security layer, compounding the risk, especially in a privileged container (home lab).
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
echo "Disabling AppArmor for container $CTID at the host level"
echo "lxc.apparmor.profile = unconfined" \
>> /etc/pve/lxc/${CTID}.conf
# Start container (it ensures the container is actually booting).
pct start $CTID
# Wait for the container to boot and for network connectivity.
# Instead of using an unreliable 'sleep 5', we use a robust ping loop.
echo "Waiting for container $CTID to boot and acquire network..."
MAX_ATTEMPTS=30 # Maximum number of attempts to check network connectivity
ATTEMPT=0 # Initialize attempt counter
# Loop to check network connectivity
while ! pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY" >/dev/null 2>&1; do
# This starts a loop that will continue executing as long as the condition following it evaluates to true.
# ! The loop will continue as long as the command fails, pct exec "$CTID" -- executes a command inside the LXC container with ID $CTID
# ping -c 1 -W 1 "$GATEWAY": sends a single ping (-c 1) to the specified gateway ("$GATEWAY") and sets a timeout of 1 second for the ping response (-W 1)
# >/dev/null 2>&1: it redirects both standard output (stdout) and standard error (stderr) to /dev/null, effectively discarding any output or error messages from the command.
if [[ $ATTEMPT -ge $MAX_ATTEMPTS ]]; then
echo "Error: Container $CTID did not acquire network connectivity within expected time."
exit 1 # Exit with an error if no connectivity
fi
echo "Network not ready, retrying in 2 seconds..."
sleep 2 # Wait for 2 seconds before retrying
ATTEMPT=$((ATTEMPT + 1)) # Increment attempt counter
done
echo "Container $CTID is online and network is active."
# ========================================
# INSTALL DOCKER INSIDE THE CONTAINER
# ========================================
# Based on the Docker docs: Install Docker Engine on Ubuntu, "Install using the apt repository".
# Install prerequisite packages for Docker.
echo "--- Install Docker Engine ---"
echo "Updating container packages and installing prerequisites..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed."; exit 1; fi
# ! The block executes (prints an error message and exits with status 1) if the command fails (returns a non-zero exit status).
# pct exec "$CTID" -- apt-get update: executes the apt-get update command inside the LXC container
if ! pct exec "$CTID" -- apt-get install -y ca-certificates curl gnupg lsb-release; then
echo "Error: Failed to install Docker prerequisites."
exit 1
fi
# Add Docker GPG key and apt repository inside the LXC
pct exec "$CTID" -- bash -c '
set -euo pipefail
# Prepare keyrings directory to store Docker’s key in a keyring file (modern, recommended approach).
mkdir -p /etc/apt/keyrings
# Fetch the GPG key and convert it to binary form.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg
# Sets permissions to allow all users to read the GPG key file.
chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repo for the running release. Note: Docker may not publish a
# repository for interim releases (e.g., 24.10 "oracular"); if the later
# apt-get update fails, replace "$(lsb_release -cs)" with an LTS codename
# such as "noble" or "jammy".
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
> /etc/apt/sources.list.d/docker.list
# Update package index
apt-get update
'
# Install Docker packages (docker-ce, docker-ce-cli, containerd.io).
echo "Installing Docker CE and related packages..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed after adding Docker repo."; exit 1; fi
if ! pct exec "$CTID" -- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin; then
echo "Error: Failed to install Docker CE packages."
exit 1
fi
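# Optional sanity check (an addition): confirm the Docker CLI is now available in the container.
pct exec "$CTID" -- docker --version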
echo "--- Cleaning up any existing Docker state inside LXC $CTID ---"
pct exec "$CTID" -- bash -c '
set -euo pipefail
# 1. Stop docker.service and docker.socket
if systemctl is-active --quiet docker.service; then
echo "Stopping docker.service…"
systemctl stop docker.service
fi
if systemctl is-active --quiet docker.socket; then
echo "Stopping docker.socket…"
systemctl stop docker.socket
fi
sleep 1
# 2. Unmount anything bind-mounted on the socket path first
if mountpoint -q /var/run/docker.sock; then
echo "Unmounting bind-mount on /var/run/docker.sock…"
umount /var/run/docker.sock
fi
# 3. Remove any leftover socket file or directory
if [ -e /var/run/docker.sock ]; then
echo "Removing stale /var/run/docker.sock…"
rm -rf /var/run/docker.sock
fi
'
echo "Enabling and starting Docker inside LXC $CTID…"
# Enable only the service, not the socket
if ! pct exec "$CTID" -- systemctl enable docker.service; then
echo "Error: Failed to enable docker.service" >&2
exit 1
fi
# Then start the service directly
if ! pct exec "$CTID" -- systemctl start docker.service; then
echo "Error: Failed to start docker.service" >&2
exit 1
fi
# Wait up to 10 seconds for /var/run/docker.sock to be a socket
pct exec "$CTID" -- bash -c '
set -euo pipefail
i=0
while [ $i -lt 10 ]; do
if [ -S /var/run/docker.sock ]; then
echo "Docker socket is now up."
exit 0
fi
echo "Waiting for Docker socket… ($((i+1))/10)"
sleep 1
i=$((i+1))
done
echo "Error: /var/run/docker.sock never appeared." >&2
exit 1
'
echo "Docker is confirmed running with a valid socket."
# Test Docker with hello-world
echo "Testing Docker installation with 'hello-world'…"
if ! pct exec "$CTID" -- docker run --security-opt apparmor=unconfined hello-world; then
echo "Error: Docker 'hello-world' failed." >&2
exit 1
fi
echo "Docker hello-world succeeded."
# Passwordless sudo.
# Note: passwordless sudo is a real security risk; it is generally acceptable
# only in a homelab. This grants passwordless sudo to root and to any user in the sudo group.
# The redirection must happen INSIDE the container (bash -c); otherwise it would
# append to the Proxmox host's /etc/sudoers.
pct exec $CTID -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec $CTID -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a
# Test connectivity.
echo "Testing external connectivity from container $CTID..."
if ! pct exec "$CTID" -- ping -c 3 google.com; then
echo "Warning: Container $CTID cannot reach google.com. Network connectivity issues may exist."
fi
# ========================================
# Deploying Kasm Workspaces (official installer)
# ========================================
# Alternative single-container image: pct exec $CTID -- docker run --rm -it --shm-size=512m -p 6901:6901 -e VNC_PW="" kasmweb/ubuntu-focal-desktop:1.16.0
pct exec $CTID -- bash -c '
cd /tmp
curl -O https://kasm-static-content.s3.amazonaws.com/kasm_release_1.17.0.7f020d.tar.gz
tar -xf kasm_release_1.17.0.7f020d.tar.gz
sudo bash kasm_release/install.sh
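# Note: install.sh is interactive (EULA acceptance and swap prompts); keep a terminal attached when running this step.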
'
# Output success message
echo "Ubuntu-based LXC container $CTID created with Docker installed."
echo "Kasm Admin: https://$IPADDRESS:443"
echo "Kasm UI: http://$IPADDRESS:3000"
# List the Ubuntu templates available on the Proxmox host:
pveam available | grep ubuntu
system ubuntu-24.04-standard_24.04-2_amd64.tar.zst
system ubuntu-24.10-standard_24.10-1_amd64.tar.zst
system ubuntu-25.04-standard_25.04-1.1_amd64.tar.zst
root@myserver:~# pvesm status
Name       Type     Status         Total        Used    Available       %
local      dir      active      87531640    81797640      1409876   93.45%
local-lvm  lvmthin  active     148086784     3805830    144280953    2.57%
mypool     zfspool  active    7650410496   554518824   7095891672    7.25%
# Verify Docker Images.
# Check that the Kasm Docker images were properly pulled:
# kasm_proxy (web proxy, port 443), kasm_api (API service), kasm_manager (management
# service), kasm_agent (agent service), kasm_share (file sharing), kasm_db (PostgreSQL
# database), etc.
docker images
REPOSITORY                  TAG        IMAGE ID       CREATED        SIZE
kasmweb/proxy               1.17.0     29e30ed777d8   5 weeks ago    134MB
kasmweb/share               1.17.0     9062c3909faa   5 weeks ago    244MB
kasmweb/agent               1.17.0     ce2dc309dc77   5 weeks ago    183MB
kasmweb/manager             1.17.0     c532baaa218f   5 weeks ago    998MB
kasmweb/api                 1.17.0     b403c55bf8cc   5 weeks ago    999MB
kasmweb/rdp-https-gateway   1.17.0     2d94da1daac6   2 months ago   56.2MB
kasmweb/rdp-gateway         1.17.0     3cf654e1945f   2 months ago   285MB
hello-world                 latest     74cc54e27dc4   4 months ago   10.1kB
redis                       5-alpine   7558bc54e8a2   2 years ago    22.9MB
# Container Status:
# Monitor all containers, including stopped ones:
root@kasm:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02dba10ed02e kasmweb/proxy:1.17.0 "/docker-entrypoint.…" 7 hours ago Up 7 hours 80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp kasm_proxy
f9b19908479b kasmweb/rdp-https-gateway:1.17.0 "/opt/rdpgw/rdpgw" 7 hours ago Up 7 hours (healthy) kasm_rdp_https_gateway
abe04409d8f1 kasmweb/share:1.17.0 "python3 /src/api_se…" 7 hours ago Up 7 hours (healthy) 8182/tcp kasm_share
5cf3f21d17f2 kasmweb/agent:1.17.0 "python3 /src/Provis…" 7 hours ago Up 7 hours (healthy) 4444/tcp kasm_agent
e6c0e208ffc3 kasmweb/rdp-gateway:1.17.0 "/start.sh" 7 hours ago Up 7 hours (healthy) 0.0.0.0:3389->3389/tcp, [::]:3389->3389/tcp kasm_rdp_gateway
15811bda52fa kasmweb/api:1.17.0 "/bin/sh -c /usr/bin…" 7 hours ago Up 7 hours (healthy) 8080/tcp kasm_api
614b9e46eba3 kasmweb/manager:1.17.0 "python3 /src/api_se…" 7 hours ago Up 7 hours (healthy) 8181/tcp kasm_manager
ef593dad335e redis:5-alpine "docker-entrypoint.s…" 7 hours ago Up 7 hours 6379/tcp kasm_redis
# If containers are crashing, check logs:
docker logs kasm_proxy
docker logs kasm_api
docker logs kasm_manager
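To spot failing services at a glance, docker ps also accepts a health filter:
docker ps --filter "health=unhealthy"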
The default admin account is admin@kasm.local. Open your favorite browser and go to https://YOUR_CONTAINER_IP:443/ (e.g., https://192.168.1.88:443/). Type in your credentials, select all or some of the workspace images, then install. Ensure that the static IP address (192.168.1.88) is not already in use on your network.
Enable FUSE and TUN/TAP support in the LXC Container. Open the configuration file for your LXC container (/etc/pve/lxc/YOUR-CTID.conf, e.g., vi /etc/pve/lxc/307.conf) and add the following lines:
FUSE allows user-space programs to create their own file systems without kernel modifications. TUN/TAP devices are virtual network devices that can be used to create a network interface. TUN/TAP: Virtual network devices for packet tunneling (TUN) and network bridging (TAP).
features: nesting=1,keyctl=1,mknod=1,fuse=1
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file 0 0
[...]
# After making these changes, reboot the container, then enter it to verify:
pct reboot 307
# Verify the devices are available:
ls -l /dev/fuse
ls -l /dev/net/tun
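If /dev/net/tun is missing, check that the tun module is loaded on the Proxmox host (a quick sketch; add tun to /etc/modules to persist it across reboots):
# On the Proxmox host:
lsmod | grep tun || modprobe tun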