
Automating VS Code LXC Container Deployment with Bash in Proxmox

"The potential benefits of artificial intelligence are huge; so are the dangers." (Dave Waters)

VS Code

Automating LXC Container Creation with Bash

VS Code (Visual Studio Code) is a free, lightweight, and highly versatile source code editor created by Microsoft. Here we deploy code-server (VS Code in the browser) in a Docker container inside a Proxmox LXC. Usage instructions:

  1. Save the script as vscode.sh
  2. Make it executable: chmod +x vscode.sh
  3. Run it: ./vscode.sh
  4. Access VS Code at: https://$IPADDRESS:8443/

To create a CT with a custom configuration script, first SSH into your Proxmox server. Then, I strongly recommend creating some kind of directory structure, e.g., /home/your-user/homelab/mydockers. Finally, let’s create the script with vi or nano: /home/your-user/homelab/mydockers/vscode.sh:

#!/bin/bash
# PROXMOX LXC AUTOMATION SCRIPT (VS CODE)
# Run the script: cd /home/nmaximo7/homelab/mydockers/, chmod +x vscode.sh, ./vscode.sh
# echo "Access VS Code via: https://$IPADDRESS:8443/"
# --- Global Script Settings for Robustness ---
set -euo pipefail # Exit immediately if a command exits with a non-zero status, or if a variable is unset, or if a command in a pipeline fails.

# ========================
# CONFIGURATION SECTION
# ========================
CTID=305 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template (usually 'local')
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk (ZFS)
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="Anawim" # The root password for the container
HOSTNAME="vscode" # A descriptive hostname for the container
MEMORY=4096 # The amount of RAM allocated to the container (4096 = 4GB)
CORES=2 # The number of CPU cores assigned to the container (2-4 for typical workloads)
BRIDGE="vmbr0" # The network bridge for the container (vmbr0 = default)
IPADDRESS="192.168.1.92" # Static IP (the CIDR suffix is appended from $CIDR below)
GATEWAY="192.168.1.1" # Default LAN gateway (router IP)
CIDR="/24" # Adjust if not 255.255.255.0
DNS_SERVERS="1.1.1.1 8.8.8.8" # Cloudflare + Google DNS
EXTENSIONS=(ms-python.python esbenp.prettier-vscode)
# To avoid permission issues on the host volume (/mypool/code), specify the user/group the container should run as via environment variables.
PUID=1000
PGID=1000
# ========================
# CONTAINER MANAGEMENT
# ========================
# Checks if a container with the specified CTID already exists;
# if it is running, it stops and deletes it
if pct status "$CTID" >/dev/null 2>&1; then
    echo "Container $CTID exists."
    # Check if the container is running
    if [ "$(pct status "$CTID" | awk '{print $2}')" = "running" ]; then
        echo "Stopping running container $CTID."
        if ! pct stop "$CTID"; then
            echo "Error: Failed to stop container $CTID. Please check manually."
            exit 1
        fi
        sleep 5 # Give it a moment to stop gracefully
    else
        echo "Container $CTID is not running. No need to stop."
    fi

    echo "Proceeding with deletion of container $CTID."
    if ! pct destroy "$CTID"; then
        echo "Error: Failed to destroy container $CTID. Please check manually."
        exit 1
    fi
else
    echo "Container $CTID does not exist. Proceeding with creation."
fi

The line if pct status "$CTID" >/dev/null 2>&1; then deserves a closer look:

  1. pct status $CTID: it checks the status of the container with ID $CTID. It returns information about whether the container is running, stopped, or does not exist.
  2. >/dev/null 2>&1: it redirects both the standard output (stdout) and standard error (stderr) of the command to /dev/null, which effectively discards any output, so we won’t see any messages or errors from the pct status command.
  3. if … then: it checks if the container exists without displaying any output. If it does exist, it proceeds to stop and delete the container.
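This existence test can be factored into a small reusable helper. A minimal sketch (the function name ct_exists is ours, not a Proxmox command, and it only does something useful on a Proxmox host where pct is available):

```shell
# Sketch: wrap the existence test in a reusable helper.
# ct_exists succeeds (returns 0) if `pct status <CTID>` succeeds,
# i.e. the container exists, and fails otherwise.
ct_exists() {
    pct status "$1" >/dev/null 2>&1
}

# Example usage (on a Proxmox host):
# if ct_exists 305; then echo "exists"; else echo "missing"; fi
```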

Here is an alternative that avoids hardcoding sensitive credentials within the script:

# --- Security-Sensitive Variables (DO NOT HARDCODE) ---
# Prompt for the root password for the LXC container.
# This is a critical security improvement to prevent plaintext exposure.
read -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD
echo # Newline after password input

Let’s continue with our script:

# ========================
# TEMPLATE HANDLING
# ========================
# Checks if the specified Ubuntu template is already downloaded at $TEMPLATE_STORAGE.
# pveam is the Proxmox VE Appliance Manager; `pveam list $TEMPLATE_STORAGE` returns a list of all templates on that storage.
# If the template is missing, it downloads $OSTEMPLATE to $TEMPLATE_STORAGE.
if ! pveam list "$TEMPLATE_STORAGE" | grep -q "$OSTEMPLATE"; then
  echo "Downloading Ubuntu template '$OSTEMPLATE'..."
  if ! pveam download "$TEMPLATE_STORAGE" "$OSTEMPLATE"; then
        echo "Error: Failed to download template '$OSTEMPLATE'. Please check template name and storage."
        exit 1
    fi
else
    echo "Ubuntu template '$OSTEMPLATE' already exists."
fi
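The template check above can be wrapped in the same kind of helper. A sketch (template_present is a hypothetical name; the one-template-per-line output format of `pveam list` is assumed here):

```shell
# Sketch: succeed if the template appears in the storage listing.
# Assumes `pveam list <storage>` prints one template per line,
# as in the script above; grep -F matches the name literally.
template_present() {
    local storage="$1" template="$2"
    pveam list "$storage" | grep -qF "$template"
}

# Example usage (on a Proxmox host):
# template_present local ubuntu-24.10-standard_24.10-1_amd64.tar.zst
```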

# ========================
# CONTAINER CREATION
# ========================
# Create a privileged Ubuntu container (--unprivileged 0)
# with an 80 GB disk on mypool
echo "Creating privileged container $CTID with hostname $HOSTNAME..."

if ! pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
    --storage $CONTAINER_STORAGE \
    --rootfs $CONTAINER_STORAGE:$DISK_SIZE \
    --password $PASSWORD \
    --hostname $HOSTNAME \
    --memory $MEMORY \
    --cores $CORES \
    --net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
    --nameserver "$DNS_SERVERS" \
    --unprivileged 0; then # Set to 1 for unprivileged containers (more secure)
    echo "Error: Failed to create LXC container $CTID. Check parameters and Proxmox logs."
    exit 1
fi

# Post-configuration
pct set $CTID --onboot 1
pct set $CTID --features nesting=1,keyctl=1,mknod=1
# The container is started below with pct start (--start 1 is a pct create option).
  1. pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE: Creates a new container with the specified ID ($CTID) using the template located at $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE.
  2. ‐‐storage $CONTAINER_STORAGE: Specifies the storage location for the container.
  3. ‐‐rootfs $CONTAINER_STORAGE:$DISK_SIZE: Sets the root filesystem location and size for the container.
  4. ‐‐password $PASSWORD: Sets the root password for the container.

    Use ‐‐password “$LXC_ROOT_PASSWORD” to avoid hardcoding sensitive credentials within the script.

  5. ‐‐hostname $HOSTNAME: Assigns a hostname to the container.
  6. ‐‐memory $MEMORY: Allocates the specified amount of RAM to the container.
  7. ‐‐cores $CORES: Specifies the number of CPU cores to allocate to the container.
  8. ‐‐net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY: Configures the network interface with: name=eth0 (interface name), bridge=$BRIDGE (network bridge to connect to), and ip=$IPADDRESS$CIDR (IP address in CIDR notation for the container). For an existing container, use pct set $CTID --net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY (qm set --ipconfig0 is the equivalent command for VMs, not containers).

    Configure network interface with DHCP: pct set $CTID --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0. pct set is used to modify the configuration of an existing LXC container in Proxmox, identified by its container ID ($CTID).

  9. ‐‐nameserver “$DNS_SERVERS”: Sets the DNS servers for the container. Alternatively, pct set $CTID --nameserver "1.1.1.1 8.8.8.8" (quoted, so both servers are passed as one value).
  10. ‐‐unprivileged 0: Specifies whether the container is unprivileged. Set to 1 for unprivileged containers (more secure).
  11. pct set $CTID ‐‐features nesting=1,keyctl=1,mknod=1: Enables specific features: nesting=1: enable nesting, allowing you to run containers (like Docker containers) inside this LXC container; keyctl=1: Enables keyctl feature, it allows the use of the kernel key management facility, which can be necessary for certain applications that rely on keyring functionality and environments that need secure credential storage. mknod=1: allows the use of the mknod command within the container, which is used to create special files (like device files) in the filesystem.
  12. pct set $CTID ‐‐onboot 1: Configures the container to start automatically on boot.
  13. pct start $CTID: starts the container (run below, after the AppArmor tweak). With pct create, the ‐‐start 1 option can start it immediately after creation.
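Since the ‐‐net0 value is the easiest place to mistype, one option is to assemble it in a single helper and print it before use. A minimal sketch (build_net0 is a hypothetical helper, not a Proxmox command):

```shell
# Sketch: build the --net0 argument from its parts, so that
# bridge, IP, CIDR and gateway are concatenated in exactly one place.
build_net0() {
    local bridge="$1" ip="$2" cidr="$3" gw="$4"
    printf 'name=eth0,bridge=%s,ip=%s%s,gw=%s\n' "$bridge" "$ip" "$cidr" "$gw"
}

# Example (prints name=eth0,bridge=vmbr0,ip=192.168.1.92/24,gw=192.168.1.1):
# pct create ... --net0 "$(build_net0 vmbr0 192.168.1.92 /24 192.168.1.1)"
build_net0 vmbr0 192.168.1.92 /24 192.168.1.1
```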
# Disable AppArmor for our container
# AppArmor is a Linux security module that enforces Mandatory Access Control (MAC) by restricting what programs can do at the kernel level
# The explicit disabling of AppArmor removes a vital kernel-level security layer, compounding the risk, especially in a privileged container (home lab).
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
echo "Disabling AppArmor for container $CTID at the host level"
echo "lxc.apparmor.profile = unconfined" \
  >> /etc/pve/lxc/${CTID}.conf

# Start container
pct start $CTID

# Set locale, install and start D-Bus
echo "--- Setting Locale and Installing D-Bus ---"
pct exec "$CTID" -- apt-get update
pct exec "$CTID" -- apt-get install -y locales dbus systemd-sysv
pct exec "$CTID" -- sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen
pct exec "$CTID" -- locale-gen
pct exec "$CTID" -- update-locale LANG=en_US.UTF-8
pct exec "$CTID" -- systemctl start dbus

# Wait for the container to boot and for network connectivity.
# Instead of using an unreliable 'sleep 5', we use a robust ping loop.
echo "Waiting for container $CTID to boot and acquire network..."
MAX_ATTEMPTS=30 # Maximum number of attempts to check network connectivity
ATTEMPT=0  # Initialize attempt counter

# Loop to check network connectivity
while ! pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY" >/dev/null 2>&1; do
# This starts a loop that will continue executing as long as the condition following it evaluates to true.
# ! The loop will continue as long as the command fails, pct exec "$CTID" -- executes a command inside the LXC container with ID $CTID
# ping -c 1 -W 1 "$GATEWAY": sends a single ping (-c 1) to the specified gateway ("$GATEWAY") and sets a timeout of 1 second for the ping response (-W 1)
# >/dev/null 2>&1: it redirects both standard output (stdout) and standard error (stderr) to /dev/null, effectively discarding any output or error messages from the command.
    if [[ $ATTEMPT -ge $MAX_ATTEMPTS ]]; then
        echo "Error: Container $CTID did not acquire network connectivity within expected time."
        exit 1 # Exit with an error if no connectivity
    fi
    echo "Network not ready, retrying in 2 seconds..."
    sleep 2 # Wait for 2 seconds before retrying
    ATTEMPT=$((ATTEMPT + 1)) # Increment attempt counter
done
echo "Container $CTID is online and network is active."
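The retry pattern above generalizes to any readiness probe. A hedged sketch of a reusable helper (wait_for is our own name, not part of Proxmox):

```shell
# Sketch: retry an arbitrary command up to $1 times, sleeping $2
# seconds between attempts; return 0 on the first success,
# 1 once the attempts are exhausted.
wait_for() {
    local max_attempts="$1" delay="$2"
    shift 2
    local attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -ge "$max_attempts" ]; then
            return 1
        fi
        sleep "$delay"
    done
}

# Example usage, mirroring the loop above:
# wait_for 30 2 pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY"
```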

# ========================================
# INSTALL DOCKER INSIDE THE CONTAINER
# ========================================
# Based on the Docker docs: "Install Docker Engine on Ubuntu", using the apt repository method.
# Install prerequisite packages for Docker.
echo "--- Install Docker Engine ---"
echo "Updating container packages and installing prerequisites..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed."; exit 1; fi
# ! The block will execute (outputs an error message and exits with a status code of 1, indicating an error) if the command fails (returns a non-zero exit status).
# pct exec "$CTID" -- apt-get update: executes the apt-get update command inside the LXC container
if ! pct exec "$CTID" -- apt-get install -y ca-certificates curl gnupg lsb-release; then
    echo "Error: Failed to install Docker prerequisites."
    exit 1
fi
# Add Docker GPG key and apt repository inside the LXC
pct exec "$CTID" -- bash -c '
  # Enables strict error handling
  set -euo pipefail

  # Creates the directory for keyrings if it doesn’t exist, ensuring the path is ready for the GPG key.
  mkdir -p /etc/apt/keyrings

  # Fetch and install Docker’s GPG key
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
    | gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg

  # Sets permissions to allow all users to read the GPG key file.
  chmod a+r /etc/apt/keyrings/docker.gpg

  # Add the Docker repo for this release via $(lsb_release -cs); if the codename is not yet supported by Docker, hardcode "jammy" (22.04 LTS) instead
  echo "deb [arch=$(dpkg --print-architecture) \
    signed-by=/etc/apt/keyrings/docker.gpg] \
    https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
    > /etc/apt/sources.list.d/docker.list

  # Updates the package index to include the new Docker repository, making Docker packages available for installation.
  apt-get update
'
  1. set -euo pipefail: makes the inner shell exit immediately if any command fails (e.g., the key download or conversion), if an unset variable is referenced, or if any command in a pipeline fails.
  2. pct exec “$CTID” ‐‐ bash -c’: it executes the enclosed commands inside the LXC container identified by $CTID using a new Bash shell.
  3. curl -fsSL https://download.docker.com/linux/ubuntu/gpg: it fetches the Docker GPG key from the specified URL
  4. |: it takes the output of the curl command and passes it as input to the next command (gpg –dearmor).
  5. gpg ‐‐batch ‐‐yes ‐‐dearmor -o /etc/apt/keyrings/docker.gpg: ‐‐dearmor: Converts the ASCII-armored key (this format uses printable characters; it is intended for easy sharing via text media and is readable by humans) to binary format (a more efficient way to store the key). -o /etc/apt/keyrings/docker.gpg: Specifies the output file where the converted key will be saved. ‐‐batch: Runs in batch mode, meaning that there are no interactive prompts. ‐‐yes: Assumes “yes” to all prompts.
  6. echo "deb [arch=$(dpkg ‐‐print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \ > /etc/apt/sources.list.d/docker.list: constructs the line to be added to the APT source list
    deb: Indicates a Debian package repository.
    [arch=$(dpkg ‐‐print-architecture): Specifies the architecture (e.g., amd64) dynamically.
    signed-by=/etc/apt/keyrings/docker.gpg]: Points to the GPG key for package verification.
    https://download.docker.com/linux/ubuntu: The base URL for the Docker repository.
    $(lsb_release -cs): Dynamically retrieves the distribution codename of the Ubuntu release (e.g., focal, jammy).
    stable: Specifies the repository’s release channel.
    > /etc/apt/sources.list.d/docker.list: Redirects the output to create a new file that contains the Docker repository configuration.

root@ubuntu-desktop:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu oracular stable
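The repository line can also be generated by a small pure function and inspected before it is written, with the architecture and codename passed in explicitly so it runs anywhere. A sketch (docker_apt_line is our own name):

```shell
# Sketch: construct the APT source line for Docker from an
# architecture and an Ubuntu codename, matching what the script
# writes to /etc/apt/sources.list.d/docker.list.
docker_apt_line() {
    local arch="$1" codename="$2"
    printf 'deb [arch=%s signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu %s stable\n' \
        "$arch" "$codename"
}

# Example (on the target system):
# docker_apt_line "$(dpkg --print-architecture)" "$(lsb_release -cs)"
docker_apt_line amd64 jammy
```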

# Install Docker packages (docker-ce, docker-ce-cli, containerd.io).
echo "Installing Docker CE and related packages..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed after adding Docker repo."; exit 1; fi
if ! pct exec "$CTID" -- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin; then
    echo "Error: Failed to install Docker CE packages."
    exit 1
fi

# Enable and start Docker service.
echo "Enabling and starting Docker service..."
if ! pct exec "$CTID" -- systemctl enable docker; then echo "Error: Failed to enable Docker."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start docker; then echo "Error: Failed to start Docker."; exit 1; fi

# Test Docker installation: skip AppArmor when running hello-world
echo "Testing Docker installation with 'hello-world'…"
if ! pct exec "$CTID" -- \
     docker run --security-opt apparmor=unconfined hello-world; then
    echo "Error: Docker 'hello-world' test failed even with AppArmor unconfined."
    exit 1
fi
echo "Docker installed and tested successfully."

# Passwordless sudo
# In production this would be a severe security vulnerability;
# in a Proxmox LXC container in a home lab it is generally considered an acceptable convenience.
# This configuration grants passwordless sudo access to the root user and any user within the sudo group
# Note: the redirection must run inside the container (hence bash -c);
# otherwise the host shell would append to the host's own /etc/sudoers.
pct exec $CTID -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec $CTID -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'

# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a

# Test connectivity.
echo "Testing external connectivity from container $CTID..."
if ! pct exec "$CTID" -- ping -c 3 google.com; then
    echo "Warning: Container $CTID cannot reach google.com. Network connectivity issues may exist."
fi

# ========================
# PREPARE CODE-SERVER CONFIG
# ========================
# Create config dirs on the host
mkdir -p /mypool/code/data/User /mypool/code/extensions /mypool/code/workspace

# Set ownership
chown -R $PUID:$PGID /mypool/code

# ========================================
# DEPLOY VS Code
# ========================================
echo "--- Deploying VS Code Server ---"

pct exec $CTID -- bash -c '
# Wait for the Docker daemon to be ready: poll once per second, up to 30 tries
i=1
while [ "$i" -le 30 ]; do
    if docker info >/dev/null 2>&1; then
        break
    fi
    echo "Waiting for Docker…"
    sleep 1
    i=$((i+1))
done

# If container is already running, stop/remove it and clean up any existing container (ignore errors)
docker stop code-server 2>/dev/null || true
docker rm -f code-server 2>/dev/null || true
# rm -f code-server attempts to forcefully remove any existing container named code-server, if it exists.
# The output is redirected to /dev/null to suppress any error messages if the container doesn’t exist.
# Pull the latest image explicitly
docker pull lscr.io/linuxserver/code-server:latest
# Ensure config directory exists
mkdir -p /mypool/code
# It sets up a new Docker container named code-server running the latest version of the code-server image from the LinuxServer.io
# with AppArmor disabled (--privileged bypasses most security profiles)
# Run the container
docker run -d \
  --name=code-server \
  --privileged \
  -p 8443:8443 \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Etc/UTC \
  -e PASSWORD=Anawim \
  -e SUDO_PASSWORD=Anawim \
  -e DEFAULT_WORKSPACE=/config/workspace \
  -v /mypool/code:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/code-server:latest

echo "VS Code container started successfully!"
'

# Verify container status
pct exec $CTID -- bash -c '
     echo "Container status:"
     docker ps | grep code-server
     echo "Port binding:"
     docker port code-server
'
  1. docker run: create and start a new container from a specified image.
  2. -d: runs the container in detached mode, meaning it will run in the background.
  3. ‐‐name=code-server: assigns the name code-server to the container for easy identification and management.
  4. ‐‐privileged: grants the container extended privileges, bypassing most security profiles (convenient in a home lab, but a real security trade-off).
  5. -p 8443:8443: maps port 8443 of the host to port 8443 of the container. This allows external access to the code-server web UI running on that port inside the container.
  6. -e PUID=1000: sets the user ID for the application inside the container.
  7. -e PGID=1000: sets the group ID for the application inside the container.
  8. -e TZ=Etc/UTC: sets the timezone for the container.
  9. -e PASSWORD=Anawim: sets the password for accessing code-server.
  10. -e SUDO_PASSWORD=Anawim: sets the password for sudo access in the container.
  11. -e DEFAULT_WORKSPACE=/config/workspace: sets the default workspace directory inside the container.
  12. -v /mypool/code:/config: mounts a volume, linking the /mypool/code directory on the host to the /config directory in the container. Settings, data, and extensions are retained across container restarts and reboots, giving a consistent work environment.
  13. ‐‐restart unless-stopped: this policy ensures that the container restarts automatically unless it was stopped manually.
  14. lscr.io/linuxserver/code-server:latest: specifies the Docker image to use, the latest version of code-server from LinuxServer.io.
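Instead of grepping docker ps, the running state can be queried directly with docker inspect and its Go-template output. A sketch (container_running is a hypothetical helper; it assumes the Docker CLI is available where it runs):

```shell
# Sketch: query the container state directly instead of grepping
# `docker ps`. docker inspect's -f flag formats the output with a
# Go template; State.Running is "true" for a running container.
container_running() {
    docker inspect -f '{{.State.Running}}' "$1" 2>/dev/null | grep -q '^true$'
}

# Example usage inside the LXC:
# container_running code-server && echo "code-server is up"
```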
# ========================================
# CONFIGURE SSH ACCESS
# ========================================
echo "--- Configuring SSH Server ---"

# Install OpenSSH Server
if ! pct exec "$CTID" -- apt-get install -y openssh-server; then
    echo "Error: Failed to install OpenSSH server."
    exit 1
fi

# Configure SSH to allow root login with password (temporary, for key setup)
pct exec "$CTID" -- sed -i \
  -e 's/^#PermitRootLogin.*/PermitRootLogin yes/' \
  -e 's/^#PasswordAuthentication.*/PasswordAuthentication yes/' \
  /etc/ssh/sshd_config

# Ensure SSH service starts on boot
pct exec "$CTID" -- systemctl enable ssh

# Restart SSH to apply changes
pct exec "$CTID" -- systemctl restart ssh

# Allow SSH port (22) in container firewall (if UFW is active)
pct exec "$CTID" -- bash -c ' \
  if command -v ufw >/dev/null; then \
    ufw allow 22; \
  fi'

# Output success message
echo "--- Deployment Complete ---"
echo "Code Server LXC container $CTID created with Docker and code-server installed."
echo "Code Server LXC container $CTID should be running on port 8443."
echo "Access Code Server via: https://$IPADDRESS:8443/"
pct exec "$CTID" -- ip a

VS Code

Troubleshooting & best practices

  1. Try HTTP first: http://<container-IP>:8443, e.g., http://192.168.1.92:8443/.
  2. Some checks: pct enter ContainerID, docker ps -a (check container status), docker logs code-server (check container logs).
  3. Check if the service is actually listening. netstat -tulpn | grep 8443. It requires apt install net-tools.
    root@vscode:~# netstat -tulpn | grep 8443
    tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      821/docker-proxy
    tcp6       0      0 :::8443                 :::*                    LISTEN      827/docker-proxy
    root@vscode:~# docker ps -a
    CONTAINER ID   IMAGE                                    COMMAND    CREATED       STATUS                   PORTS                                         NAMES
    43f5fb811784   lscr.io/linuxserver/code-server:latest   "/init"    2 hours ago   Up 2 hours               0.0.0.0:8443->8443/tcp, [::]:8443->8443/tcp   code-server-ct
    1bd8e1d6b439   hello-world                              "/hello"   7 hours ago   Exited (0) 7 hours ago                                                 pensive_hypatia
    root@vscode:~#
    
  4. Once the container is fully provisioned, run this from your local machine: ssh-copy-id root@192.168.1.92. Then log in with: ssh root@192.168.1.92.
  5. Log into Pi-hole’s web interface (e.g., http://192.168.1.43/admin), go to Settings, Local DNS Records, and add myvscode (domain) mapped to 192.168.1.92. Then, ssh root@myvscode; for convenience, create an alias: alias myvscode="ssh root@myvscode".
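Once key-based login works (step 4), it is good practice to turn password authentication back off. The sed edit can be rehearsed on a throwaway copy before touching the real /etc/ssh/sshd_config inside the container; a sketch:

```shell
# Sketch: rehearse the hardening edit on a temporary copy first.
# On the real container you would run the same sed against
# /etc/ssh/sshd_config via `pct exec` and then restart ssh.
tmp=$(mktemp)
printf '#PasswordAuthentication yes\nPermitRootLogin yes\n' > "$tmp"

# Disable password auth; restrict root to key-based logins only.
sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
       -e 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' "$tmp"

result=$(cat "$tmp")
echo "$result"   # PasswordAuthentication no / PermitRootLogin prohibit-password
rm -f "$tmp"
```

On the container itself, the equivalent would be running the same sed via pct exec against /etc/ssh/sshd_config, followed by systemctl restart ssh.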