Man, I see in fight club the strongest and smartest men who’ve ever lived. I see all this potential, and I see squandering. God damn it, an entire generation pumping gas, waiting tables; slaves with white collars. Advertising has us chasing cars and clothes, working jobs we hate so we can buy shit we don’t need. We’re the middle children of history, man. No purpose or place. We have no Great War. No Great Depression. Our Great War’s a spiritual war…our Great Depression is our lives. We’ve all been raised on television to believe that one day we’d all be millionaires, and movie gods, and rock stars. But we won’t. And we’re slowly learning that fact. And we’re very, very pissed off. (Fight Club)
Portainer is a container management platform that provides a user-friendly interface for deploying, managing, and monitoring containerized applications, especially those running on Docker or Kubernetes.
Portainer lets you deploy and manage containers, images, volumes, and networks from a web UI, monitor their state, and administer multiple Docker or Kubernetes environments from a single place.
Before configuring Docker and the Portainer container, make sure you have an Ubuntu system set up (this could be a physical home server, a VM, or an LXC container running Ubuntu).
LXC containers are ideal: Low resource overhead, fast provisioning (use Ubuntu 24.04 templates), and easy snapshots and backups via Proxmox.
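For instance, before making risky changes you can snapshot or back up the container from the Proxmox host shell. A minimal sketch, assuming the CTID 110 and the 'local' storage used later in this guide:
# Take a named snapshot of CT 110 before changing anything
pct snapshot 110 pre_docker --description "Before Docker/Portainer install"
# Roll back to it if something breaks
pct rollback 110 pre_docker
# Or create a full backup with vzdump (snapshot mode, stored on 'local')
vzdump 110 --mode snapshot --storage local --compress zstd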
This guide covers how to set it up via a script and via the Proxmox web UI. Once everything is deployed, Portainer will be reachable at https://192.168.1.40:9443 (the static IP assigned below).
Script usage instructions:
To create a CT with a custom configuration script, first SSH into your Proxmox server. I strongly recommend creating some kind of directory structure, e.g., /home/your-user/homelab/mydockers. Then create the script with vi or nano: /home/your-user/homelab/mydockers/portainer.sh:
#!/bin/bash
# PROXMOX LXC AUTOMATION SCRIPT (Portainer)
# To run the script: cd /home/nmaximo7/homelab/mydockers/, chmod +x portainer.sh, ./portainer.sh
# echo "Access Portainer via: https://192.168.1.40:9443"
# --- Global Script Settings for Robustness ---
set -euo pipefail # Exit immediately if a command exits with a non-zero status, or if a variable is unset, or if a command in a pipeline fails.
# ========================
# CONFIGURATION SECTION
# ========================
CTID=110 # The unique identifier for the container.
OSTEMPLATE="ubuntu-24.10-standard_24.10-1_amd64.tar.zst" # The template file for the Ubuntu image.
TEMPLATE_STORAGE="local" # The storage location for the template (usually 'local')
CONTAINER_STORAGE="mypool" # The storage location for the container’s disk (ZFS)
DISK_SIZE="80" # The size of the container's disk in GB.
PASSWORD="Anawim" # The root password for the container
HOSTNAME="portainer" # A descriptive hostname for the container
MEMORY=4096 # The amount of RAM allocated to the container (4096 = 4GB)
CORES=2 # The number of CPU cores assigned to the container (2-4 for typical workloads)
BRIDGE="vmbr0" # The network bridge for the container (vmbr0 = default)
IPADDRESS="192.168.1.40" # Static IP address (the /24 CIDR suffix is set separately below)
GATEWAY="192.168.1.1" # Default LAN gateway (router IP)
CIDR="/24" # Adjust if not 255.255.255.0
DNS_SERVERS="1.1.1.1 8.8.8.8" # Cloudflare + Google DNS
# To avoid permission issues on host bind-mounted volumes, you can specify the user/group IDs a container should run as via environment variables.
PUID=1000
PGID=1000
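Note that Portainer itself does not read PUID/PGID; they follow the convention used by many community images (for example linuxserver.io images). A purely illustrative sketch of how they would be passed to such a container over a host bind mount (the image name and paths here are hypothetical for this guide):
# Hypothetical example, not part of this script: an image that honors PUID/PGID
# writes files to the bind mount as UID/GID 1000 instead of root
docker run -d --name example-app \
-e PUID=1000 -e PGID=1000 \
-v /mypool/code:/config \
lscr.io/linuxserver/syncthing:latest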
# ========================
# CONTAINER MANAGEMENT
# ========================
# Checks if a container with the specified CTID already exists;
# if it is running, it stops and deletes it
if pct status "$CTID" >/dev/null 2>&1; then
echo "Container $CTID exists."
# Check if the container is running
if [ "$(pct status "$CTID" | awk '{print $2}')" = "running" ]; then
echo "Stopping running container $CTID."
if ! pct stop "$CTID"; then
echo "Error: Failed to stop container $CTID. Please check manually."
exit 1
fi
sleep 5 # Give it a moment to stop gracefully
else
echo "Container $CTID is not running. No need to stop."
fi
echo "Proceeding with deletion of container $CTID."
if ! pct destroy "$CTID"; then
echo "Error: Failed to destroy container $CTID. Please check manually."
exit 1
fi
else
echo "Container $CTID does not exist. Proceeding with creation."
fi
The line if pct status "$CTID" >/dev/null 2>&1; then (equivalently, if pct status $CTID &>/dev/null; then) deserves a little more explanation: pct status prints the container's state and returns a non-zero exit code when the container does not exist; redirecting its output to /dev/null discards the message, so the if statement reacts only to the exit code.
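You can see the exit-code behaviour from the Proxmox shell (a quick check, assuming CTID 110):
pct status 110 && echo "CT 110 exists" || echo "CT 110 does not exist"
# When the container exists, pct prints 'status: running' or 'status: stopped' and returns 0;
# when it does not exist, pct prints an error and returns a non-zero exit code.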
Here is an alternative that avoids hardcoding sensitive credentials within the script:
# --- Security-Sensitive Variables (DO NOT HARDCODE) ---
# Prompt for the root password for the LXC container.
# This is a critical security improvement that prevents plaintext password exposure.
read -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD
echo # Newline after password input
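If you prefer non-interactive runs, a common pattern (a sketch, not part of the original script) is to take the password from an environment variable and fall back to the prompt only when it is unset:
# Use LXC_ROOT_PASSWORD from the environment if already set, otherwise prompt for it
if [ -z "${LXC_ROOT_PASSWORD:-}" ]; then
read -r -s -p "Enter root password for LXC container $CTID: " LXC_ROOT_PASSWORD
echo
fi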
Let’s continue with our script:
# ========================
# TEMPLATE HANDLING
# ========================
# Checks whether the specified Ubuntu template has already been downloaded to $TEMPLATE_STORAGE.
# pveam is the Proxmox VE Appliance Manager; 'pveam list $TEMPLATE_STORAGE' returns a list of all templates on that storage.
# If the template is missing, download $OSTEMPLATE to $TEMPLATE_STORAGE.
if ! pveam list "$TEMPLATE_STORAGE" | grep -q "$OSTEMPLATE"; then
echo "Downloading Ubuntu template '$OSTEMPLATE'..."
if ! pveam download "$TEMPLATE_STORAGE" "$OSTEMPLATE"; then
echo "Error: Failed to download template '$OSTEMPLATE'. Please check template name and storage."
exit 1
fi
else
echo "Ubuntu template '$OSTEMPLATE' already exists."
fi
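To see which template names are valid before running the script, you can query the Proxmox appliance catalogue manually (assuming the template storage is named 'local'):
pveam update # refresh the appliance index
pveam available --section system | grep ubuntu # list downloadable Ubuntu templates
pveam list local # show templates already present on 'local'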
# ========================
# CONTAINER CREATION
# ========================
# Create a privileged Ubuntu container (--unprivileged 0; see the note on that flag below)
# Create the container with an 80 GB disk on mypool
echo "Creating privileged container $CTID with hostname $HOSTNAME..."
if ! pct create $CTID $TEMPLATE_STORAGE:vztmpl/$OSTEMPLATE \
--storage $CONTAINER_STORAGE \
--rootfs $CONTAINER_STORAGE:$DISK_SIZE \
--password $PASSWORD \
--hostname $HOSTNAME \
--memory $MEMORY \
--cores $CORES \
--net0 name=eth0,bridge=$BRIDGE,ip=$IPADDRESS$CIDR,gw=$GATEWAY \
--nameserver "$DNS_SERVERS" \
--unprivileged 0; then # set --unprivileged 1 to create an unprivileged container instead
echo "Error: Failed to create LXC container $CTID. Check parameters and Proxmox logs."
exit 1
fi
# Post-configuration
pct set $CTID --onboot 1
pct set $CTID --features nesting=1,keyctl=1,mknod=1
# (The container is started explicitly below with 'pct start'; 'pct set' has no --start option.)
Use --password "$LXC_ROOT_PASSWORD" in the pct create command above to avoid hardcoding sensitive credentials within the script.
pct set is used to modify the configuration of an existing LXC container in Proxmox, identified by its container ID ($CTID). For example, to switch the interface to DHCP instead of the static address chosen at creation: pct set $CTID --net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=0. To change the DNS servers: pct set $CTID --nameserver "1.1.1.1 8.8.8.8" (quote the value when passing more than one server). The analogous command for cloud-init-enabled virtual machines, not containers, is qm set $VMID --ipconfig0 ip=$IPADDRESS$CIDR,gw=$GATEWAY.
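Other settings can be adjusted the same way after creation; a few illustrative examples (the values are arbitrary):
pct set $CTID --memory 8192 --cores 4 # grow RAM/CPU without recreating the CT
pct resize $CTID rootfs +20G # enlarge the root disk by 20 GB
pct config $CTID # print the resulting configuration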
# Disable AppArmor for our container
# AppArmor is a Linux security module that enforces Mandatory Access Control (MAC) by restricting what programs can do at the kernel level
# The explicit disabling of AppArmor removes a vital kernel-level security layer, compounding the risk, especially in a privileged container (home lab).
# When AppArmor is enabled, it can sometimes interfere with certain operations within containers, such as networking or accessing certain system resources
echo "Disabling AppArmor for container $CTID at the host level"
echo "lxc.apparmor.profile = unconfined" \
>> /etc/pve/lxc/${CTID}.conf
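You can confirm (or later revert) the change by inspecting the container's config file on the Proxmox host; this is only a sanity check, not part of the script:
grep apparmor /etc/pve/lxc/${CTID}.conf # should print: lxc.apparmor.profile = unconfined
# To re-enable AppArmor later, delete that line and restart the container:
# sed -i '/lxc.apparmor.profile = unconfined/d' /etc/pve/lxc/${CTID}.conf && pct reboot $CTID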
# Start container
pct start $CTID
# Set locale, install and start D-Bus
echo "--- Set Locale and Installing D-Bus ---"
pct exec "$CTID" -- apt-get update
pct exec "$CTID" -- apt-get install -y locales dbus systemd-sysv
pct exec "$CTID" -- sed -i '/en_US.UTF-8/s/^# //g' /etc/locale.gen
pct exec "$CTID" -- locale-gen
pct exec "$CTID" -- update-locale LANG=en_US.UTF-8
pct exec "$CTID" -- systemctl start dbus
# Wait for the container to boot and for network connectivity.
# Instead of using an unreliable 'sleep 5', we use a robust ping loop.
echo "Waiting for container $CTID to boot and acquire network..."
MAX_ATTEMPTS=30 # Maximum number of attempts to check network connectivity
ATTEMPT=0 # Initialize attempt counter
# Loop to check network connectivity
while ! pct exec "$CTID" -- ping -c 1 -W 1 "$GATEWAY" >/dev/null 2>&1; do
# This starts a loop that will continue executing as long as the condition following it evaluates to true.
# ! The loop will continue as long as the command fails, pct exec "$CTID" -- executes a command inside the LXC container with ID $CTID
# ping -c 1 -W 1 "$GATEWAY": sends a single ping (-c 1) to the specified gateway ("$GATEWAY") and sets a timeout of 1 second for the ping response (-W 1)
# >/dev/null 2>&1: it redirects both standard output (stdout) and standard error (stderr) to /dev/null, effectively discarding any output or error messages from the command.
if [[ $ATTEMPT -ge $MAX_ATTEMPTS ]]; then
echo "Error: Container $CTID did not acquire network connectivity within expected time."
exit 1 # Exit with an error if no connectivity
fi
echo "Network not ready, retrying in 2 seconds..."
sleep 2 # Wait for 2 seconds before retrying
ATTEMPT=$((ATTEMPT + 1)) # Increment attempt counter
done
echo "Container $CTID is online and network is active."
# ========================================
# INSTALL DOCKER INSIDE THE CONTAINER
# ========================================
# Based on the Docker docs: Install Docker Engine on Ubuntu, using the apt repository
# Install prerequisite packages for Docker.
echo "--- Install Docker Engine ---"
echo "Updating container packages and installing prerequisites..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed."; exit 1; fi
# ! The block executes (prints an error message and exits with status code 1) if the command fails (returns a non-zero exit status).
# pct exec "$CTID" -- apt-get update: executes the apt-get update command inside the LXC container
if ! pct exec "$CTID" -- apt-get install -y ca-certificates curl gnupg lsb-release; then
echo "Error: Failed to install Docker prerequisites."
exit 1
fi
# Add Docker GPG key and apt repository inside the LXC
pct exec "$CTID" -- bash -c '
# Enables strict error handling
set -euo pipefail
# Creates the directory for keyrings if it doesn’t exist, ensuring the path is ready for the GPG key.
mkdir -p /etc/apt/keyrings
# Fetch and install Docker’s GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| gpg --batch --yes --dearmor -o /etc/apt/keyrings/docker.gpg
# Sets permissions to allow all users to read the GPG key file.
chmod a+r /etc/apt/keyrings/docker.gpg
# Add the Docker repo; $(lsb_release -cs) resolves to the container's release codename (see the note after this block if Docker has no packages for that release yet)
echo "deb [arch=$(dpkg --print-architecture) \
signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
> /etc/apt/sources.list.d/docker.list
# Updates the package index to include the new Docker repository, making Docker packages available for installation.
apt-get update
'
For reference, on Ubuntu 24.10 the resulting /etc/apt/sources.list.d/docker.list looks like this:
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu oracular stable
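If Docker has not yet published packages for the interim release reported by lsb_release (which can happen with non-LTS codenames such as oracular), one workaround, sketched here, is to pin the repository to a recent LTS codename instead:
# Point the Docker repo inside the container at an LTS codename (noble or jammy)
pct exec "$CTID" -- bash -c '
CODENAME=noble # or jammy
ARCH=$(dpkg --print-architecture)
echo "deb [arch=$ARCH signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $CODENAME stable" > /etc/apt/sources.list.d/docker.list
apt-get update
'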
# Install Docker packages (docker-ce, docker-ce-cli, containerd.io).
echo "Installing Docker CE and related packages..."
if ! pct exec "$CTID" -- apt-get update; then echo "Error: apt-get update failed after adding Docker repo."; exit 1; fi
if ! pct exec "$CTID" -- apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin; then
echo "Error: Failed to install Docker CE packages."
exit 1
fi
# Enable and start Docker service.
echo "Enabling and starting Docker service..."
if ! pct exec "$CTID" -- systemctl enable docker; then echo "Error: Failed to enable Docker."; exit 1; fi
if ! pct exec "$CTID" -- systemctl start docker; then echo "Error: Failed to start Docker."; exit 1; fi
# Test Docker installation: skip AppArmor when running hello-world
echo "Testing Docker installation with 'hello-world'…"
if ! pct exec "$CTID" -- \
docker run --security-opt apparmor=unconfined hello-world; then
echo "Error: Docker 'hello-world' test failed even with AppArmor unconfined."
exit 1
fi
echo "Docker installed and tested successfully."
# Passwordless sudo
# Granting passwordless sudo is a security risk on shared or production systems,
# but it is generally considered acceptable in a dedicated home-lab LXC container.
# This configuration grants passwordless sudo access to the root user and any user within the sudo group.
# Note: the append must happen inside the container (hence bash -c); a bare '>> /etc/sudoers'
# after pct exec would be interpreted by the host shell and modify the Proxmox host's sudoers file.
pct exec $CTID -- bash -c 'echo "root ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
pct exec $CTID -- bash -c 'echo "%sudo ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers'
# Check IP addresses
echo "Check IP addresses"
pct exec $CTID -- ip a
# Test connectivity.
echo "Testing external connectivity from container $CTID..."
if ! pct exec "$CTID" -- ping -c 3 google.com; then
echo "Warning: Container $CTID cannot reach google.com. Network connectivity issues may exist."
fi
# ========================================
# DEPLOY PORTAINER
# ========================================
echo "--- Deploying Portainer ---"
pct exec $CTID -- bash -c '
# Ensure Docker is running.
# Wait for Docker to be ready
i=1
while [ "$i" -le 30 ]; do
# Check whether the Docker daemon is responding; if not, retry once per second, up to 30 attempts.
if docker info >/dev/null 2>&1; then
break
fi
echo "Waiting for Docker…"
sleep 1
i=$((i+1))
done
# If container is already running, stop/remove it and clean up any existing container (ignore errors)
docker stop myportainer 2>/dev/null || true
docker rm -f myportainer 2>/dev/null || true
# rm -f myportainer attempts to forcefully remove any existing container named myportainer, if it exists.
# The output is redirected to /dev/null to suppress any error messages if the container doesn’t exist.
# Pull the latest image explicitly
docker pull portainer/portainer-ce:latest
# Create the volume that Portainer Server will use to store its database:
docker volume create portainer_data
# It sets up a new Docker container named myportainer running the latest version of the portainer-ce image
# Run the container
docker run -d \
-p 8000:8000 \
-p 9443:9443 \
--name myportainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data portainer/portainer-ce:latest
echo "Portainer container started successfully!"
'
# Verify container status
pct exec $CTID -- bash -c '
echo "Container status:"
docker ps | grep portainer
apt-get install -y net-tools
netstat -tuln | grep 9443
'
The block above runs the Portainer container with Docker and then verifies that it is up and listening on port 9443.
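If Portainer does not come up, its logs usually explain why (certificate generation, port conflicts, and so on); a quick way to check from the Proxmox host:
pct exec $CTID -- docker logs --tail 50 myportainer
pct exec $CTID -- docker inspect -f "{{.State.Status}} restarts={{.RestartCount}}" myportainer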
# ========================================
# CONFIGURE SSH ACCESS
# ========================================
echo "--- Configuring SSH Server ---"
# Install OpenSSH Server
if ! pct exec "$CTID" -- apt-get install -y openssh-server; then
echo "Error: Failed to install OpenSSH server."
exit 1
fi
# Configure SSH to allow root login with password (temporary, for key setup)
pct exec "$CTID" -- sed -i \
-e 's/^#PermitRootLogin.*/PermitRootLogin yes/' \
-e 's/^#PasswordAuthentication.*/PasswordAuthentication yes/' \
/etc/ssh/sshd_config
# Ensure SSH service starts on boot
pct exec "$CTID" -- systemctl enable ssh
# Restart SSH to apply changes
pct exec "$CTID" -- systemctl restart ssh
# Allow SSH port (22) in container firewall (if UFW is active)
pct exec "$CTID" -- bash -c ' \
if command -v ufw >/dev/null; then \
ufw allow 22; \
fi'
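Once password login works, the usual next step is to copy an SSH key from your workstation and then disable password authentication again. A sketch, run from your workstation against the static IP chosen above:
ssh-copy-id root@192.168.1.40 # install your public key in the container
ssh root@192.168.1.40 "sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config && systemctl restart ssh"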
# Output success message
echo "--- Deployment Complete ---"
echo "LXC container $CTID created with Docker installed."
echo "Portainer container $CTID should be running on port 9443."
echo "Access Code Server via: https://$IPADDRESS:9443/"
pct exec "$CTID" -- ip
# Port 9443 must be allowed through the Ubuntu firewall inside the container (only relevant if UFW is enabled there). Check the firewall rules:
sudo ufw allow 9443/tcp # Allow the Portainer HTTPS port
sudo ufw reload
sudo ufw status # Verify
# Alternatively, to deploy Portainer manually inside the container, first create the volume that Portainer Server will use to store its database:
docker volume create portainer_data
# Deploy Portainer
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
# Sometimes, you may need to restart the container
docker restart portainer
Now that the installation is complete, you can log into your Portainer Server instance by opening a web browser and going to https://localhost:9443, e.g., https://192.168.1.40:9443 (or your container’s IP). On first load, create an admin user and secure the instance.
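When a new Portainer release comes out, upgrading in place follows the pattern Portainer documents: stop and remove the old container, pull the new image, and run it again against the existing portainer_data volume (your settings and database are kept in the volume). Run these inside the LXC container:
docker stop portainer && docker rm portainer
docker pull portainer/portainer-ce:latest
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data \
portainer/portainer-ce:latest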