“I’d far rather be happy than right any day.” (Douglas Adams, The Hitchhiker’s Guide to the Galaxy)
Ollama is a lightweight, privacy-focused platform that lets you run large language models (LLMs) locally on your own machine, with no cloud dependency or costly monthly subscriptions required. It’s designed to make working with models like Llama 3, DeepSeek, Gemma, and others as simple as running a command in your terminal.
Follow these steps to get up and running with this platform, which is widely popular among developers and researchers who want to experiment with LLMs without relying on external APIs or internet access.
Install Windows Subsystem for Linux 2 (WSL 2). Open PowerShell as Administrator, execute wsl --install, and then restart your machine.
Install Docker Desktop for Windows. This is a native Windows application that simplifies the process of building, sharing, and running containerized applications and microservices. Download Docker Desktop for Windows, run the installer, and ensure Enable WSL 2 Windows Features is selected. Verify WSL 2 integration by right-clicking the Docker whale icon in the system tray and selecting Settings. In the General tab, confirm that Use the WSL 2 based engine is selected.
# To confirm successful Docker installation and integration with WSL 2, open a WSL distribution (e.g., Ubuntu) and run the following commands:
➜ ~ docker --version
Docker version 28.1.1, build 4eba3
# Runs a simple built-in Docker image to test the installation.
➜ ~ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
e6590344b1a5: Pull complete
[...]
Hello from Docker!
This message shows that your installation appears to be working correctly.
Install Python from python.org. Click the Download Python button that appears first on the page to download the latest version of the Windows installer. Run the installer and check Add python.exe to PATH to ensure you can use the python command from any terminal, and check Use admin privileges when installing py.exe if you want all users on the computer to be able to use the Python launcher (py). Click Install and wait for the installation to complete. Click Disable path length limit and restart your computer.
Open the command prompt as an administrator, type python, and press Enter. If your Python path was correctly added, Python should launch immediately. Otherwise, press the Win + R keys, type sysdm.cpl, and click OK to open System Properties. Navigate to the Advanced tab, then click the Environment Variables… button. In the User variables section (or System variables if you installed for all users), double-click the entry named Path. Click the New button and paste the path to your Python executable, e.g., C:\Users\Owner\AppData\Local\Programs\Python\Python313\, and the path to its Scripts directory, C:\Users\Owner\AppData\Local\Programs\Python\Python313\Scripts\. Click OK on all open windows to save the changes. Close and reopen any command prompt or terminal windows for the changes to take effect.
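To double-check which interpreter the python command now resolves to, a short script like the following can help (a minimal sketch; the printed version and path will differ on your machine):

```python
import sys

# Print the version and location of the interpreter currently running.
print(sys.version_info[:3])   # version tuple, e.g. major/minor/micro
print(sys.executable)         # full path to the python executable

# A quick sanity check that this is a reasonably recent Python.
assert sys.version_info >= (3, 8), "Python 3.8+ expected"
```

If sys.executable points somewhere other than the installation you just configured, the PATH entry above is the first thing to revisit.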
Configuring Windows Security. Controlled Folder Access (CFA) is a built-in security feature in Windows designed to protect sensitive files and folders from unauthorized modifications, particularly from ransomware and other malicious software. To prevent CFA from interfering with Python and VSCode operations, these applications must be explicitly added to the allowed list. Open Windows Security, select Virus & threat protection, click Manage ransomware protection, and under the Controlled folder access section, click Allow an app through Controlled folder access. Navigate to and select the executable files for VSCode (C:\Users\Owner\AppData\Local\Programs\Microsoft VS Code\Code.exe) and Python (C:\Users\Owner\AppData\Local\Programs\Python\Python313\python.exe), adjusting the paths to your specific VSCode and Python installations.
Installing Visual Studio Code (VSCode). Download the VSCode installer from the official website and follow the on-screen instructions to complete the installation. Select your preferred theme and install the Python extension.
Installing Ollama and Downloading Your First Model. Visit the official Ollama website, download the 64-bit standard Windows installer, and run it as an administrator. Then download the DeepSeek-R1 model, ensuring it’s compatible with your system, e.g., ollama pull deepseek-r1:32b.
# Verify the installation.
# Displays the installed Ollama version.
ollama --version
ollama version is 0.9.1
# Downloads the DeepSeek-R1 model
ollama pull deepseek-r1:32b
# Lists all models that have been downloaded locally
ollama list
# Starts the Ollama server locally
ollama serve
# Shows currently running models
ollama ps
# Removes a specified model from the system
ollama rm <model-name>
# Run the DeepSeek-R1 Model
ollama run deepseek-r1:32b
# You can interact with the model by simply asking questions like ...
How to get started with Ollama?
To get started with Ollama, follow these organized steps:
### Step-by-Step Guide to Getting Started with Ollama
1. **Download and Install Ollama:**
- Visit the [Ollama GitHub repository](https://github.com/ollama/ollama) and navigate to the "Releases" tab.
- Download the appropriate package for your operating system (Windows, macOS, or Linux).
- For Linux users, consider using a script like `curl -s https://raw.githubusercontent.com/ollama/ollama/main/install.sh | bash` but ensure it's safe before execution.
1. **Initialize Configuration:**
- Run the command `ollama init` to create the configuration file in `~/.config/ollama/config.json`.
[...]
# To exit the interactive session, type /bye or press Ctrl+D.
/bye
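Besides the interactive CLI, ollama serve exposes a local REST API on port 11434 that any HTTP client can call. The following is a minimal sketch using only the Python standard library; the model name and prompt are placeholders, and actually sending the request requires a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks the server to return one complete response.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    """POST a prompt to a locally running Ollama server and return the reply text."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires "ollama serve" running and the model already pulled):
# print(generate("deepseek-r1:32b", "Why is the sky blue?"))
```

This is the same endpoint the CLI talks to, so anything you can do with ollama run can also be scripted against the local server.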
Setting Up Python Integration with Virtual Environments. Virtual environments provide an isolated environment for each project, preventing dependency conflicts and ensuring reproducibility across different development setups.
Create a new directory for your project: mkdir mypython.
Change into it: cd mypython.
Create a virtual environment named .venv: python -m venv .venv. This command creates a .venv folder within your project.
Activate the newly created virtual environment: .\.venv\Scripts\activate. Now, any Python packages installed via pip (pip install package_name) will be confined to this environment and won’t affect the global Python installation. Once activated, your terminal prompt will typically change to show the name of the virtual environment (e.g., (.venv) PS C:\Users\Owner\Documents\myPython>).
Deactivating the Virtual Environment. When you’re done, you can deactivate the virtual environment by simply running: deactivate.
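Another way to confirm which environment a script is running in: inside an activated virtual environment, sys.prefix points at the .venv folder while sys.base_prefix still points at the global installation. A minimal sketch:

```python
import sys

def in_virtualenv():
    # In a virtual environment, sys.prefix differs from the base
    # interpreter's prefix; globally they are the same.
    return sys.prefix != sys.base_prefix

print("virtual env active:", in_virtualenv())
print("environment prefix:", sys.prefix)
```

Running this before and after .\.venv\Scripts\activate makes the isolation visible: the prefix switches from the global Python folder to your project's .venv.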
Upon opening the project folder (mypython) in VS Code, the editor will usually prompt you to select the newly created .venv Python interpreter for the workspace. Click Yes to use it. Otherwise, open the command palette (Ctrl+Shift+P), type Python: Select Interpreter, and choose the Python interpreter located within your .venv folder (\mypython\.venv\Scripts\python.exe).
Installing the Ollama Python Library. With the virtual environment active, execute: pip install ollama. This downloads and installs the official Ollama Python package into the active virtual environment.
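To confirm the package landed in the active environment, you can query its installed version with the standard library (a minimal check that prints a message instead of crashing if the package is missing):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version string, or None if the package is absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

version = installed_version("ollama")
print(version if version else "ollama is not installed in this environment")
```

Because this runs with whichever interpreter is active, it also doubles as a check that you installed into the .venv rather than the global Python.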
Creating and Running Your First Ollama Python Script. Create a new Python source code file named getinfo.py within your mypython project directory.
import ollama
print("hello world")
# Use the generate function for a one-off prompt
# It specifies the deepseek-r1:32b model and calls ollama.generate for a simple prompt.
result = ollama.generate(model='deepseek-r1:32b', prompt='Why is the sky blue?')
# The model's response is extracted from the result dictionary and printed to the console.
print(result['response'])
Running Your Python Script. Ensure the virtual environment is active in your terminal, then run: python getinfo.py. The script will execute, and the model’s response will be printed to the console.
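The generate call in getinfo.py returns only after the full answer is ready. The Ollama Python library also offers a chat interface with streaming, which prints tokens as they arrive. A sketch, assuming a running server and the pulled deepseek-r1:32b model:

```python
def build_messages(question):
    """Build a chat history with a single user turn."""
    return [{"role": "user", "content": question}]

def stream_chat(model, question):
    # Imported lazily so the helper above works even without the package.
    import ollama

    # stream=True yields partial responses as the model generates them.
    for chunk in ollama.chat(model=model, messages=build_messages(question), stream=True):
        print(chunk["message"]["content"], end="", flush=True)
    print()

# Example (requires "ollama serve" running):
# stream_chat("deepseek-r1:32b", "Why is the sky blue?")
```

Streaming is worth the small extra complexity for large models like the 32B variant, where a full answer can take a while and incremental output keeps the session responsive.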