
Run Ollama Locally on Windows. From Minimal Chatbot to Full-Featured AI CLI

“Without pain, without sacrifice, we would have nothing.” Chuck Palahniuk, Fight Club



Ollama is a lightweight, privacy-focused platform that lets you run large language models (LLMs) locally on your own machine, with no cloud dependency or costly monthly subscription required. It makes using models like Llama 3, DeepSeek, Gemma, and others as simple as running a terminal command.
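Assuming Ollama is already installed (part 1 of this series covers that), getting a model onto your machine really is a one-liner. The exact model tags available may change over time:

```shell
# Download a model to your machine (run once; tags may vary)
ollama pull deepseek-r1:8b

# Chat with it interactively in the terminal
ollama run deepseek-r1:8b

# See which models you have installed locally
ollama list
```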

This is the second article in our three-part series Complete Windows AI Dev Setup: WSL 2, Docker Desktop, Python & Ollama. If you haven’t read part 1 yet, go read it first and come back here.

AI Chatbot

Let's start with a very basic Python script that uses the ollama library to chat with a model.

# The script starts by importing the ollama library.
import ollama

# It sets the model to deepseek-r1:8b, ensuring that this model is available.
model_name = 'deepseek-r1:8b'

# Initialize conversation with a system prompt (optional) and a user message.
# A list called messages is initialized and used for this purpose.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

#  The first response from the bot is fetched ...
response = ollama.chat(model=model_name, messages=messages)
# ... and printed.
print("Bot:", response.message.content)

# Conversation Loop: The script enters a loop allowing the user to continue the conversation
while True:
    user_input = input("You: ") # It waits for user input.
    if not user_input: # If the input is empty, the loop exits.
        break
    messages.append({"role": "user", "content": user_input})
    # It appends the user input to the messages list.

    # The bot's response is fetched ...
    response = ollama.chat(model=model_name, messages=messages)
    answer = response.message.content
    # ... and printed,
    print("Bot:", answer)
    #  and the assistant's message is appended to the messages list for context.
    messages.append({"role": "assistant", "content": answer})
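One caveat with the loop above: messages grows without bound, so a long session will eventually exceed the model's context window. A minimal sketch of history trimming, assuming (as in the script) that the first entry is the system prompt; the keep limit of 10 is an arbitrary choice for illustration:

```python
def trim_history(messages, keep=10):
    """Keep the system prompt plus the last `keep` messages.

    Assumes messages[0] is the system prompt, as in the script above.
    """
    if len(messages) <= keep + 1:
        return messages
    return [messages[0]] + messages[-keep:]

# Example: a system prompt followed by 20 chat turns
history = [{"role": "system", "content": "You are a helpful assistant."}]
history += [{"role": "user", "content": f"msg {i}"} for i in range(20)]
# The system prompt survives and only the 10 most recent turns remain
trimmed = trim_history(history, keep=10)
```

You would call `messages = trim_history(messages)` just before each `ollama.chat()` call in the loop.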

Ollama Interactive Chat Interface

This Python script creates a feature-rich command-line chat interface for interacting with Ollama language models. It goes beyond basic chat functionality by integrating web crawling, search capabilities, and system command execution, making it a comprehensive AI assistant tool.

Core Chat Functionality

"""
getinfo.py. Interactive Chat with Ollama Models

This script provides a conversational interface with Ollama-supported language models.

Features include:
- Color-coded input/output (colorama)
- Command history management
- Graceful error handling
- Interactive help system
- WSL integration for 'man', 'tldr', 'curl'
- Web-crawl, DuckDuckGo search, quote lookup, content creation
- Ollama model interaction
- Windows maintenance tasks
"""

# Importing necessary libraries
import argparse
import sys, os
import readline # For command line input handling
from dotenv import load_dotenv # For loading environment variables from .env files
from wsl import run_wsl_command, run_help_command, Windows_Maintenance # Importing WSL command functions
import asyncio # Importing asyncio for asynchronous programming
from webcrawl import main as webcrawl_main  # Importing the web crawling function from webcrawl module
from colorama import Fore, Style, init as colorama_init  # For colored terminal output
# Importing utility functions for displaying messages and handling quotes
from util import display_message, display_text_color, display_alarm_color, display_log_color, select_quote, today, free_dictionary_lookup
# Importing the messages module for predefined messages
import mymessages
import traceback
from utilollama import call_ollamas, create_content
# Importing the SearXNG search function
from queryweb import searx_search_fallback, duckduckgo_search

Setup, Bootstrap, and Welcome screen

def bootstrap():
    """Load environment, initialize colorama, and setup readline."""
    load_dotenv() # Load environment variables from .env file
    # Initialize colorama for colored terminal output
    colorama_init(autoreset=True)

    # Basic tab-completion of slash commands
    commands = [
        "/clear", "/exit", "/?", "/man", "/help", "/quote",
        "/create", "/searx", "/duckduckgo", "/crawl", "/ollama", "/maintance", "/define", "/today"
    ]
    # Initialize readline for command line input handling
    readline.set_completer(lambda text, state: [
                           c for c in commands if c.startswith(text)][state])
    # Enable tab completion for commands
    readline.parse_and_bind("tab: complete")

def display_welcome():
    """Display welcome message and help information"""
    display_message("Ollama Chat Interface v1.0")  # Display the welcome message with colors
    # Display the commands header in cyan
    commands = {
        "/clear": "Clear chat history",
        "/?":     "Show this help message",
        "/exit":  "Exit program",
        "/man":   "WSL man/tldr/curl lookup",
        "/help":  "Windows help search",
        "/quote": "Display a random quote",
        "/create": "Scaffold new content",
        "/searx": "SearxNG search (DuckDuckGo fallback)",
        "/duckduckgo": "DuckDuckGo search",
        "/crawl": "Crawl Hugo localhost",
        "/ollama": "One-off Ollama prompt",
        "/maintance": "Windows maintenance tasks",
        "/define": "Define a word using the dictionary",
        "/today": "Display today's date, weather, news, quotes, and system information"
    }
    display_text_color("Available commands:", Fore.GREEN)
    for cmd, desc in commands.items():
        display_text_color(f"  {cmd:<10}  {desc}", Fore.YELLOW)
    print()  # blank line
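A note on the lambda completer in bootstrap(): it raises IndexError once readline asks for a state past the last match. readline silently swallows the exception, so it works, but an explicit function that returns None at the end of the matches is easier to reason about. A sketch of the same behavior (the command list is abbreviated here):

```python
COMMANDS_FOR_COMPLETION = ["/clear", "/create", "/crawl", "/exit", "/help"]

def complete(text, state):
    """Return the state-th command starting with `text`, or None when exhausted."""
    matches = [c for c in COMMANDS_FOR_COMPLETION if c.startswith(text)]
    return matches[state] if state < len(matches) else None

# Wiring it up looks the same as in bootstrap():
# readline.set_completer(complete)
# readline.parse_and_bind("tab: complete")
```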

Command Processing

def cmd_man(arg):
    """Handle the '/man' command"""
    if not arg:
        display_alarm_color("Usage: /man <command>", Fore.RED)
        return
    display_text_color(f"man {arg}", Fore.GREEN)
    # Run WSL commands to display manual pages
    run_wsl_command("man", arg)
    run_wsl_command("tldr", arg)
    run_wsl_command("curl", f"https://cht.sh/{arg}")


def cmd_help(arg):
    """Handle the '/help' command"""
    if not arg:
        display_alarm_color("Usage: /help <topic>", Fore.RED)
        return
    display_text_color(f"help {arg}", Fore.GREEN)
    # Run Windows help command
    run_help_command(arg)


def cmd_clear(messages):
    """Handle the '/clear' command"""
    # Clear conversation history, keeping only the system prompt
    display_text_color("Clearing history…", Fore.YELLOW)
    return messages[:1]  # keep only system prompt


def cmd_quote(arg):
    """Handle the '/quote' command"""
    if arg:
        display_alarm_color("Usage: /quote", Fore.RED)
        return  # No arguments expected for /quote
    display_text_color("Fetching a random quote...", Fore.BLACK)
    # Select a random quote and display it in cyan
    q = select_quote()
    display_text_color(q, Fore.CYAN)


def cmd_create(arg):
    """Handle the '/create' command: scaffold new content for the provided topic."""
    if not arg:
        display_alarm_color("Usage: /create <topic>", Fore.RED)
        return
    display_text_color(f"Creating content for: {arg}", Fore.BLACK)
    # Call the create_content function with the provided topic
    create_content(arg)


def cmd_searx(arg):
    """Handle the '/searx' command: web search via SearXNG (DuckDuckGo fallback)."""
    if not arg:
        # Display an error message if no argument is provided
        display_alarm_color("Usage: /searx <query>", Fore.RED)
        return
    display_text_color(f"Searching for: {arg}", Fore.BLACK)
    # Call the SearXNG search function with the provided query
    searx_search_fallback(arg)

def cmd_duckduckgo(arg):
    """Handle the '/duckduckgo' command: web search via DuckDuckGo."""
    if not arg:
        # Display an error message if no argument is provided
        display_alarm_color("Usage: /duckduckgo <query>", Fore.RED)
        return
    display_text_color(f"Searching DuckDuckGo for: {arg}", Fore.BLACK)
    # Call the DuckDuckGo search function with the provided query
    duckduckgo_search(arg)

async def cmd_crawl(arg):
    """Handle the '/crawl' command: crawl the Hugo localhost for web content."""
    try:
        if not arg:
            display_alarm_color("Usage: /crawl <url>", Fore.RED)
            return
        display_text_color(f"Crawling: {arg}", Fore.BLACK)
        # Call the web crawling function with the provided URL
        display_text_color("Starting web crawl...", Fore.YELLOW)
        # Run the web crawling function asynchronously
        await webcrawl_main(arg)
        display_text_color("Crawl complete.", Fore.GREEN)
    except Exception as e:
        display_alarm_color(f"Crawl error: {e}", Fore.RED)


def cmd_ollama(arg):
    """Handle the '/ollama' command: send a one-off prompt to the Ollama API."""
    if not arg:
        # Display an error message if no argument is provided
        display_alarm_color("Usage: /ollama <prompt>", Fore.RED)
        return
    display_text_color(f"Calling Ollama with: {arg}", Fore.BLACK)
    # Call the Ollama API with the provided prompt
    call_ollamas(arg)

def cmd_maintain():
    """Handle the '/maintance' command: perform Windows maintenance tasks."""
    if sys.platform != "win32":
        display_alarm_color("This command is only available on Windows.", Fore.RED)
        return
    # Display a message indicating that maintenance tasks are being performed
    display_text_color("Performing Windows maintenance tasks...", Fore.BLACK)
    # Call the Windows maintenance function
    Windows_Maintenance()


def cmd_free_dictionary_lookup(arg):
    """Handle the '/define' command: look up a word with the free dictionary API."""
    if not arg:
        # Display an error message if no argument is provided
        display_alarm_color("Usage: /define <word>", Fore.RED)
        return
    display_text_color(f"Defining word: {arg}", Fore.BLACK)
    # Call the free dictionary lookup function with the provided word
    definition = free_dictionary_lookup(arg)
    display_text_color(definition, Fore.CYAN)

def cmd_today():
    """Handle the '/today' command: display today's date, weather, news, quotes, and system information."""
    display_text_color("Displaying today's date and information...", Fore.BLACK)
    # Call the today function to display the information
    today()
    display_text_color(
        "Today's date, weather, news, quotes, and system information displayed.", Fore.BLACK)

COMMANDS = {
    "/clear":  cmd_clear,
    "/?": lambda: display_welcome(),
    "/exit":   None,  # handled specially
    "/man":    cmd_man,
    "/help":   cmd_help,
    "/quote":  cmd_quote,
    "/create": cmd_create,
    "/searx":  cmd_searx,
    "/duckduckgo":  cmd_duckduckgo,
    "/crawl":  cmd_crawl,
    "/ollama": cmd_ollama,
    "/maintance": cmd_maintain,
    "/define": cmd_free_dictionary_lookup,  # Directly use the dictionary lookup function
    "/today": cmd_today,  # Directly use the today function
}
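The dispatch in chat_loop() below splits the input once on the first space, so everything after the command becomes a single argument string. That parsing step can be isolated and tested on its own; parse_command here is a hypothetical helper, not part of the script:

```python
def parse_command(user_input):
    """Split '/cmd the rest' into ('/cmd', 'the rest'), lowercasing the command.

    Mirrors the `cmd, *rest = user_input.split(" ", 1)` idiom used in chat_loop().
    """
    cmd, *rest = user_input.strip().split(" ", 1)
    return cmd.lower(), (rest[0] if rest else "")
```

Splitting with a maxsplit of 1 means multi-word arguments such as search queries survive intact instead of being broken into tokens.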

Main Loop

async def chat_loop(model_name):
    """Primary interactive loop."""
    messages = [
        mymessages.assistant_msg,
        mymessages.myuser_msg,
    ]

    while True:
        try:
            # Get user input
            user_input = input(f"{Fore.BLUE}You:{Style.RESET_ALL} ").strip()
            # If user input is empty, continue to the next iteration
            if not user_input:
                continue
            # If user input does not start with a slash, display an error message
            if not user_input.startswith("/"):
                display_alarm_color(
                    "Commands must start with `/`. Type /? for help.", Fore.RED)
                continue
            # Split the user input into command and argument
            cmd, *rest = user_input.split(" ", 1)
            cmd = cmd.lower() # Normalize command to lowercase
            # Get the argument if provided, otherwise set it to an empty string
            arg = rest[0] if rest else ""

            # Handle special commands
            if cmd == "/exit":
                display_text_color("Goodbye!", Fore.YELLOW)
                break
            # If the command is not recognized, display an error message
            handler = COMMANDS.get(cmd)
            if not handler:
                display_alarm_color(f"Unknown command: {cmd}", Fore.RED)
                continue
            # Execute the command handler
            if cmd == "/clear":
                # If /clear command, pass messages to clear history
                result = handler(messages)
            elif cmd == "/crawl":
                # If /crawl command, handle it asynchronously
                result = await handler(arg)
            elif cmd == "/ollama":
                # If /ollama command, call the Ollama API
                # Append the user input to messages for context
                messages.append({"role": "user", "content": arg})
                result = call_ollamas(arg)
            elif cmd in ("/today", "/maintance", "/?"):
                # These handlers take no arguments
                result = handler()
            else:
                # For other commands, just pass the argument
                result = handler(arg)

            # if /clear command, update messages to clear history
            if cmd == "/clear" and result is not None:
                messages = result
                continue

        except KeyboardInterrupt:
            display_alarm_color("\nInterrupted. Exiting…", Fore.YELLOW)
            break
        except Exception:
            display_alarm_color("Unexpected error:", Fore.RED)
            traceback.print_exc()
            break

Entrypoint

def main():
    """Main entry point for the script."""
    # Initialize the environment, colorama, and readline
    bootstrap()
    # Parse command line arguments
    parser = argparse.ArgumentParser(description="Ollama Interactive CLI")
    parser.add_argument(
        "--version", action="version", version="Ollama Interactive CLI v1.0")
    # Add an argument for the Ollama model name
    parser.add_argument("--model", help="Ollama model (e.g. deepseek-r1:8b)")
    # Parse the command line arguments
    args = parser.parse_args()
    # Set the model name: --model flag first, then the MODEL env var, then a default
    model_name = args.model or os.getenv("MODEL", "deepseek-r1:8b")
    # Display the model name being used
    display_log_color(f"Using model: {model_name}", Fore.MAGENTA, "DEBUG")
    # Display the welcome message and available commands
    display_welcome()
    # Start the chat loop with the specified model name
    print(f"{Fore.GREEN}Type /? for help.{Style.RESET_ALL}")
    # Run the chat loop asynchronously (asyncio.run works on Windows and Unix alike)
    asyncio.run(chat_loop(model_name))
    # Exit the script
    sys.exit(0)

if __name__ == "__main__":
    main()
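The model-name resolution in main() boils down to a simple precedence chain: the --model flag wins, then the MODEL environment variable, then a built-in default. As a standalone sketch (resolve_model is a hypothetical helper introduced here for illustration):

```python
import os

def resolve_model(cli_value, default="deepseek-r1:8b"):
    """CLI flag wins, then the MODEL env var, then the default."""
    return cli_value or os.getenv("MODEL") or default
```

This ordering lets a .env file or shell profile set a machine-wide default while any single invocation can still override it from the command line.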

Automate the process of setting up and running a Python project

The purpose of this batch script is to automate the process of setting up and running a Python project. Specifically, it performs the following tasks:

  1. Environment Preparation: It clears the terminal screen for a clean output and navigates to a specified directory where the Python project is located.
  2. File and Environment Validation: It checks for the existence of the main Python script (getinfo.py) and verifies that the virtual environment is set up correctly. This ensures that the necessary dependencies are available before running the script.
  3. Virtual Environment Activation: It activates the Python virtual environment, allowing the script to run with the correct dependencies and Python version.
  4. Script Execution: After ensuring that everything is in place, it runs the specified Python script.
  5. Error Handling: The script includes checks and informative error messages to handle common issues, such as missing files or problems activating the virtual environment.
@echo off
REM Python Project Runner - Batch Script
REM This script cleans terminal, navigates to project, activates venv, and runs Python code

REM Clear the screen first, so the banner below stays visible
cls

echo ========================================
echo Starting Python Project Runner
echo ========================================

REM Navigate to your Python project directory
echo Navigating to Python project directory...
cd /d "C:\Users\Owner\Documents\myPython"

REM Check if we're in the right directory
if not exist "getinfo.py" (
    echo ERROR: getinfo.py not found in current directory!
    echo Current directory: %CD%
    pause
    exit /b 1
)

REM Check if virtual environment exists
if not exist ".venv\Scripts\activate.bat" (
    echo ERROR: Virtual environment not found!
    echo Looking for: .venv\Scripts\activate.bat
    pause
    exit /b 1
)

echo Activating virtual environment...
REM Activate the virtual environment (using .bat instead of .ps1 for batch compatibility)
call ".venv\Scripts\activate.bat"

REM Verify Python is available
python --version >nul 2>&1
if errorlevel 1 (
    echo ERROR: Python not found or virtual environment activation failed!
    pause
    exit /b 1
)

echo Virtual environment activated successfully!
echo Current Python:
python -c "import sys; print(sys.executable)"

echo ========================================
echo Running getinfo.py...
echo ========================================

REM Run your Python script
python getinfo.py

echo ========================================
echo Script execution completed!
echo ========================================

REM Keep the window open so you can see any output/errors
pause

JustToThePoint Copyright © 2011 - 2025 Anawim. ALL RIGHTS RESERVED.
