
Streamlining Local AI Workflows with Ollama: Prompts, Env & Utilities

To err is human, to blame it on someone else is even more human, Jacob’s Law


Ollama is a lightweight, privacy-focused platform that lets you run large language models (LLMs) locally on your own machine, with no cloud dependency or costly monthly subscriptions required. It’s designed to make working with models like Llama 3, DeepSeek, Gemma, and others as simple as running a command in your terminal.

This is the third article in our three-part series, Complete Windows AI Dev Setup: WSL 2, Docker Desktop, Python & Ollama. In Parts 1 and 2 we covered WSL 2, Docker Desktop, Python, installing Ollama, and coding a full-featured AI CLI. In this Part 3, we’ll show you how to:

  1. Structure, manage, and reuse your system/user prompts via a single mymessages.py module.
  2. Drive all model calls from environment variables loaded with dotenv, avoiding hard-coding variables and values.
  3. Build a utilollama.py helper that summarizes scraped web content, expands it with multiple local models, and opens web searches for a subject in your browser.

Prompt Templates (mymessages.py)

We have separated our “system” and “user” instructions into a series of JSON-style message dictionaries that are used to guide the behavior of the model during runtime:

# assistant_msg is the system prompt, telling the model how to behave overall.
# It enforces accuracy, humility (“say ‘I don’t know’”), and avoids needless back-and-forth clarifications.
assistant_msg = {
    'role': 'system',
    'content': (
        'You are a helpful assistant designed to provide information and answer questions. '
        'You should always strive to give accurate, comprehensive, and helpful responses. '
        'If you do not know the answer, it is better to say "I do not know" than to provide incorrect information. '
        'You should also avoid making assumptions about the user\'s intent or knowledge level. '
        'Do not ask for clarification if the user\'s question is ambiguous or unclear.'
    )
}

# myuser_msg makes it clear to the assistant that the user is constructive and intentional, setting the stage for clear dialogue and better results.
myuser_msg = {
    'role': 'user',
    'content': (
        'You are a user who can ask questions and provide input to the assistant. '
        'You should ask clear and specific questions to get the best responses.'
    )
}

# query_msg is used when we run ollama.chat(...) to generate DuckDuckGo queries.
# It ensures the model outputs exactly one line in the prescribed "QUERY: …" format, with no extra commentary.
query_msg = {
    'role': 'system',
    'content': (
        "/no_think\nYou are a DuckDuckGo query generator. "
        "Input: a user’s free-form prompt. "
        "Output: the single best DuckDuckGo search query, prefixed with 'QUERY: ' (uppercase, followed by a single space), and no other text."
    )
}

# It instructs the assistant to generate well-structured, accurate, and concise summaries of web-scraped content.
query_summarize = {
    'role': 'system',
    'content': (
        'You’re an expert text writer. '
        'The input is raw HTML/text extracted from a web page (it may still contain leftover tags, scripts, ads, boilerplate, etc.). '
        'Your job:\n'
        '1. Strip out any HTML or irrelevant markup (do not include it in your summary).\n'
        '2. Capture all main ideas, arguments, data points, and nuances.\n'
        '3. Produce a well-structured, easy-to-read, and comprehensive document in Markdown.'
    )
}

# Ollama query for generating frontmatter
query_frontmatter = {
    'role': 'system',
    'content': (
        'You are an AI model that generates a frontmatter for a Hugo page.\n'
        'The frontmatter must include: title, date (YYYY-MM-DD), description, categories, and keywords.\n'
        'The title should be a concise and descriptive name for the page.\n'
        'The description should explain the content of the page.\n'
        'Categories and keywords should be relevant to the page.\n'
        'The keywords line holds comma-separated terms relevant to the content.\n'
        'Format categories as a YAML array.\n'
    )
}

# Ollama query for expanding content
query_expand = {
    'role': 'system',
    'content': (
        'You are an AI model that clarifies, proofreads, explains, and expands a given text.\n'
        'The expanded text should be more detailed, comprehensive, easy-to-read with many examples and clear headings.\n'
    )
}

# Ollama query for proofreading and validating the content.
check_proof = {
    'role': 'system',
    'content': (
        'You are a meticulous academic proofreader and text validator.\n'
        'Analyze ONLY the substantive content below, ignoring all front matter headers.\n'
        'Proofread, validate, fix spelling and grammar, punctuation, and flag ambiguous phrasing.\n'
        'Provide a detailed report of any issues, including their location.\n'
    )
}

By cleanly separating prompt templates from runtime configuration, our code remains modular, testable, and easy to extend.
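For example, here’s a minimal sketch of how one of these templates can be plugged into the official ollama Python client. The sample user prompt and the QUERY:-stripping step are our own illustration, not part of the article’s code:

import ollama  # Official Ollama Python client

import mymessages

# Pair the query-generator system prompt with a free-form user prompt.
response = ollama.chat(
    model='qwen3:8b',
    messages=[
        mymessages.query_msg,
        {'role': 'user', 'content': 'best hiking trails near Madrid in autumn'},
    ],
)

# query_msg instructs the model to reply with exactly one "QUERY: ..." line,
# so stripping the prefix recovers the bare search query.
answer = response['message']['content'].strip()
query = answer.removeprefix('QUERY: ')
print(query)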

Environment Variables (.env)

The .env file lets you adjust runtime parameters without changing your code. We load these via python-dotenv early in main():

# Local Hugo Server URL
BASE_URL=http://192.168.1.36:1313

# Global timeout in seconds for all HTTP requests.
# Prevents our CLI from hanging indefinitely if a site is unresponsive.
REQUEST_TIMEOUT=15

# Name of the Ollama model we will chat with by default
MODEL=deepseek-r1:8b

# Models for tasks
MYMODEL_FRONTMATTER=qwen3:8b # Default model for frontmatter
MYMODEL_QUERY=qwen3:8b # Default model for queries
MYMODEL_SUMMARIZE=qwen3:8b # Default model for summarization
MODELS_NEWCONTENT=qwen2.5:72b llama3.1:70b deepseek-r1:70b  # Default models for new content creation
MAX_RESULTS=2 # Maximum number of results to return
MAX_RETRIES=4 # Maximum number of retries for API calls
RSS_FEED_URLS=http://feeds.bbci.co.uk/news/rss.xml https://rss.cnn.com/rss/edition.rss # RSS feed URLs to fetch news from
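Every value read from a .env file comes back as a string, so it pays to convert types once, right after load_dotenv(). A minimal sketch (the names match the file above; the int()/split() conversions are our own convention, python-dotenv does not apply them automatically):

import os
from dotenv import load_dotenv

load_dotenv()  # Reads .env from the current working directory by default

BASE_URL = os.getenv("BASE_URL", "http://localhost:1313")
REQUEST_TIMEOUT = int(os.getenv("REQUEST_TIMEOUT", "15"))       # string -> int
MAX_RESULTS = int(os.getenv("MAX_RESULTS", "2"))
MAX_RETRIES = int(os.getenv("MAX_RETRIES", "4"))
MODELS_NEWCONTENT = os.getenv("MODELS_NEWCONTENT", "").split()  # space-separated -> list
RSS_FEED_URLS = os.getenv("RSS_FEED_URLS", "").split()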

utilollama.py

nvim utilollama.py

import os
import urllib.parse  # For URL-encoding search queries
import webbrowser  # For opening search results in the browser

import mymessages
from util import display_text_color, display_alarm_color, call_ollama  # Utility functions for colored output and Ollama API calls
from dotenv import load_dotenv  # For loading environment variables from .env files
from colorama import Fore

script_dir = os.path.dirname(os.path.abspath(__file__))
dotenv_path = os.path.join(script_dir, '.env')
# Load environment variables from .env file
load_dotenv(dotenv_path, override=True)
# Get the default model for query generation and summarization from environment variables
model_query = os.getenv("MYMODEL_QUERY", "qwen3:8b")
# Get the list of models for content expansion from environment variables
models_newcontent = os.getenv("MODELS_NEWCONTENT", "").split()

def summarize(content):
    """Summarize content using Ollama API.

    Args:
        content (str): The content to summarize.
    Returns:
        str: The summarized content.
    """
    # Check if content is provided and is a string
    # This is important to ensure the function works correctly
    if not content:
        display_alarm_color("No content provided for summarization.", Fore.RED)
        return None
    if not isinstance(content, str):
        display_alarm_color("Content must be a string.", Fore.RED)
        return None
    # Check if content length is within reasonable limits
    # This is important to avoid sending too much data to the API
    if len(content) > 100000:
        display_alarm_color("Content is too long for summarization. Please provide shorter content.", Fore.RED)
        return None
    # Debug print: show the content being summarized
    display_text_color(f"Summarizing content: {content}", Fore.BLACK)

    # Call the Ollama API with the content, the summarization system prompt
    # (mymessages.query_summarize), and the model name loaded from .env.
    # The role is "user" because the content is sent as a user message.
    try:
        return call_ollama(content=content,
                           system_prompt=mymessages.query_summarize,
                           model_name=model_query,
                           role="user")
    except Exception as e:
        # Graceful error reporting
        display_alarm_color(f"Ollama summarization failed: {e}", Fore.RED)
        # Re-raise as a RuntimeError so the caller can handle the failure
        # gracefully instead of crashing the program.
        raise RuntimeError(f"Ollama summarization failed: {e}. summarize, utilollama.py")

def call_ollamas(content: str) -> None:
    """Call the Ollama API to expand the content using multiple models.

    Args:
        content (str): The content to expand.
    Returns:
        None
    """
    # Check if content is provided and is a string
    # This is important to ensure the function works correctly
    if not content:
        display_alarm_color("No content provided for expansion.", Fore.RED)
        return None
    if not isinstance(content, str):
        display_alarm_color("Content must be a string.", Fore.RED)
        return None
    # Check if content length is within acceptable limits
    # This is important to avoid sending too much data to the API
    if len(content) > 100000:
        display_alarm_color(
            "Content is too long for expansion. Please provide shorter content.", Fore.RED)
        return None

    # Display the content being expanded
    # This is a debug print statement
    display_text_color(f"Expanding content: {content}", Fore.BLACK)

    # models_newcontent is parsed from MODELS_NEWCONTENT in .env (space-separated),
    # so it is always a list, but it may be empty if the variable is missing.
    # At least one model is required for content expansion.
    if not models_newcontent:
        display_alarm_color(
            "No models provided for content expansion.", Fore.RED)
        return None
    # Display the models being used for content expansion
    display_text_color(
        f"Models for content expansion: {', '.join(models_newcontent)}", Fore.BLACK)
    # Loop through the list of models
    for model_content in models_newcontent:
        # Debug print: show which model is expanding the content
        display_text_color(f"Expanding content with model: {model_content}", Fore.BLACK)
        # Call Ollama API to expand the content
        call_ollama(content=content,
                    system_prompt=mymessages.query_expand,
                    model_name=model_content,
                    role="user",
                    temperature=0.7,
                    max_tokens=20000)

def create_content(subject: str) -> None:
    """Generate and expand content given a subject.

    Args:
        subject (str): The subject for content creation.
    Returns:
        None
    """
    from queryweb import my_duckduckgo_search
    # Check if subject is provided and is a string
    # This is important to ensure the function works correctly
    if not subject:
        display_alarm_color("No subject provided for content creation.", Fore.RED)
        return None
    if not isinstance(subject, str):
        display_alarm_color("Subject must be a string.", Fore.RED)
        return None
    # Check if subject length is within acceptable limits
    # This is important to avoid sending too much data to the API
    if len(subject) > 100000:
        display_alarm_color("Subject is too long for content creation. Please provide shorter content.", Fore.RED)
        return None
    # Display the subject being used for content creation
    # This is a debug print statement
    display_text_color(f"Creating content for subject: {subject}", Fore.BLACK)
    # Call Ollama API to create content
    call_ollamas(content=subject)
    display_text_color("Content expansion completed successfully.", Fore.BLACK)
    # Call DuckDuckGo search with the subject, using the query model loaded
    # from .env (no hard-coded model names)
    my_duckduckgo_search(subject, model_query)
    search_web(subject)  # Open web searches for the subject in the browser

def search_web(subject: str) -> None:
    """Search the web using Google, Bing, and DuckDuckGo.

    Args:
        subject (str): The subject to search for.
    Returns:
        None
    """
    # Check if subject is provided and is a string
    # This is important to ensure the function works correctly
    display_text_color("Searching the web...", Fore.BLACK)
    if not subject:
        display_alarm_color("No subject provided for web search.", Fore.RED)
        return None
    if not isinstance(subject, str):
        display_alarm_color("Subject must be a string.", Fore.RED)
        return None
    # Check if subject length is within acceptable limits
    # This is important to avoid sending too much data to the search engines
    if len(subject) > 100000:
        display_alarm_color(
            "Subject is too long for web search. Please provide shorter content.", Fore.RED)
        return None

    # Encode the subject for URL compatibility
    # This is important to ensure that the subject can be used in URLs without issues
    query = urllib.parse.quote_plus(subject)

    # Display the subject being searched
    # This is a debug print statement
    display_text_color(f"Searching the web for: {subject}", Fore.BLACK)
    # Prepare the URLs for web search
    urls = [
      f"https://www.google.com/search?q={query}",
      f"https://www.bing.com/search?q={query}",
      f"https://duckduckgo.com/?q={query}",
      f"https://www.perplexity.ai/search?q={query}",
      f"https://andisearch.com/?query={query}",
      f"https://you.com/search?q={query}",
      ]

    # Register Google Chrome with the webbrowser module so URLs open there.
    # If Chrome is not found, fall back to the system default browser.
    chrome_path = r"C:\Program Files\Google\Chrome\Application\chrome.exe"
    try:
        if not os.path.exists(chrome_path):
            raise FileNotFoundError(chrome_path)
        webbrowser.register(
            'chrome',  # Name to register the browser under
            None,  # No constructor class; we pass a browser instance instead
            webbrowser.BackgroundBrowser(chrome_path)  # Path to the Chrome executable
        )
        browser = webbrowser.get('chrome')  # Get the registered Chrome browser

    except Exception:
        # Chrome is missing or registration failed: use the default browser
        display_alarm_color(
            "Chrome browser not found, using default browser.", Fore.YELLOW)
        browser = webbrowser  # The module itself also exposes open_new_tab()

    # Open the URLs in new tabs of the browser
    # This will open each URL in a new tab of the registered browser
    for u in urls:
        try:
            browser.open_new_tab(u)  # Open each URL in a new tab
            display_text_color(f"Opened URL: {u}", Fore.GREEN)
        except Exception as e:
            # If there is an error opening the URL, display an alarm color message
            display_alarm_color(f"Failed to open URL {u}: {e}", Fore.RED)

if __name__ == "__main__":
    # Example usage: create, expand, and research content for a sample subject
    subject = "Python webbrowser automation"
    create_content(subject)
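utilollama.py leans on the call_ollama helper we built in Part 2’s util module. If you skipped that article, here is a minimal sketch of what such a helper might look like; it is an illustration on top of the ollama client, not the exact Part 2 implementation, and the temperature/max_tokens mapping to Ollama options is our assumption:

import ollama

def call_ollama(content, system_prompt, model_name, role="user",
                temperature=0.8, max_tokens=8192):
    """Send one system message plus one user message to a local Ollama model."""
    response = ollama.chat(
        model=model_name,
        messages=[system_prompt, {'role': role, 'content': content}],
        # Ollama exposes sampling parameters through the options dict;
        # num_predict caps the number of generated tokens.
        options={'temperature': temperature, 'num_predict': max_tokens},
    )
    return response['message']['content']

With that in place, running python utilollama.py expands the sample subject with every model listed in MODELS_NEWCONTENT, generates a DuckDuckGo query, and opens the web searches in your browser.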