A Python command-line wrapper for the Ollama REPL that supports both local and remote Ollama instances.
- Python 3.7 or higher
- Ollama installed (locally or on a remote server)
- pipx (recommended for installation)
# Install pipx if you haven't already
python -m pip install --user pipx
python -m pipx ensurepath
# Install ol
pipx install .
On first use, ol will automatically:
- Create the configuration directory at ~/.config/ol/
- Initialize default configuration in ~/.config/ol/config.yaml
- Set up command history tracking in ~/.config/ol/history.yaml
- Create directories for templates and cache
Note: Initialization happens when you first run the ol command, not during installation. This ensures the package can be imported without side effects.
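For example, the very first command you run (here, just listing models) performs this setup; afterwards the directory should contain the files described above:
# Any first invocation creates ~/.config/ol/ and its defaults
ol -l
ls ~/.config/ol
# config.yaml  history.yaml  templates/  cache/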
Alternatively, install with pip:
pip install .
For development:
pip install -e .
You can use ol with a remote Ollama instance by setting the OLLAMA_HOST environment variable or using the -h/--host and -p/--port flags:
# Basic text prompt with remote instance (using environment variable)
OLLAMA_HOST=http://server:11434 ol "What is the meaning of life?"
# Using CLI flags (overrides OLLAMA_HOST for this command)
ol -h server -p 11434 "What is the meaning of life?"
# Code review with specific model
OLLAMA_HOST=http://server:11434 ol -m codellama "Review this code" file.py
ol -h server -p 11434 -m codellama "Review this code" file.py
# Local custom port
ol -h localhost -p 11435 -m llama3.2 "Hello"
# Remote with custom port
ol -h api.myhost.com -p 11434 -m codellama "Review this" file.py
# Vision model with remote instance (requires absolute path)
OLLAMA_HOST=http://server:11434 ol "What's in this image?" /absolute/path/to/image.jpg
# List available models on remote instance
OLLAMA_HOST=http://server:11434 ol -l
ol -h server -p 11434 -l
# Debug mode shows exact commands
OLLAMA_HOST=http://server:11434 ol -d "Your prompt here"
# Save Modelfile from remote instance
OLLAMA_HOST=http://server:11434 ol -m llama3.2 --save-modelfile
When using vision models with a remote Ollama instance:
- Use absolute paths for image files
- Images are base64-encoded and sent in the API payload via the /api/chat endpoint (see the sketch below)
- The image data is transmitted directly to the remote Ollama API, not as file paths
- Vision and mixed-content requests automatically use the /api/chat endpoint, while text-only requests use /api/generate
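For reference, a vision request boils down to something like the following. This is only a rough sketch of the request shape against the standard Ollama /api/chat endpoint; the server name, model name, and image path are placeholders:
# Hedged sketch: approximate payload ol sends for a vision prompt
IMG_B64=$(base64 -w0 /absolute/path/to/image.jpg)   # on macOS use: base64 -i image.jpg
curl -s http://server:11434/api/chat -d @- <<EOF
{
  "model": "llama3.2-vision",
  "messages": [
    {"role": "user", "content": "What's in this image?", "images": ["$IMG_B64"]}
  ]
}
EOF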
For local Ollama instances, simply run commands without the OLLAMA_HOST variable:
# List available models
ol -l
# Use a specific model
ol -m llama3.2 "Your prompt here"
# Include file contents in the prompt
ol "Your prompt here" file1.txt file2.txt
# Use a different model with files
ol -m codellama "Review this code" main.py test.py
# Show debug information
ol -d "Your prompt here" file1.txt
# Use default prompt based on file type
ol main.py # Will use the default Python code review prompt
# Save a model's Modelfile
ol -m llama3.2 --save-modelfile
# Save Modelfile to custom directory
ol -m llama3.2:latest --save-modelfile --output-dir ~/.config/ol/templates
- -l, --list: List available models (works with both local and remote instances)
- -m MODEL, --model MODEL: Specify the model to use (default: from config)
- -d, --debug: Show debug information including API request details
- -h HOST, --host HOST: Ollama host (default: localhost). Overrides OLLAMA_HOST for this command.
- -p PORT, --port PORT: Ollama port (default: 11434). Overrides OLLAMA_HOST for this command.
- --set-default-model TYPE MODEL: Set default model for type (text or vision). Usage: --set-default-model text codellama
- --set-default-temperature TYPE TEMP: Set default temperature for type (text or vision). Usage: --set-default-temperature text 0.8
- --temperature TEMP: Temperature for this command (0.0-2.0, overrides default)
- --save-modelfile: Download and save the Modelfile for the specified model
- -a, --all: Save Modelfiles for all models (requires --save-modelfile)
- --output-dir DIR: Output directory for saved Modelfile (default: current working directory)
- --version: Show version information
- --check-updates: Check for available updates
- --update: Update to the latest version if available
- --help, -?: Show help message and exit
- "PROMPT": The prompt to send to Ollama (optional if files or STDIN are provided)
- FILES: Optional files to inject into the prompt
Note:
- Running ol without any arguments displays the current configuration defaults (host, models, temperatures).
- You can pipe input to ol using | or redirect files using <. STDIN input is automatically used as the prompt.
The tool uses a YAML configuration file located at ~/.config/ol/config.yaml. This file is created automatically on first run with default settings.
~/.config/ol/
├── config.yaml # Main configuration file
├── history.yaml # Command history
├── templates/ # Custom templates directory
└── cache/ # Cache directory for responses
models:
  text: llama3.2               # Default model for text
  vision: llama3.2-vision      # Default model for images
  last_used: null              # Last used model (updated automatically)
hosts:
  text: null                   # Default host for text models (null = use OLLAMA_HOST or localhost)
  vision: null                 # Default host for vision models (null = use OLLAMA_HOST or localhost)
temperature:
  text: 0.7                    # Default temperature for text models (0.0-2.0)
  vision: 0.7                  # Default temperature for vision models (0.0-2.0)
default_prompts:
  .py: 'Review this Python code and provide suggestions for improvement:'
  .js: 'Review this JavaScript code and provide suggestions for improvement:'
  .md: 'Can you explain this markdown document?'
  .txt: 'Can you analyze this text?'
  .json: 'Can you explain this JSON data?'
  .yaml: 'Can you explain this YAML configuration?'
  .jpg: 'What do you see in this image?'
  .png: 'What do you see in this image?'
  .gif: 'What do you see in this image?'
You can download and save a model's Modelfile using the --save-modelfile flag:
# Save Modelfile to current directory
ol -m llama3.2 --save-modelfile
# Save Modelfile with tag (colons replaced with underscores in filename)
ol -m llama3.2:latest --save-modelfile
# Save to custom directory
ol -m llama3.2 --save-modelfile --output-dir ~/.config/ol/templates
# Save from remote instance
OLLAMA_HOST=http://server:11434 ol -m llama3.2 --save-modelfile
# Save Modelfiles for ALL models
ol --save-modelfile --all
# Save all Modelfiles to custom directory
ol --save-modelfile --all --output-dir ~/.config/ol/templates
# Save all Modelfiles from remote instance
OLLAMA_HOST=http://server:11434 ol --save-modelfile --all
The saved Modelfile will be named using the pattern: <modelname>-<hostname>-<YYYYMMDD-HHMMSS>.modelfile (an example appears after the notes below)
- Model names are sanitized for filesystem safety: path-hostile characters (/, \, :, spaces, etc.) are replaced with underscores
- The hostname is automatically detected from your system
- Timestamp is in local time format YYYYMMDD-HHMMSS
- When using --all, each model's Modelfile is saved with its own timestamp
- If a model fails to save (e.g., due to filesystem issues), the process continues with remaining models
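As an illustration of the naming scheme only (the actual sanitization happens inside ol; the hostname and timestamp below are examples):
# Approximate the saved filename for llama3.2:latest by hand
MODEL="llama3.2:latest"
echo "${MODEL//:/_}-$(hostname)-$(date +%Y%m%d-%H%M%S).modelfile"
# e.g. llama3.2_latest-myhost-20250101-103000.modelfile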
# Local instance
ol "Explain this code" main.py
ol -m codellama "Review for security issues" *.py
# Remote instance (using environment variable)
OLLAMA_HOST=http://server:11434 ol "Explain this code" main.py
OLLAMA_HOST=http://server:11434 ol -m codellama "Review for security issues" *.py
# Remote instance (using CLI flags)
ol -h server -p 11434 "Explain this code" main.py
ol -h server -p 11434 -m codellama "Review for security issues" *.py
# Local custom port
ol -h localhost -p 11435 -m llama3.2 "Hello"
You can pipe input or redirect files to ol:
# Pipe text input
echo "What is Python?" | ol
# Redirect file content
ol < file.txt
# Combine STDIN with prompt argument
echo "def hello():" | ol "Review this code"
# Pipe with files
cat code.py | ol main.py
# Pipe with remote instance
echo "Explain this" | OLLAMA_HOST=http://server:11434 ol
# Pipe with model selection
echo "Review this code" | ol -m codellama
# Multiline input via pipe
cat <<EOF | ol "Analyze this code"
def example():
    return True
EOF
Note: When STDIN is available (piping/redirection), it's automatically used as the prompt. If both STDIN and a prompt argument are provided, STDIN is combined with the prompt argument.
# Local instance
ol "What's in this image?" image.jpg
# Remote instance (requires absolute path)
OLLAMA_HOST=http://server:11434 ol "What's in this image?" /home/user/images/photo.jpg# Show API request details and debug information
ol -d "Your prompt" file.txt
# Debug with remote instance
OLLAMA_HOST=http://server:11434 ol -d "Your prompt" file.txt# Display current defaults (host, models, temperatures)
ol
You can set default models, temperatures, and hosts using CLI commands:
# Set default text model
ol --set-default-model text codellama
# Set default vision model
ol --set-default-model vision llava
# Set default text temperature
ol --set-default-temperature text 0.8
# Set default vision temperature
ol --set-default-temperature vision 0.5
# Set default host for vision models
ol --set-default-host vision http://remote-server:11434
# Set default host for text models
ol --set-default-host text http://another-server:11434
Note: CLI flags (-h/-p) always override configured hosts for individual commands. Configured hosts are only used when no CLI flags are provided.
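To make the precedence concrete (placeholder hostnames; this assumes no per-type host is set in config.yaml):
# CLI flags win over OLLAMA_HOST, which wins over the localhost default
OLLAMA_HOST=http://server-a:11434 ol -h server-b -p 11434 -l   # talks to server-b:11434
OLLAMA_HOST=http://server-a:11434 ol -l                        # talks to server-a:11434
ol -l                                                          # talks to localhost:11434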
Or manually edit the configuration files in ~/.config/ol/:
- config.yaml: Main configuration file
- history.yaml: Command history
- templates/: Directory for custom templates
- cache/: Cache directory for responses
The configuration is automatically loaded and saved as you use the tool.
# Use custom temperature for a single command (overrides default)
ol --temperature 0.9 "Your prompt here"
# Use lower temperature for more focused responses
ol --temperature 0.3 "Explain this code" main.py
# Temperature works with both text and vision models
ol --temperature 0.8 "What's in this image?" photo.jpg# Check current version
ol --version
# Check for available updates
ol --check-updates
# Update to latest version
ol --update
To uninstall the package:
pipx uninstall ol
To also remove configuration files:
rm -rf ~/.config/ol