# LLM CLI
`llm` is a command-line tool for interacting with LLMs directly from the terminal. It is perfect for quick queries, piping data, and building shell-based workflows. If you work in the terminal (and as a data scientist, you should), `llm` is an essential tool.
## Installation
```bash
# Install with uv
uv tool install llm

# Verify installation
llm --version

# OpenAI models (e.g., GPT-4o) are supported out of the box -- no plugin needed.
# Just set your API key:
llm keys set openai
# Paste your key when prompted
```
## Basic Usage
```bash
# Simple prompt
llm "What is the capital of France?"

# Use a specific model
llm -m gpt-4o "Explain quantum computing in one paragraph"

# Continue the most recent conversation
llm -c "Now explain it even simpler"

# Pipe input
echo "def hello(): pass" | llm "Add type hints and a docstring to this function"

# Read from a file
llm "Summarize this article" < article.txt

# Continue a specific past conversation
# (conversation IDs are assigned automatically; find them with `llm logs`)
llm "Explain Docker"
llm --cid "$CONVERSATION_ID" "Give me a real-world example"
```
### Piping is powerful
The llm CLI works with Unix pipes, making it easy to integrate into existing workflows:
```bash
# Summarize a log file
tail -100 app.log | llm "What errors are occurring?"

# Classify data
cat products.csv | llm "Categorize each product as 'electronics', 'clothing', or 'food'"
```
## Prompt Templates
Templates save reusable prompts with variable placeholders:
```bash
# Create a template -- this opens your editor on a YAML file
llm templates edit code-review
# Write (parameters use $name placeholders):
#   prompt: |
#     Review this code for bugs, style issues, and performance problems.
#     Be specific and provide fixed code snippets.
#
#     Code:
#     $code
```
Use the template:
```bash
# With a file
llm -t code-review -p code "$(cat main.py)"

# With inline code
llm -t code-review -p code "def add(a, b): return a + b"

# List all templates
llm templates list

# Edit an existing template
llm templates edit code-review
```
### Template with System Prompt
```bash
# Create a more complex template
llm templates edit sql-explain
# Template content (YAML):
#   system: You are a SQL expert. Explain queries clearly with examples.
#   prompt: |
#     Explain this SQL query in plain English, then suggest optimizations:
#
#     $query
```
## Plugins
llm has a rich plugin ecosystem:
```bash
# List installed plugins
llm plugins

# Install plugins
llm install llm-anthropic   # Anthropic Claude
llm install llm-gemini      # Google Gemini
llm install llm-mistral     # Mistral AI
llm install llm-ollama      # Local models via Ollama

# List available models
llm models

# Use a specific model
llm -m claude-3.5-sonnet "Write a haiku about data"
llm -m gemini-pro "What is 2+2?"
```
## Logging and History
Every interaction is logged to a SQLite database:
```bash
# View recent conversations
llm logs

# View logs for a specific conversation
llm logs --cid "$CONVERSATION_ID"

# Search logs
llm logs -q "Docker"

# Export logs as JSON
llm logs --json

# Find the database location (it varies by platform)
llm logs path

# Query it with sqlite-utils:
sqlite-utils "$(llm logs path)" "SELECT model, count(*) FROM responses GROUP BY model"
```
## Batch Processing
Process multiple items efficiently:
```bash
# Process each line of a file
while IFS= read -r name; do
  llm "Generate a professional email for: $name"
done < names.txt >> emails.txt

# Or use xargs for parallel processing
xargs -P4 -I{} llm "Classify: {}" < items.txt >> classifications.txt
```
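Large batches can burn through API credits quickly, so it can help to dry-run the loop with a stand-in command before calling the model for real. A minimal sketch of that pattern -- the `LLM_CMD` variable and the sample `names.txt` are illustration conventions here, not part of llm:

```bash
# Dry-run pattern: point LLM_CMD at `echo` to preview every prompt,
# then switch it to `llm` for the real run.
LLM_CMD="echo"   # change to LLM_CMD="llm" when ready

# Stand-in input data so the sketch runs on its own
printf 'Alice\nBob\n' > names.txt

while IFS= read -r name; do
  "$LLM_CMD" "Generate a professional email for: $name"
done < names.txt > emails.txt

cat emails.txt
```

With `echo` in place, the loop prints each prompt exactly as llm would receive it, which makes it easy to spot quoting or formatting mistakes before spending tokens.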
## Embeddings
llm also supports generating embeddings:
```bash
# OpenAI embedding models are built in (uses the key from `llm keys set openai`)
llm embed -m text-embedding-3-small -c "Hello world"

# Embed two texts to compare their similarity
llm embed -m text-embedding-3-small -c "The cat sat on the mat" > vec1.json
llm embed -m text-embedding-3-small -c "A feline rested on a rug" > vec2.json
```
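The two saved vectors can then be compared with a few lines of standard-library Python; cosine similarity is the usual metric. A sketch -- the stand-in vectors below just make the snippet self-contained; in practice `vec1.json` and `vec2.json` come from the `llm embed` calls above:

```bash
# Stand-in vectors so the snippet runs on its own
echo '[1.0, 0.0, 1.0]' > vec1.json
echo '[0.5, 0.5, 1.0]' > vec2.json

python3 - vec1.json vec2.json <<'EOF'
import json, math, sys

a = json.load(open(sys.argv[1]))
b = json.load(open(sys.argv[2]))

# Cosine similarity: dot product divided by the product of magnitudes
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")
EOF
```

Values close to 1.0 mean the texts are semantically similar; real embedding vectors are just longer JSON arrays, so the same script works unchanged.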
## Combine llm with sqlite-utils
Since llm logs to SQLite, you can use sqlite-utils to analyze your LLM usage:
```bash
sqlite-utils "$(llm logs path)" "SELECT datetime_utc, model, substr(prompt, 1, 50) AS prompt_preview FROM responses ORDER BY datetime_utc DESC LIMIT 10"
```