Show HN: Runprompt – run .prompt files from the command line
Posted by chr15m 12 days ago
I built a single-file Python script that lets you run LLM prompts from the command line with templating, structured outputs, and the ability to chain prompts together.
When I discovered Google's Dotprompt format (frontmatter + Handlebars templates), I realized it was perfect for something I'd been wanting: treating prompts as first-class programs you can pipe together Unix-style. Google uses Dotprompt in Firebase Genkit and I wanted something simpler - just run a .prompt file directly on the command line.
Here's what it looks like:
---
model: anthropic/claude-sonnet-4-20250514
output:
  format: json
  schema:
    sentiment: string, positive/negative/neutral
    confidence: number, 0-1 score
---
Analyze the sentiment of: {{STDIN}}
Running it:
cat reviews.txt | ./runprompt sentiment.prompt | jq '.sentiment'
The things I think are interesting:
* Structured output schemas: Define JSON schemas in the frontmatter using a simple `field: type, description` syntax. The LLM reliably returns valid JSON you can pipe to other tools.
* Prompt chaining: Pipe JSON output from one prompt as template variables into the next. This makes it easy to build multi-step agentic workflows as simple shell pipelines (see the example after this list).
* Zero dependencies: It's a single Python file that uses only stdlib. Just curl it down and run it.
* Provider agnostic: Works with Anthropic, OpenAI, Google AI, and OpenRouter (which gives you access to dozens of models through one API key).
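For example, a two-step chain might look like this (summarize.prompt and report.prompt are hypothetical file names; chaining maps keys from the first prompt's JSON output to template variables in the second):

# summarize.prompt emits e.g. {"summary": "..."}; report.prompt
# references {{summary}} in its template body
cat app.log | ./runprompt summarize.prompt | ./runprompt report.prompt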
You can use it to automate things like extracting structured data from unstructured text, generating reports from logs, and building small agentic workflows without spinning up a whole framework.
Would love your feedback, and PRs are most welcome!
Comments
Comment by anonym29 12 days ago
It certainly doesn't intuitively sound like it matches the "Do one thing" part of the Unix philosophy, but it does seem to match the "and do it well" part.
That said, I can totally understand a counterargument which proposes that schema validation and processing logic should be something else that someone desiring reliability pipes the output into.
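That downstream-validation approach could be as simple as a jq check on the output fields (a sketch using the sentiment example from the post):

# exit non-zero if the expected fields are missing, failing the pipeline
cat reviews.txt | ./runprompt sentiment.prompt | jq -e 'has("sentiment") and has("confidence")'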
Comment by threecheese 11 days ago
I think you mentioned elsewhere that you don't want to have a lot of dependencies, but as the format evolves, using the reference impl will allow you to work on real features.
Comment by cootsnuck 12 days ago
I wasn't aware of the whole ".prompt" format, but it makes a lot of sense.
Very neat. These are the kinds of tools I love to see. Functional and useful, not trying to be "the next big thing".
Comment by PythonicNinja 12 days ago
"Chain Prompts Like Unix Tools with Dotprompt"
https://pythonic.ninja/blog/2025-11-27-dotprompt-unix-pipes/
Comment by chr15m 12 days ago
"One-liner code review from staged changes" - love this example.
Comment by jedbrooke 12 days ago
Seems like it would be, just swap the OpenAI URL here or add a new one.
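For instance, local servers like Ollama expose an OpenAI-compatible endpoint, so in principle only the base URL needs to change (a sketch of the idea, not runprompt's actual configuration):

# Ollama serves an OpenAI-style chat API locally on port 11434
curl -s http://localhost:11434/v1/chat/completions \
  -H "content-type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "hello"}]}'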
Comment by chr15m 11 days ago
Functions require you to specify them on the command line every time they're invoked. I would prefer a tool like this to default to reading the functions from a hierarchy where it reads e.g. .llm-functions in the current folder, then ~/.config/llm-functions or something like that.
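The lookup described there might be sketched like this (directory names are the hypothetical ones from the comment):

# prefer a project-local functions dir, then fall back to the user config
for dir in ./.llm-functions "$HOME/.config/llm-functions"; do
  [ -d "$dir" ] && functions_dir="$dir" && break
done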
In general I found myself baffled when trying to figure out where and how to configure things. That's probably me being impatient but I have found other tools to have more straightforward setup and less indirection.
Basically I like things to be less centralized, less magic, and less controlled by the tool.
Another thing, which is not the fault of llm at all, is that I find Python-based tools annoying to install. I have to remember the env where I set them up. Contrast with a golang application, which is generally a single file I can put in ~/bin. That's the reason I don't want to introduce a dep to runprompt if I can avoid it.
The final thing that I found frustrating was the name 'llm' which makes it difficult to conduct searches as it is the generic name for what the thing is.
It is an amazing piece of engineering and I am a huge fan of simonw's work, but I don't use llm much for these reasons.
Comment by oddrationale 12 days ago
https://microsoft.github.io/promptflow/how-to-guides/develop...
Comment by __MatrixMan__ 12 days ago
That's typically how we expect bash pipelines to work, right?
Comment by __MatrixMan__ 12 days ago
- arrow up
- append a stage to the pipeline
- repeat until output is as desired
If you're gonna write to some named location and later read from it, you're drifting towards a different mode of usage where you might as well write a Python script.
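With the sentiment example from the post, that loop might run like this, each line being the previous one recalled and extended:

cat reviews.txt | ./runprompt sentiment.prompt
cat reviews.txt | ./runprompt sentiment.prompt | jq -r '.sentiment'
cat reviews.txt | ./runprompt sentiment.prompt | jq -r '.sentiment' | sort | uniq -c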
Comment by meander_water 12 days ago
I've been using mlflow to store my prompts, but wanted something lightweight on the CLI to version and manage prompts. I set up pmp so you can have different storage backends (file, sqlite, mlflow, etc.).
I wasn't aware of dotprompt, I might build that in too.
Comment by leobuskin 12 days ago
#!/bin/bash
# usage: ./promptrun file.prompt
# line 2 of the file holds the model name as a comment; the rest is the prompt
file="$1"
model=$(sed -n '2p' "$file" | sed 's/^# *//')
prompt=$(tail -n +3 "$file")
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "content-type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d "{
    \"model\": \"$model\",
    \"max_tokens\": 1024,
    \"messages\": [{\"role\": \"user\", \"content\": $(echo "$prompt" | jq -Rs .)}]
  }" | jq -r '.content[0].text'

hello.prompt:

#!/usr/local/bin/promptrun
# claude-sonnet-4-20250514
Write a haiku about terminal commands.
Comment by _joel 12 days ago
If you curl/wget a script, you still need to chmod +x it. Git doesn't have this issue as it retains the file metadata.
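i.e. the install is two steps rather than one (the URL below is a placeholder, not the project's actual location):

# download, mark executable, run
curl -LO https://example.com/runprompt
chmod +x runprompt
./runprompt sentiment.prompt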
Comment by vidarh 12 days ago
#!/bin/env runprompt
---
.frontmatter...
---
The prompt.
Would be a lot nicer, as then you can just +x the prompt file itself.

Comment by chr15m 11 days ago
#!/usr/bin/env runprompt