Ask HN: How are you managing "prompt fatigue" and lazy LLM outputs?
Posted by thlangu 9 hours ago
I rely heavily on LLMs to help me code side projects and write copy, but lately, I’ve hit a wall with prompt fatigue.
Between my college classes and my sales shifts, my actual dev time is pretty limited. I started noticing that I was spending 20 minutes just arguing with the models to get what I actually asked for. If I don't write a massive, perfectly structured system prompt every single time, the AI defaults to giving me half-finished code (// insert remaining logic here) or wraps everything in that sterile, generic voice (always reaching for words like 'delve' or 'robust').
I got so tired of keeping a messy Notion doc full of "negative constraints" to copy and paste that I ended up just building my own lightweight wrapper (a constraint engine) to front-load all the formatting rules before it hits the model.
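For anyone curious, the "constraint engine" idea is basically just a reusable object that prepends your negative constraints to the system prompt before every call. Here's a minimal sketch of that pattern — all of the names here (ConstraintEngine, build_system_prompt, the rule strings) are invented for illustration, not the actual wrapper:

```python
class ConstraintEngine:
    """Front-loads a fixed set of formatting rules into every system prompt,
    so you don't have to copy/paste them from a notes doc each time."""

    def __init__(self, rules):
        # rules: list of plain-English constraints to enforce on every request
        self.rules = list(rules)

    def build_system_prompt(self, base="You are a careful senior engineer."):
        # Render the rules as a bulleted "hard constraints" block under the base persona.
        bullets = "\n".join(f"- {rule}" for rule in self.rules)
        return f"{base}\nHard constraints:\n{bullets}"


engine = ConstraintEngine([
    "Never elide code with placeholder comments; write every branch in full.",
    "Avoid filler words such as 'delve' or 'robust'.",
    "Return only the code block unless an explanation is requested.",
])

# The resulting string is what you'd pass as the system message to whatever
# model client you're using.
print(engine.build_system_prompt())
```

The win is that the constraints live in code (version-controlled, composable per project) instead of in a Notion doc you have to remember to paste.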
But I'm really curious about how power users here are handling this right now.
Are you guys just keeping massive markdown files of system prompts to copy/paste?
What specific constraints or frameworks are you using to force models to write complete, production-ready code on the first try?
Comments
Comment by HalfEmptyDrum 9 hours ago