Ask HN: Are we going to see more job postings asking for only agentic coding?
Posted by ronbenton 1 day ago
Was perusing job postings today and saw this on a Zapier listing:
"You work through AI agents, not alongside them. Your daily development workflow is built around directing and reviewing agent-written code, not writing it by hand. You have opinions about which models to use for which tasks, you've hit real failure modes and built mitigations, and your workflow is actively evolving. Bonus: you use multi-agent patterns, enable others on your team to build faster with AI, or have scaled AI impact beyond yourself."
This took me aback a little, as I don't think I've yet seen companies talk about hand-writing code as if it were a bad thing.
Is this happening more often?
Comments
Comment by daringrain32781 1 day ago
Agents are still far too unreliable and dumb for this model and need strict discipline by a developer who really understands fundamentals. And sometimes it’s just faster to do the damn thing yourself instead of writing a whole paragraph to an agent that still might do it wrong.
Comment by bediger4000 1 day ago
For the time being, agentic coding makes 10x supermen out of existing medium and long-time coders. How much more code do we need, after all?
Second, look at the rate of "AI" improvement. Agents will start writing themselves in a few weeks or months, then all the agent wranglers and LLM jockeys will become 100x supermen. Soon, humans won't be in the loop at all.
The window in which one could become an agentic-only coder, occupying that sort of market position, looks technologically determined, and technologically finite.
Comment by hackermailman 11 hours ago
The user-facing part of your program can be planned out using conceptual design (https://essenceofsoftware.com/tutorials/). The author of that book teaches it in MIT's old software studio course (https://61040-fa25.github.io/schedule). The point is to plan out modularity. The prof does enjoy using overly complex language to describe this method, but once you read through the slides and tutorials you'll understand why he describes it that way: he's trying to differentiate between features and concepts. For example, HN has an upvoting concept whose purpose is to establish rank, and then a separate karma concept which gates downvoting. Placing both functions inside the upvoting concept breaks modularity, and conceptual design makes this obvious once you practice with it. Once everything is planned out this way, generating code is trivial again, in my limited experience; I'm no expert on agentic coding, but I've had success doing this.
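The upvote/karma split above can be sketched as two independent concepts, each with a single purpose. This is only an illustration of the idea, not code from the book or course; the class names and the karma threshold are made up:

```python
# Two independent concepts, per the conceptual-design idea above.
# Upvoting exists only to establish rank; Karma exists only to gate
# privileges. Folding downvote eligibility into Upvoting would break
# the one-concept-one-purpose modularity the comment describes.

class Upvoting:
    """Concept: establish a ranking of items by votes."""
    def __init__(self):
        self.votes = {}                    # item -> vote count

    def upvote(self, item):
        self.votes[item] = self.votes.get(item, 0) + 1

    def rank(self):
        # Highest-voted items first.
        return sorted(self.votes, key=self.votes.get, reverse=True)

class Karma:
    """Concept: track user reputation and gate the downvote privilege."""
    DOWNVOTE_THRESHOLD = 500               # illustrative number, not HN's real rule

    def __init__(self):
        self.points = {}                   # user -> karma

    def award(self, user, n=1):
        self.points[user] = self.points.get(user, 0) + n

    def can_downvote(self, user):
        return self.points.get(user, 0) >= self.DOWNVOTE_THRESHOLD
```

Because neither class knows about the other, either concept can be changed (or regenerated by an agent) without touching the other.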
All the code the user won't see can be modeled using one of the 'lightweight' formal methods out there, like Forge or Alloy (https://forge-fm.github.io/book/2026/), where a complex protocol you write, or an entire system, can be tested first to find illicit states. Imagine you're designing a company app that needs multiple logins with different security privileges: this is how you would model your security plan first and make sure no combination of states produces an unexpected breach, and that you haven't missed a state. A custom network protocol that does kernel bypass is another example. The rules of a game you build are another; you don't want the system to reach a state like "winner" unless the player actually won. I now use Forge to plan CSS too, because I have limited design experience and don't want to ship broken CSS states.
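This isn't Forge, but the core idea (enumerate every reachable state and assert no illicit one exists) fits in a few lines of plain Python. The login model here, with its roles and transition rules, is invented purely for illustration:

```python
# Tiny explicit-state model: a session is (role, authenticated, panel_open).
# The transitions below are the "protocol"; the checker walks every state
# reachable from the initial ones and flags any state where the admin
# panel is open without an authenticated admin.

def transitions(state):
    role, authed, panel = state
    if not authed:
        yield (role, True, panel)          # log in
    if authed:
        yield (role, False, False)         # log out also closes the panel
    if authed and role == "admin":
        yield (role, authed, True)         # only authed admins open the panel

def reachable(initial):
    seen, frontier = {initial}, [initial]
    while frontier:
        for nxt in transitions(frontier.pop()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def illicit(state):
    role, authed, panel = state
    return panel and not (authed and role == "admin")

states = set()
for role in ("user", "admin"):
    states |= reachable((role, False, False))

assert not any(illicit(s) for s in states)
```

Forge/Alloy do this same exhaustive search symbolically and at much larger scale; the Python version only works because this toy state space is tiny.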
Now generate the whole system as modules and never look at the code. The same property tests I used for the Forge model I turn into an oracle, and then I blast the agent-written code with random inputs.
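The oracle pattern in that last sentence, roughly: keep a slow, obviously-correct reference next to the generated module and hammer both with random inputs until they disagree. The sort example below is a stand-in, not the commenter's actual setup:

```python
import random

# Oracle testing: a trusted reference plays the role of the property
# tests from the formal model; the "agent-written" implementation must
# agree with it on every randomly generated input.

def oracle_sort(xs):
    return sorted(xs)                      # trusted stdlib reference

def agent_sort(xs):
    # Stand-in for generated code under test (a simple insertion sort).
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def fuzz(runs=1000, seed=0):
    rng = random.Random(seed)              # seeded, so failures reproduce
    for _ in range(runs):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        assert agent_sort(xs) == oracle_sort(xs), xs
    return True
```

A failing assertion prints the offending input, which you can then hand back to the agent as a concrete reproduction instead of a vague bug report.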
I built several gigantic prototypes this way mostly of papers I read in database designs and screwing around with graphical interfaces for them.
Comment by hkonte 2 hours ago
The failure mode I see is people writing one long prose prompt and iterating on it as a unit. When the output is wrong, you cannot tell which part failed. Was it the role definition, the constraints, the output format? No way to know.
Decomposing the prompt into typed named regions first (role, objective, constraints, context, output format) mirrors the modularity concept from Essence of Software. Each region has a single purpose and can be adjusted independently.
I built github.com/Nyrok/flompt for this, a visual prompt builder that decomposes prompts into 12 semantic blocks and compiles to Claude-optimized XML. Same "design before you build" principle at the prompt layer.
Comment by rl3 1 day ago
Having anticipated this, I'm aiming for 1000x.
Thing is, there's plenty of jerks out there aiming for 10,000x and 100,000x.
It's almost like it's a race to the bottom or something... Huh.
Comment by codingdave 1 day ago
Look at the bigger picture. In many other industries, LLM-based solutions are in place. They were embraced, implemented, people learned what works and does not, and the solutions were built a while ago. They are up and running and just day-to-day business at this point.
But with coding, we're still fighting to make it happen. We see job postings with all that detail because it does not "just work". We keep trying to find the best models, the best practices. People keep saying that "Real Soon Now", LLMs can do our jobs 100%. But at the end of the day, we're still writing the same apps we've been writing. Our output has not changed, except maybe a little more speed alongside a little more slop. People who do get it to work do so by throwing a lot of money at tokens. Is that all we are doing? Funding the AI platform vendors and stressing ourselves over... a minor speed improvement?
Am I the only one that thinks that the tech industry is actually failing at AI, and all the talk and effort about it just proves that point?
Comment by mattmanser 1 day ago
Apart from the ever customer-hostile automation drive of making people give up on customer service.
Comment by jackyli02 1 day ago
Comment by mattmanser 1 day ago
Ironically, you need it to be right and LLMs don't cut it.
Comment by raw_anon_1111 22 hours ago
https://docs.aws.amazon.com/boto3/latest/
I am just linking to the Python version because it’s all on one page. All of the other supported languages are the same - they are all autogenerated from the same definition file by AWS.
Also consider these same APIs are surfaced by the CLI, Terraform, CloudFormation and the AWS CDK.
I’ve been testing writing code and shell scripts against the AWS SDK since 3.5. It helped then; I can mostly one-shot it now, as long as the APIs were available when the model was trained. For a newer API I just have to tell it to “search for the latest documentation”.