Ask HN: Thinking about memory for AI coding agents

Posted by hoangnnguyen 10 hours ago

I’ve been experimenting with AI coding agents in real day-to-day work and ran into a recurring problem: I keep repeating the same engineering principles over and over.

Things like validating input, being careful with new dependencies, or respecting certain product constraints. The usual solutions are prompts or rules.

After using both for a while, neither felt right.

- Prompts disappear after each task.
- Rules only trigger in narrow contexts, often tied to specific files or patterns.
- Some principles are personal preferences, not something I want enforced at the project level.
- Others aren’t really “rules” at all, but knowledge about product constraints and past tradeoffs.

That led me to experiment with a separate “memory” layer for AI agents. Not chat history, but small, atomic pieces of knowledge: decisions, constraints, and recurring principles that can be retrieved when relevant.

A few things became obvious once I started using it seriously:

- Vague memory leads to vague behavior.
- Long memory pollutes context.
- Duplicate entries make retrieval worse.
- Many issues only show up when you actually depend on the agent daily.
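To make that concrete, here is a minimal sketch of the kind of layer I mean. The names (MemoryEntry, MemoryStore) and the keyword-overlap retrieval are placeholders for illustration, not how my actual experiment works:

    # Atomic memory entries (decisions, constraints, principles), deduplicated,
    # with only a few relevant entries retrieved per task to avoid polluting context.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryEntry:
        kind: str                      # "decision" | "constraint" | "principle"
        text: str                      # one short, atomic statement
        tags: set[str] = field(default_factory=set)

    class MemoryStore:
        def __init__(self) -> None:
            self.entries: list[MemoryEntry] = []

        def remember(self, entry: MemoryEntry) -> None:
            # Reject exact duplicates: duplicate entries make retrieval worse.
            if any(e.text == entry.text for e in self.entries):
                return
            self.entries.append(entry)

        def retrieve(self, task: str, limit: int = 3) -> list[MemoryEntry]:
            # Naive keyword overlap stands in for embeddings/BM25; the point is
            # injecting only the few entries relevant to the current task.
            words = set(task.lower().split())
            scored = [(len(words & {t.lower() for t in e.tags}), e) for e in self.entries]
            relevant = [(s, e) for s, e in scored if s > 0]
            relevant.sort(key=lambda pair: pair[0], reverse=True)
            return [e for _, e in relevant[:limit]]

    store = MemoryStore()
    store.remember(MemoryEntry("constraint", "Validate input at every API boundary", {"api", "input", "validation"}))
    store.remember(MemoryEntry("decision", "No new dependencies without review", {"dependency", "review"}))
    print(store.retrieve("add input validation to the api endpoint"))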

AI was great at executing once the context was right. But deciding what should be remembered, what should be rejected, and when predictability matters more than cleverness still required human judgment.

Curious how others are handling this. Are you relying mostly on prompts, rules, or some form of persistent knowledge when working with AI coding agents?

Comments

Comment by 7777777phil 47 minutes ago

Earlier this month I argued why LLMs need episodic memory (https://philippdubach.com/posts/beyond-vector-search-why-llm...), and this lines up closely with what you’re describing.

But I'm not sure it's a prompts-vs-rules problem. It’s more about remembering past decisions as decisions. Things like 'we avoided this dependency because it caused trouble before' or 'this validation exists due to a past incident' have sequence and context. Flattening them into embeddings or rules loses that. I see even the best models making those errors over a longer context right now.
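Rough illustration of the difference, with made-up field names and data:

    # Flattened into a rule, the "why" and the "when" are gone:
    rule = "Do not use dependency X"

    # Kept as an episodic decision, it still has sequence and context:
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DecisionRecord:
        made_on: date
        decision: str
        reason: str                       # the incident or tradeoff behind it
        superseded_by: str | None = None  # later decisions can revise earlier ones

    avoid_x = DecisionRecord(
        made_on=date(2025, 3, 2),
        decision="Avoid dependency X for date parsing",
        reason="Locale-handling bug caused a production incident; see postmortem",
    )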

My current view is that humans still need to control what gets remembered. Models are good at executing once context is right, but bad at deciding what deserves to persist.

Comment by nasnasnasnasnas 10 minutes ago

CLAUDE.md / AGENTS.md can help with this... You can update it for a project with the base information that you want to give it... I think you can also put these in different places in your home dir too (Claude Code / opencode)
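For example, a project-level file for this kind of thing could look something like the snippet below (illustrative content only, and the exact filename/location depends on the tool):

    # CLAUDE.md (or AGENTS.md, depending on the tool)

    ## Engineering principles
    - Validate input at every API boundary.
    - Flag any new dependency before adding it.

    ## Product constraints
    - Exports must stay backward compatible with the v1 report format.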

Comment by hoangnnguyen 9 hours ago

I tried to build an experiment; details are in this dev log: https://codeaholicguy.com/2026/01/24/i-use-ai-devkit-to-deve...

Comment by gauravsc 6 hours ago

We have built something very close to this, a memory and learning layer (versanovatech.com), and we posted about it here: https://news.ycombinator.com/from?site=versanovatech.com