Existing code can, and often will, win over instructions. Your AI coding assistant, helpful almost to a fault, is constantly at risk of repeating your codebase's compromises[1]. Repeating them faster. And at scale[2].
Don't simply rely on AGENTS.md and subagents to do the heavy lifting. These instructions are hard to get right for every circumstance, and their influence degrades over the course of a session.
Here are my top five things you can do to counteract this:
- Gold Templates: If there is a repeated pattern in your code, spend a bit of time getting one implementation of that pattern up to a gold standard, preferably one that exercises as much of the pattern as practical. When I then prompt Claude to "make another one of those", the results are typically excellent. Make these examples prominent.
- Here be Dragons: The counter-example to the Gold Template is just as powerful, so I mark anti-patterns often, sprinkling `AVOID` comments liberally throughout the code. Write these comments as if the reader is jumping into the file with zero context. If you have three ways of making HTTP calls, make it crystal clear which one to use and which ones to avoid.
- Restructure Your Codebase: Related to the above, and something I cover in my 100 PRs/day/engineer post. Refactor your codebase around domains and keep those domains tight. Think of your codebase as an assembly of largely independent libraries with clear interfaces (as far as practical). You can then instruct the AI to work exclusively in one or two domains. Otherwise the AI will (and must) wander your codebase, and it will pick up all sorts of anti-patterns on the way[3].
- Devtooling: This is a more fine-grained solution, but push everything you can into formatting and lint rules, then wire those into AI hooks like `PostToolUse` and `SessionEnd`, or into git hooks[4]. If the lint error message is customizable, write it as a prompt so it is unambiguous what the AI should do: "This is important because XYZ, fix it properly before continuing." The AI is trying to complete its core task, so a lint error needs context; otherwise the AI may route around the error rather than fix it.
- Embrace AI Refactoring: The silver lining: AI is excellent at refactoring existing code, so you can tolerate a reasonable degree of variation as long as you stay on top of refactoring it back. I often tolerate DRY violations in the moment and sweep through later with a refactoring pass.
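A Gold Template can be as lightweight as a loud header comment on the reference implementation. A minimal sketch in Python, where the validator pattern and every name in it are invented purely for illustration:

```python
# GOLD STANDARD: reference implementation of our (hypothetical) validator
# pattern. When adding a new validator, prompt the AI with
# "make another one like this file" and point it here.
import re


class EmailValidator:
    """Gold-standard validator: one rule, one error message, no surprises."""

    pattern = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def validate(self, value: str) -> list[str]:
        # Contract shared by all validators: return a list of error
        # strings; an empty list means the value is valid.
        if not self.pattern.match(value):
            return [f"{value!r} is not a valid email address"]
        return []
```

The header comment does double duty: it makes the example prominent to humans, and it gives the AI an explicit anchor to copy from.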
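`AVOID` comments work best when they name the blessed alternative in the same breath. A hedged sketch for the three-ways-to-make-HTTP-calls case, with entirely hypothetical helper names:

```python
# Hypothetical module with three generations of HTTP helpers. The AVOID
# comments steer a zero-context reader (human or AI) to the blessed one.
import json
import urllib.request


# AVOID: legacy helper, no timeout and no JSON handling. Kept only for
# old call sites; new code must use get_json() below.
def fetch_raw(url):
    return urllib.request.urlopen(url).read()


# AVOID: transitional wrapper from an old migration. Do not extend.
def fetch_text(url):
    return fetch_raw(url).decode("utf-8")


# PREFERRED: the one blessed way to make HTTP GET calls in this codebase,
# so timeouts and decoding live in a single place.
def get_json(url: str, timeout: float = 10.0) -> dict:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```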
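A custom lint check with a prompt-style error message might look like the following sketch. The rule, helper names, and message are hypothetical; in practice you would run it from a `PostToolUse` hook or a pre-commit git hook and exit nonzero on failure:

```python
# Hypothetical lint check: ban a legacy HTTP helper and explain *why*
# in the error message, so an AI assistant fixes the call site instead
# of routing around the rule.
import pathlib

MESSAGE = (
    "uses legacy_get(): this is important because legacy_get() has no "
    "timeout handling. Replace it with get_json() before continuing; "
    "do not suppress or work around this check."
)


def check_file(path: pathlib.Path) -> list[str]:
    """Return one prompt-style error per offending line."""
    errors = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        # Flag call sites, not the definition of the legacy helper itself.
        if "legacy_get(" in line and not line.lstrip().startswith("def "):
            errors.append(f"{path}:{lineno} {MESSAGE}")
    return errors
```

The message carries both the fix and the reason, which is exactly the context the AI needs to repair the call site rather than silence the check.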
If you have other examples, please let me know!