Do as I say not as I TODO

Existing code can, and often will, win over your instructions. Your AI coding assistant, helpful almost to a fault, is constantly at risk of repeating your codebase's compromises[1]. Repeating them faster. And at scale[2].

Don't simply rely on AGENTS.md and subagents to do the heavy lifting. Those instructions are hard to get right for every circumstance, and their pull degrades over the course of a session.
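
For a concrete (and entirely hypothetical) illustration of the kind of instruction that tends to lose this fight, an AGENTS.md rule might read:

```markdown
<!-- Hypothetical AGENTS.md excerpt; none of these rules come from this post. -->
## Conventions
- Never add new code to `utils/`; put helpers next to their only caller.
- All new endpoints return typed errors, never bare strings.
```

Twenty files into a session, the hundred existing counter-examples already sitting in `utils/` usually speak louder than either bullet.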

Here are my top five things you can do to counteract this:

If you have other examples, please let me know!

I write about AI, organizations, and engineering leverage: find out about me and subscribe here.

Discuss and share via the meta page. Filed under AI, Code, and 100PR.

Footnotes

  1. There are a multitude of reasons these compromises exist in the first place. The simplest is also the most common: it was a priority call, a tradeoff made because the engineer simply didn't have time. The good news is that this is no longer a given. The bottleneck used to be the number of software engineers, but that constraint is diminishing (quickly).

  2. This connects to the prompts-are-wishes idea: the more precisely you can articulate what you want, the less room you leave for the curse.

  3. This isn't just a good-versus-bad-pattern situation. What counts as good practice in one part of your codebase might not elsewhere. A restructure greatly reduces the surface area of patterns that are good and useful for a given context.

  4. We don't have any warnings; everything is an error and therefore has to be fixed. To avoid extra pain in local development, you can make these hooks context-aware. For example, CLAUDE_CODE_REMOTE=true signifies things running on Claude Code "cloud". I'm far more aggressive with those hooks, as they're often running asynchronously.
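
     Footnote-level detail, but the context-awareness is easy to sketch. A minimal sh version might look like the following; CLAUDE_CODE_REMOTE is the only name taken from this post, the mode names and everything else are my own assumptions:

     ```shell
     # Hypothetical context-aware hook helper. Remote runs on Claude Code
     # "cloud" are often asynchronous (nobody is watching the terminal), so
     # they get the aggressive treatment; local runs stay in the feedback loop.
     lint_mode() {
       if [ "$CLAUDE_CODE_REMOTE" = "true" ]; then
         echo "aggressive"
       else
         echo "standard"
       fi
     }
     ```

     The hook script can then branch on `$(lint_mode)` to decide how much of the tree to lint, with warnings promoted to errors in both modes.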