Update: I've published eslint-formatter-prompter — a custom ESLint formatter that turns lint output into AI-friendly prompts.
I've started treating tool output as a prompt surface -- call it tool-augmented prompting. Tell the AI what's wrong, why it matters, and what to do instead. For a lint rule, instead of "Avoid direct database queries", write something like: "Direct database queries bypass our audit logging. Use the db.query() wrapper from @app/db instead -- see src/modules/users/queries.ts for examples."[1]
Linters, compilers, test runners, migration scripts, deploy tooling -- the AI reads all this output. And unlike instructions in CLAUDE.md, which can degrade over a long session as context gets compressed, tool output fires right in the moment.
The problem is that most tool output is written for humans who already know the context. "Prefer const over let" is fine for a developer. For an AI it's not enough. Without the why, the AI can route around the error rather than address it[2]. It's focused on completing the core task, and the importance of incidental feedback can get lost. Same goes for a cryptic compiler warning or a test failure that just prints an assertion diff.
Worse, raw tool output is often verbose, clogging the context window with information the AI doesn't need. A good formatter trims as much as it explains.
This applies beyond linting. We have a custom test helper that wraps our permission checks -- when a test fails, the error message explains which permission boundary was violated and points to the decorator pattern to use. Anywhere the AI is going to see an error message, that message is a chance to steer it.
For lint rules, I wire the formatter into Claude Code's PostToolUse hook so it runs after every file write[3]. The AI sees the feedback immediately and fixes it in the same flow. No separate review cycle.
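The hook wiring lives in Claude Code's settings file. This is a sketch of the shape (`PostToolUse` with a matcher and a command hook is the documented format; the exact command, including pulling the edited file path from the hook's stdin JSON with `jq`, is my assumption -- check the current hooks docs for the field names):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs npx eslint --format ./eslint-formatter-prompter.js"
          }
        ]
      }
    ]
  }
}
```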
Every tool in your chain already talks to the AI. Most of that output is informative, but not directive. That's the gap tool-augmented prompting fills.