AI doesn't have a context problem. You do.

Getting consistent results from AI coding tools is less about the model and more about what you give it. It turns out this is a problem we should have solved a long time ago.

A few days ago I came across a series of articles by Benoît Fontaine on context engineering. The series is written in French and is very well done. If you can read French, I strongly recommend it: part 1, part 2, part 3.

It put into words something I had been thinking about for a while.

Sometimes AI is great. Sometimes it’s not.

[Illustration: on the left, a robot coding happily; on the right, a robot coding sadly]

If you use AI coding tools every day, you’ve probably noticed the inconsistency. One day it helps you build something solid in minutes. The next, it confidently generates code that doesn’t match your project at all. Wrong patterns, wrong abstractions, the wrong interpretation of what you want.

Most people blame the model. And sometimes the model is the issue. But more often than not, the real problem is what you gave it to work with.

Context is the difference

No context, and the AI is just guessing. It will produce something, but that something will be generic and disconnected from how your project actually works.

Too much context, and things get messy too. Dumping your entire codebase or a 20-page spec into the conversation doesn’t help. The model loses focus, quality drops, and you start getting contradictions.

The sweet spot is a small, well-defined set of conventions. Short rules. Clear boundaries. Things like: what patterns to use, what to avoid, which layer is responsible for what. Not a novel, a cheat sheet.
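
To make that concrete, here is a sketch of what such a cheat sheet might look like: a short conventions file checked into the repo and handed to the agent. The file contents, paths and rule wording are invented for illustration, not tied to any particular tool or project.

```
# Project conventions (kept short on purpose)

- Domain logic lives in src/domain and never imports from src/infra.
- HTTP handlers stay thin: validate input, call a domain service, map the result.
- Expected failures use the shared Result type; exceptions are for bugs only.
- Database access goes through repository interfaces, never raw queries in services.
- No new dependency without an ADR.
```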

This is what context engineering is about. And it works. But there’s a catch.

You can’t scope what you haven’t separated

[Illustration: a robot lost in a mess of cables, gears and machinery, trying to fix something]

For AI to follow conventions in a specific part of your project, that part needs to actually be a distinct thing. If your domain logic and your infrastructure code are all mixed together, there’s no clean way to tell an agent “handle this area differently” because there is no clear area.

This is where architecture matters more than you might think. Hexagonal architecture, for example, forces a real separation between your domain and the outside world. Dependency injection, interfaces, ports and adapters, all of these create natural walls between concerns. The architecture becomes the map your AI can follow.
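
As a rough sketch of what those walls look like in code (TypeScript here, with names invented for the example): the domain declares a port it needs, and only the infrastructure layer is allowed to implement it.

```typescript
// Domain layer: pure business logic. It declares the port it needs
// (PaymentRepository) and knows nothing about databases or HTTP.
export interface Payment {
  id: string;
  amountCents: number;
  status: "pending" | "captured" | "refunded";
}

export interface PaymentRepository {
  findById(id: string): Promise<Payment | null>;
  save(payment: Payment): Promise<void>;
}

export class RefundService {
  constructor(private readonly payments: PaymentRepository) {}

  async refund(paymentId: string): Promise<Payment> {
    const payment = await this.payments.findById(paymentId);
    if (!payment) throw new Error(`Unknown payment ${paymentId}`);
    if (payment.status !== "captured") {
      throw new Error("Only captured payments can be refunded");
    }
    const refunded: Payment = { ...payment, status: "refunded" };
    await this.payments.save(refunded);
    return refunded;
  }
}

// Infrastructure layer, in a separate module: the adapter that
// implements the port is the only place allowed to talk to a database,
// an ORM, or an external API. That boundary is what lets you tell an
// agent "stay inside the domain" and have it mean something.
```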

The point isn’t to use hexagonal architecture because it’s fashionable. It’s that good architecture creates boundaries, and boundaries are what make targeted, scoped context possible. Without that separation, no amount of well-written conventions will stop the AI’s changes from bleeding across layers.

Not everything fits in the cheat sheet

Some knowledge is too long, too specific, or too rare to keep in the active context all the time. Architecture decision records, detailed design specs, the reasoning behind a technical choice made two years ago, these don’t belong in your base conventions. They’d make the whole thing noisy.

But they do need to exist somewhere. Think of it as a “cold memory”: documents you load on demand when a specific task requires it. Working on the authentication flow? Load the relevant ADR. Refactoring the payment module? Pull in the decision record that explains why it was built that way.

This turns documentation into something useful, instead of something that sits in a Confluence page nobody visits.
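
One lightweight way to wire this up, sketched in TypeScript with invented paths and names: keep a small map from areas of the codebase to their cold documents, and load them only when a task touches that area.

```typescript
import { readFile } from "node:fs/promises";

// Hypothetical map from areas of the codebase to the cold documents
// worth pulling in when a task touches that area. Paths are invented.
const coldMemory: Record<string, string[]> = {
  auth: ["docs/adr/0012-authentication-flow.md"],
  payments: [
    "docs/adr/0007-payment-provider-choice.md",
    "docs/specs/refund-rules.md",
  ],
};

// Returns the relevant documents concatenated, ready to be prepended
// to the prompt or attached to the agent's context for this one task.
export async function loadColdContext(area: string): Promise<string> {
  const files = coldMemory[area] ?? [];
  const docs = await Promise.all(files.map((f) => readFile(f, "utf8")));
  return docs.join("\n\n---\n\n");
}
```

Refactoring the payment module? `loadColdContext("payments")` brings the relevant ADR along for that one task; the rest of the time it stays out of the way.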

Same problem, different day

[Illustration: a robot happily holding a bunch of documentation pages]

Here’s where it gets interesting.

Remove AI from the picture for a second. A new developer joins your team. They have access to clear coding conventions, a well-scoped architecture, ADRs explaining past decisions, and short guides per module. How fast do they get up to speed? How many mistakes do they avoid?

Pretty fast. And quite a few.

This is the same effect. A well-contextualized AI and a well-onboarded developer both benefit from the exact same thing: a codebase that has been documented with care. The conventions, the architecture, the decision records, none of this is new. We’ve known we should write this stuff for years. Most teams just don’t.

AI might finally fix that

Not because AI will write the documentation for you, though it can help with that too. But because, for the first time, the cost of not having it is immediate and hard to ignore.

Bad documentation used to be a slow problem. A new developer takes two weeks to find their feet? Annoying, but survivable. Now, without good context, your AI tools are unreliable every single day. That’s a different kind of pain.

Maybe AI is the push we needed to finally take documentation seriously. Not as a box to check. Not as a wiki nobody reads. But as the actual foundation of how a project communicates what it is and how it works.

The model isn’t the problem. The context is. And the context was always your job to build.