I was grabbing a coffee with a buddy yesterday, and he told me this classic performance horror story. One of his teammates was wrestling with a subsystem that was dragging its feet. The setup was pretty standard for an application of its size: they were using the Outbox pattern to push events to Kafka.

They were deep in the weeds: benchmarking everything, looking at event spikes, trying to figure out why the DB was choking. They ended up implementing Postgres partitioning to handle the scale, because that was what they thought would solve the issue. They reached that conclusion by using an LLM agent to guide them through diagnosing the performance problem.

But here’s the kicker: after all that, they sat down, talked it through, and found a better fix. The fix wasn’t a rewrite or a shiny new tool.

It was a simple VACUUM.

That was it. No new code, just a database maintenance task that had been overlooked.
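
For context, a manual VACUUM is nothing exotic: it’s a one-liner in psql (VACUUM ANALYZE outbox;), and you can just as easily fire it from a script. Here’s a minimal sketch, assuming psycopg2 and a hypothetical outbox table (the table name and connection string are my placeholders, not details from the actual story):

```python
# Minimal sketch: run a manual VACUUM on a bloated outbox table.
# The DSN and table name are placeholders; adjust for your setup.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Reclaim dead tuples and refresh planner statistics in one pass.
    cur.execute("VACUUM (ANALYZE) outbox;")

conn.close()
```

In most setups autovacuum should be doing this for you, so the real question is why it wasn’t keeping up: long-running transactions, heavy delete churn on the outbox table, or thresholds that no longer match the write volume are the usual suspects.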

The “Spec” is the new Code Review

This story got me thinking about the world we live in now. With LLMs and AI agents, everyone says, “Oh, you just need to specify the problem correctly and the agent will write the whole codebase.” And sure, that’s mostly true. But we’ve reached a point where specifying the problem is actually more important (and harder) than reviewing the code.

We often forget how much “silent knowledge” we carry around.

When my friend was telling me the story, he mentioned Kafka events and the Outbox pattern. My brain instantly filled in the blanks. I made a dozen assumptions about why those events were there, how the transactionality was handled, and what the partitioning logic looked like. I didn’t have to ask, because I’ve spent years implementing those systems. That knowledge is internalized.
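
For anyone without that background: the heart of the Outbox pattern is that the business write and the event row are committed in the same database transaction, and a separate relay process ships the outbox rows to Kafka afterwards. A rough sketch of the write side (the table and column names here are purely illustrative, not from my friend’s system):

```python
# Sketch of the Outbox pattern's write side: the business row and the event
# row commit (or roll back) together; a separate relay later reads the outbox
# table and publishes to Kafka. All names are illustrative.
import json
import psycopg2

def place_order(conn, order_id: str, payload: dict) -> None:
    with conn:  # one transaction for both inserts
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, payload) VALUES (%s, %s)",
                (order_id, json.dumps(payload)),
            )
            cur.execute(
                "INSERT INTO outbox (aggregate_id, event_type, payload) "
                "VALUES (%s, %s, %s)",
                (order_id, "OrderPlaced", json.dumps(payload)),
            )
```

In many implementations the relay then deletes or marks the rows it has published, and that delete-heavy churn is exactly the kind of workload that leaves dead tuples behind.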

The “Junior” Prompting Gap

Here’s the problem with the “AI will do everything” hype: an engineer without that hands-on experience can’t write a spec that includes the things they don’t know they’re missing.

When I prompt an AI, my spec is (hopefully, most of the time) dense with context. I focus on the critical architectural constraints because I already know how the underlying tech behaves. A junior, or someone just “using the tools,” might generate a massive prompt that misses the one crucial detail, like how Postgres handles dead tuples, because they’ve never had to fix a production outage caused by it.
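
And that detail is cheap to check once you know it exists. Something like the query below (a sketch against pg_stat_user_tables; the connection string is a placeholder) tells you immediately whether dead tuples are piling up faster than autovacuum clears them:

```python
# Sketch: list the tables with the most dead tuples, which is often the first
# clue that VACUUM/autovacuum isn't keeping pace with the write pattern.
import psycopg2

QUERY = """
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;
"""

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
with conn.cursor() as cur:
    cur.execute(QUERY)
    for relname, live, dead, last_av in cur.fetchall():
        print(f"{relname}: {dead} dead / {live} live, last autovacuum: {last_av}")
conn.close()
```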

Something to keep in mind here is that it’s important not to specify too much. You still have to go in small steps, so there aren’t too many blanks for the agent to fill in. On a recent episode of The Pragmatic Engineer podcast, Martin Fowler made a similar point: writing too much design up front is something we learned not to do back when Waterfall was the predominant methodology.

This is something I think is difficult for less experienced developers as well: how do you know when a step is small enough?

Experience isn’t going anywhere

The recent article from Anthropic research, “How AI assistance impacts the formation of coding skills,” points to the value of hard-won knowledge even more.

We’re moving toward a future where a lot of the “doing” becomes transparent. Maybe in a few years, the AI will handle the VACUUM for us, too.

But right now? We aren’t there yet. The value of an experienced engineer isn’t just knowing how to write the code; it’s knowing what not to write. It’s the ability to look at a complex system and say, “Wait, let’s check the basics before we re-engineer the whole partition strategy.”

If you’re a senior dev, don’t sweat the AI tools. Your ability to specify a problem is your superpower. If anything, it’s the folks who rely on the tools without understanding the “why” who should be worried.