Jono Herrington

Posted on • Originally published at jonoherrington.com

Most People Use AI Like Google. That's Why It Sucks.

Defining systems over refining prompts

For the first month I treated Copilot like a junior engineer with ambition and no guardrails.

It would write code I never asked for. I'd be halfway through a function, and Copilot would leap ahead ... creating new methods, suggesting whole classes, trying to take initiative like a junior who wants to prove themselves.

The breaking point came during a refactor. It output code that didn't just ignore our style guide ... it created variables named R and T. It duplicated logic I'd already abstracted, writing the same pattern three times instead of recognizing the abstraction. The tests broke, and it didn't care. Because why would it? I hadn't taught it to care.

I spent two hours cleaning up a mess that should have taken twenty minutes to write correctly.

We teach engineers to take initiative. But with AI, you have to reel it in.


That became the pattern. I'd prompt, it would overreach, I'd prune. Prompt, overreach, prune. I was spending more time editing AI output than I would have spent writing it myself.

Not because the AI was bad. Because I was asking for answers when I should have been defining the system.

The Markdown File Realization

The shift happened when I stopped trying to re-prompt my way out of bad output. Instead, I changed what brain the AI pulled from.

I created markdown files with our engineering standards. How we write requirements. What questions we ask before we scope. The difference between a user story and a task. How we think about tradeoffs. When we prefer duplication over abstraction. What "clean" means in our codebase.
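
To make that concrete, here's the shape of one of those files ... a trimmed, illustrative sketch rather than a copy of our actual standards:

```markdown
# Engineering Standards (illustrative excerpt)

## Naming
- Descriptive names everywhere. Single-letter variables only as loop indices.

## Abstraction
- Prefer duplication until the same shape shows up three times. Then abstract.
- Search for existing helpers before writing new ones. Don't re-implement what's already there.

## Requirements
- A user story describes an outcome for a user. A task describes an implementation step. Don't mix them.
- Before scoping, answer: who is this for, what breaks without it, what's explicitly out of scope?

## Tests
- Run the existing suite before proposing a change. Broken tests are a stop sign, not a side note.
```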

The next time an agent generated code, it didn't improvise. It executed patterns we'd already defined.

That was the difference. I wasn't prompting a junior engineer anymore. I was orchestrating a senior engineer.

What Senior Engineers Actually Do

Senior engineers don't write better code because they type faster. They write better code because they recognize patterns. They know when to abstract and when to duplicate. They understand the constraints that aren't written down but exist in the culture of the codebase.

You can't prompt that into an AI. You have to encode it.

We built it into skills, rules, agents, hooks. We have markdown files for our BA persona. For our solution architect. For how we handle code review and QA. When an agent generates a spec or writes code, it references these files. It's not improvising. It's executing patterns we've already defined.
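
The layout itself is boring, and that's the point. Something like this (the names are illustrative, not our actual repo):

```text
standards/
  engineering-standards.md   # naming, abstraction, tests, what "clean" means
  requirements.md            # user story vs task, scoping questions
agents/
  business-analyst.md        # the BA persona
  solution-architect.md
  code-reviewer.md
  qa.md
hooks/
  pre-commit.md              # checks that run before anything lands
```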

The old constraint was headcount. "We don't have enough people." Now the constraint is algorithm quality. How well we've encoded judgment. How clearly we've defined what good looks like.

Why Architects Are Winning

Here's what I've noticed. Principal engineers and architects are adopting AI faster and deeper than mid-level developers. Their usage numbers are way higher.

It's not because they're more technical. It's because they already think in systems.

A mid-level engineer treats AI like a pair programmer. Prompt, review, accept, repeat. An architect treats AI like infrastructure. Define the patterns, encode the constraints, let the system execute.

If you can think in systems ... and you can exponentially scale that thinking ... why wouldn't you?

The Skill That Changed

I thought the cost would be craft. The satisfaction of writing a clean function. It wasn't.

The cost was learning a completely different skill. Not coding. Not even prompting. Orchestration.

Understanding how to chain agents. Where to insert human judgment. How to encode standards rigid enough that the system can't violate them, yet flexible enough that it can still surprise you.
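
In practice that looks less like clever prompting and more like a boring workflow file. Here's a sketch ... the step names are hypothetical, the shape is what matters: each agent gets one job and one slice of the standards, and there's an explicit point where a human signs off.

```markdown
# Workflow: requirement → shipped change (illustrative)

1. BA agent: reads agents/business-analyst.md and the raw request, produces a user story.
2. Human checkpoint: I approve or rewrite the story. Nothing proceeds without this.
3. Architect agent: reads agents/solution-architect.md and the story, produces a spec.
4. Implementation agent: reads standards/engineering-standards.md and the spec, writes the code.
5. Review agent: reads agents/code-reviewer.md, flags violations. A human merges.
```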

The engineers struggling with AI aren't struggling with the tool. They're struggling because they spent years optimizing for a different skill ... writing code by hand ... and now the game has changed.

The Question

So here's what I watch for now.

When an engineer hits a wall with AI, do they re-prompt ... or do they re-architect? Do they try to steer the AI with better words, or do they change what the AI is pulling from?

The first approach scales linearly. One prompt, one output, one edit. The second scales exponentially. Define the system once, let it execute forever.

Most people use AI like Google because that's what the interface suggests. Type, accept, move on. It feels like progress.

But progress isn't speed of retrieval. It's leverage.

Google gives you answers.

A senior engineer gives you systems.

Which one are you running?


One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. Subscribe for free.

Top comments (8)

Klaudia Grzondziel

Thanks for the interesting read and for sharing your experience!

I am now experimenting with AI to improve documentation processes in our company. I am using skills, workflows, and chaining agents, but it seems the AI gets lost in both cases: when it has too little context and when it has too much. I ended up with a whole set of instructions, and the AI keeps dropping some of them 🙃 Did you happen to have a similar experience?

Jono Herrington

We hit the same wall. Skills, agents, hooks, rules, standards documented. The AI still dropped things. Not because the standards were unclear. Because we were asking it to hold too many contexts at once.

The fix was not more instructions. It was clearer handoffs. We broke the workflow into discrete steps with narrow context windows. Each agent gets one job and one slice of the standard. They do not need to know everything. They need to know the one thing that matters for this step.

If your AI is dropping instructions, the issue is not the instructions. It is the scope. Narrow the scope, not the standard.

Klaudia Grzondziel

You mean that, for example, instead of giving it an instruction to "write documentation", I should divide it into smaller blocks? "Write architecture document", "write troubleshooting guide", etc.?

Suny Choudhary

Good take.

Using AI well feels less like asking questions and more like managing a process.

Jono Herrington

The engineers getting the most out of AI are not the best prompters. They are the best architects. They build the machine, then let it run.

Vic Chen

This hits home. When I started building LLM-powered tools for financial data analysis, the biggest unlock wasn't a better prompt — it was treating the model as a reasoning engine embedded in a pipeline with memory, tools, and structured context.

Once you shift from 'query → response' to 'context → reasoning → action,' the outputs get dramatically more useful. The analogy I always use: Google is a card catalog; a well-structured AI system is a research analyst who actually remembers your last 20 conversations and has access to your data.

For anything beyond simple lookups, the architecture matters way more than the prompt.

Mykola Kondratiuk

Junior dev is the wrong analogy - I've tried it. Juniors learn from feedback; Copilot doesn't internalize anything. Every session you're back to square one.

Laura Ashaley

That’s a fair point: if you treat AI like a search box, you miss most of what it can actually do when given context, goals, and iteration.