
The AI-Augmented Developer: How AI Is Changing the Way We Write Code

Gavin Cettolo on April 28, 2026

A few months ago, I found myself doing something I hadn’t done before. Not Googling. Not digging through old Stack Overflow threads. I just… aske...
 
Luca Ferri

Really enjoyed this article, @gavincettolo! The "copilot, not autopilot" framing clicked for me immediately. I've been using GitHub Copilot for a few months, but I still feel like I'm just accepting suggestions without fully understanding them. Is that a red flag, or is it normal at the beginning?

 
Gavin Cettolo

Hey @lucaferri, thanks so much, glad it resonated! Honestly, it's very common at the start. The key shift I'd suggest is to treat every accepted suggestion as a code review moment. Before you hit Tab, ask yourself: "Can I explain what this does and why?" If not, that's your cue to dig deeper. Think of it like reviewing a junior dev's PR: you wouldn't just merge it without reading it, right?

 
Luca Ferri

That's a great analogy! You mention in the article that AI is like a "junior developer that's incredibly fast." But junior devs also make junior mistakes. How do you catch those mistakes when the code looks correct at first glance?

 
Gavin Cettolo

Exactly the right concern. My go-to approach is to read the code as if someone else wrote it, because in a way, they did. I also like to ask the AI itself: "What edge cases could this miss?" or "What could go wrong here?" You'd be surprised how often it flags its own weaknesses when prompted directly. Tests help too, obviously; they're your safety net when the code looks fine but behaves weirdly.
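To make that concrete, here's a small made-up example (the function and scenario are mine, not from the article): an AI-suggested helper that looks fine at a glance, plus the kind of quick test that surfaces the edge case it misses.

```python
# Hypothetical AI-suggested helper: plausible, and fine for the happy path.
def parse_price(text: str) -> float:
    """Convert a price string like "$19.99" to a float."""
    return float(text.strip().lstrip("$"))


# A couple of quick checks act as the safety net.
def test_parse_price():
    assert parse_price("$19.99") == 19.99
    assert parse_price(" $5 ") == 5.0
    # Edge case the suggestion quietly misses: thousands separators.
    # float("1,299.00") raises ValueError, so this line fails the test.
    assert parse_price("$1,299.00") == 1299.0


if __name__ == "__main__":
    test_parse_price()
```

Running it fails on the last assert, which is exactly the point: the test catches the gap while the code still only "looks right".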

 
Luca Ferri

Oh that's clever — asking the AI to critique itself! I never thought of doing that. On another note, you talk about "starting with your own idea" before prompting. But what about when you're totally stuck and have no starting idea at all? Is it okay to just... ask AI from scratch?

 
Gavin Cettolo

Totally fair question. Yes, you can absolutely start from scratch with AI, but I'd still recommend framing it as exploration, not delegation. Instead of "write me a function that does X," try "what are a few different ways I could approach X?" That way, you stay in the driver's seat mentally, even if you don't have a solution yet. It keeps you engaged with the problem, not just the output.

 
Gavin Cettolo • Edited

This reminded me of something I read recently.

A company was experimenting with ranking developers based on how many AI tokens they consumed.
More tokens → higher ranking.

Honestly, I find that… questionable.

It doesn’t encourage better use of AI.
It encourages more use of AI.

And those are not the same thing.

If anything, it risks pushing developers to:

  • rely on AI without thinking
  • optimize for output instead of understanding
  • use the tool just to “score points”

Which is the opposite of what we actually want.

AI should help us think better, not skip thinking entirely.

Curious to hear your take:

👉 should we even try to measure AI usage like this, or is it the wrong metric altogether?

 
Paolo Zero

I’d push back pretty strongly on that metric—it’s measuring the loudness of AI usage, not its value.

Token count is a classic example of a proxy that’s easy to track but poorly aligned with outcomes. It rewards verbosity and dependence, not clarity or judgment. In fact, some of the best uses of AI are efficient: a well-crafted prompt, a quick validation, or using it to challenge an assumption—not generating pages of code.

If anything, that system risks creating perverse incentives:

  • people prompting more than necessary
  • accepting AI output uncritically
  • optimizing for activity instead of impact

A more meaningful direction (even if harder to measure) would be things like:

  • reduction in iteration time
  • quality of solutions (bugs, maintainability)
  • how effectively AI is used in decision-making, not just generation

But even those are tricky—because good engineering is still largely qualitative.

So I’d say: yes, measure impact, but be very careful measuring usage. When the metric becomes the goal, it tends to distort behavior—and this feels like one of those cases.

 
Elen Chen • Edited

Great article as always @gavincettolo!

 
Gavin Cettolo

Thank you @elenchen :)

 
Paolo Zero

Really enjoyed this—especially the framing of AI as a thinking shift rather than just a productivity tool.

The “copilot, not autopilot” idea resonates a lot. In practice, the biggest difference I’ve noticed isn’t just faster coding, but faster iteration on ideas. The loop of “sketch → ask → refine → review” feels like a new kind of feedback cycle that didn’t exist before.

That said, I think the “false confidence” point is the most important one here. AI lowers the friction to produce plausible code, but not necessarily correct or context-aware code. And that gap is where real engineering judgment becomes even more valuable—not less.

One thing I’d add: this shift might gradually redefine what “senior” means. Less about writing code quickly, more about:

  • asking better questions
  • spotting weak assumptions
  • knowing what not to trust

In that sense, AI doesn’t flatten skill—it amplifies the difference between shallow and deep understanding.

Curious how others are handling this: are you finding AI changes how you think about problems, or just how fast you solve them?

 
Gavin Cettolo

Your point about iteration is spot on. I’ve found myself exploring more alternative approaches than before, simply because the cost of trying something is so low.
The “copilot, not autopilot” idea really comes alive in what you said. The moment you switch to autopilot, that’s when subtle bugs and wrong assumptions sneak in.
I like your take on redefining “senior.” It’s increasingly about judgment, not output. Knowing what not to accept from AI is becoming a core skill.
“Spotting weak assumptions” is such an underrated skill, and AI tends to expose that gap quickly. It will happily build on a flawed premise unless you catch it early.
I’ve noticed something similar: AI doesn’t reduce complexity, it just shifts where the complexity lives, from writing code to validating and steering it.
That idea that AI amplifies the gap between shallow and deep understanding really resonates. Two people can use the same tool and get completely different outcomes.

To your question: for me it’s definitely changing how I think, not just how fast I move. I spend more time framing problems clearly because the quality of the answer depends so much on that.

Appreciate this thoughtful comment, it adds a lot to the discussion. Thank you 🙏

 
Paolo Zero

Thank you, Gavin, really appreciate your answer.

 
Gavin Cettolo • Edited

I really like how you described that loop, “sketch → ask → refine → review.” That’s exactly the shift I was trying to capture but you articulated it better. It’s less about speed in isolation and more about compressing the feedback cycle.

 
Gavin Cettolo

Totally agree on false confidence. If anything, AI raises the bar for critical thinking because now you have to constantly ask: does this actually fit my context, or just look right?

 
Sylwia Sečkár

Nothing has changed at all, at least that's what I think 🙃

 
Gavin Cettolo

That’s fair 😄
At the core, we're still solving problems and building software; that part hasn't changed.

What I think has changed is the workflow around it:
less searching, more asking; less boilerplate, more reviewing and guiding.

AI doesn’t replace engineering thinking, but it definitely changes how many of us approach coding day to day. Even if the fundamentals stay the same 🙂

 
Sylwia Sečkár

Absolutely true 🙂😅