DEV Community

Sue Smith

Automation is a trade-off

Atrophy of the maintainer's eye

A couple of years ago I wrote a post about abstractions. I’m revisiting it here with AI-assisted coding in mind, because I see similar issues arising. LLMs make it easier than ever to generate code, but when we look at what happens next, the picture becomes less clear.

When you build an abstraction or automation intended for use by other people, you make a decision about what is valuable for them to know about, and what isn’t. In programming this is extraordinarily hard. Often we accidentally hide information it turns out the user needs or wants to know about. And it can change over time, because in software, context has a huge influence over how and what we build.

The world of APIs grew from the power of being able to import code written by others – we were able to focus on the unique aspects of our own problems and solutions. But choosing to use a dependency has consequences we didn’t always take seriously. It’s a trade-off I wish we were discussing more explicitly in the context of AI.

⚠️ I should note that I don’t view LLM prompts as programming abstractions but as code generation automations, for reasons I’ve been exploring in recent talks.

Progressive Disclosure?

One approach to managing the trade-off is to provide a kind of escape hatch where you can see and configure the underlying implementation if you want, or ignore it if you don’t need to know what’s going on under there.

Here’s where I get skeptical about vibe coding platforms positioning themselves as empowering. I won’t name the company, but I recently tried generating a simple static website from a service aimed at people without prior coding skills. I exported the app to VS Code. The implementation was ludicrously over-engineered, with a ton of TypeScript files and zero documentation. Not a code comment, not a README, nothing. The output was effectively obfuscated. Even as an experienced developer I would have had an extremely hard time working with this codebase.

Perhaps you’re wondering if this is an edge case, assuming these codebases don’t need to be worked on directly. I pay close attention to accounts from people using these services and that's not what I'm finding. Overwhelmingly they get you to a prototype, but in order to take your app to the next level you do need to engage with the code. Even the platforms themselves have subtly shifted their messaging to reflect this.

The interface is not the LLM

The platforms I'm referring to here are a little different from e.g. using an assistant or agent in your IDE. The underlying prompts and LLM scaffolding are partly or wholly hidden from the user. They could easily instruct the model to prefer human-readable implementations and provide ample documentation – to produce programs that are optimized for the user to learn from, and that act as a foundation to build on. That would be empowering.
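As a rough illustration of how cheap such an instruction would be: a platform could prepend learning-oriented guidance to every generation request before it reaches the model. The prompt text and helper below are hypothetical, a minimal sketch rather than any vendor's actual scaffolding:

```python
# Hypothetical scaffolding a platform could add around its code-generation LLM.
# The guidance text and function names are illustrative assumptions.

READABILITY_GUIDANCE = (
    "Prefer the simplest implementation that works: plain HTML, CSS and "
    "JavaScript over frameworks unless the user asks otherwise. Include a "
    "README explaining the project layout, and comment any non-obvious "
    "code so the user can learn from and extend the output."
)

def build_messages(user_request: str) -> list[dict]:
    """Prepend learning-oriented guidance to every generation request."""
    return [
        {"role": "system", "content": READABILITY_GUIDANCE},
        {"role": "user", "content": user_request},
    ]

messages = build_messages("Generate a simple static website for my bakery.")
print(messages[0]["role"])  # system
```

The point isn't the specific wording; it's that the hidden layer between user and model is exactly where a platform decides whether the output is optimized for learning or for lock-in.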

Here’s another awkward piece of the puzzle. In order to ask the model for such an enabling starting point, you already need to know something about how software is built. So that excludes anyone without prior development experience: the very audience these platforms claim to serve.

Let’s talk about the trade-offs

Here are some considerations I think are valuable when it comes to either designing or using an automation for code generation:

  • What details are being hidden from the user?
  • What decisions are being made about the implementation?
    • Are the decisions accidental or deliberate, what is driving them?
  • How can these decisions be changed?

Importantly:

  • What might the user need to know later that they don’t know now?

Taking a moment to answer these questions might help us make better choices about the abstractions we build, and those we choose to adopt.

What does empowerment look like?

Sometimes I imagine a platform like Glitch, but that generates apps from a prompt, while prioritizing ease of extension and subsequent learning – what the Glitch Hello apps were meticulously designed to do. Such a thing is absolutely feasible, at least technically. But the reality is that software company decisions are determined by economic dynamics. They are incentivized to prioritize engagement over enablement – in many cases what this looks like is platform lock-in.

I do see people being enabled to build software using LLMs, usually with the support of other human beings in their workplaces or communities. There is tremendous potential for opening the gateway to developer skills with this technology, but it won’t happen by accident, and we should scrutinize claims of empowerment that don’t check out when we scratch the surface.

What does accountability look like?

There’s a parallel problem even professional developers are experiencing, as AI-generated code is harder to work with, increasing the maintenance burden over time. Using LLMs to generate code is reducing our ability to build strong mental models of our codebases. Again, I don’t believe this has to be the case; we could use these automations while still preserving agency and understanding.

For working developers, the question of accountability plays a huge role here. When things go wrong, who is held responsible, and what are the consequences? Understanding what you’re committing to a codebase has implications I don’t believe we’ve fully reckoned with yet.

We can shape this

As we begin to discover the longer term consequences of using LLMs to generate code, I would love to see more honest conversations about what is truly involved in enabling people over both the short and long term. Because we can build for that if we want to.

Top comments (7)

PEACEBINFLOW

That distinction between an abstraction you can peer into versus one that's opaque by default feels like the core of it. The "progressive disclosure" idea works when there's an escape hatch, but what you described—zero comments, over-engineered TypeScript, no README—isn't hiding complexity behind a clean interface. It's generating a maze and calling it a shortcut.

What this reminds me of is how we used to talk about "leaky abstractions" as a bug, but now we're dealing with something closer to brittle abstractions. They don't leak information you need—they just fracture when you try to modify them, because there was no intentional structure underneath. The platform optimized for generation, not for evolution.

The part about economic incentives pushing toward lock-in rather than enablement is where it gets uncomfortable. A tool that generates clean, documented, maintainable code is technically possible—you said it yourself. But it's also a tool that makes it easier to leave. The friction isn't accidental; it's quietly serving a business model.

I keep thinking about your question: "What might the user need to know later that they don't know now?" That feels like a question we should be asking not just about the generated code, but about the generation tool itself. If the platform disappears or pivots, what understanding did it leave you with?

Have you seen any smaller tools or workflows—maybe outside the big platforms—that seem to be making a genuine effort at empowerment over engagement?

Sue Smith

Yeah I've heard a few people compare this to a leaky abstraction but that doesn't seem adequate for the reasons you've mentioned here!

You're absolutely right about the code generation tools – these platforms are so volatile, and the pricing shift is going to make them inaccessible to many.

On your last question I haven't seen much yet other than a few anecdotal uses of LLMs to create tutorials and the like, but I think there's a huge opportunity to build community platforms for learning.

Ofri Peretz

The framing that platforms "could easily instruct the model to prefer human-readable implementations" is where I'd push back. Even with careful prompting, LLMs produce structurally inconsistent code — slapping documentation on top doesn't actually build the mental model you need when things break. imo the answer isn't better-documented generated apps, it's smaller generation scope where you're forced to reason about each piece yourself.

toshihiro shishido

"You stop noticing the small signals" hits hard. Same on the analytics side — when reporting gets fully automated, numbers keep showing up but no one notices an input has been quietly broken for three weeks.

The hidden cost is the maintainer's eye atrophying.

Mykola Kondratiuk

running agent workflows, this bites constantly - something fails three abstraction layers deep and the whole team is guessing.
