Cursor and ChatGPT already changed how I work
A few years ago, coding felt like a solo sport: you, the keyboard, and Stack Overflow when things went sideways. Now there’s an assistant in the editor, another tab for chat, sometimes a whole builder that spits out UI before you’ve finished your coffee.
I’m not here to argue whether that’s good or bad—it’s mostly here, and I’ve adapted. The part that surprised me is not “the model can code.” It’s that most of my pain now comes from communication, not syntax. Cursor and ChatGPT didn’t just speed up typing for me; they shifted the job toward directing: I say what I want, something gets drafted, I react, we iterate.
So if you ask what actually matters in vibe coding now, I’d say it’s boring and unsexy: the prompt. Not the biggest context window, not the trendiest stack—a clear ask, written like you mean it.
The trade nobody warned me about
When I started, I treated prompts like quick texts. “Make a dashboard.” “Fix this.” “Add auth.” I thought I was being efficient.
I wasn’t. I was renting my future self a pile of cleanup. The model would hand back something plausible—almost always wrong in small, expensive ways: the layout was almost right, the state handling was almost right, the happy path worked and everything else melted. I’d burn an hour tracing behavior that would never have gone sideways if I’d spent one extra minute sharpening the request.
My rule of thumb now: one more minute on the prompt often buys back something like an hour of debugging and rework. Not always, but often enough that I’m embarrassed I ever skipped it.
What follows isn’t theory. It’s what I actually do when I want the output to land closer to finished the first time.
What I try to get right every time
Be specific.
I name what I’m building in plain pieces: features, rough layout, vibe of the UI, and what “done” looks like. “Dashboard” tells the model nothing useful. “Three KPIs on top, one line chart for last 30 days, table below with sortable columns, calm spacing like Notion, no neon gradients”—that’s a picture someone can draw.
Give it structure.
I split the prompt into chunks with headers or bullets so neither of us has to hunt. Usually something like: goal, must-haves, nice-to-haves, hard limits (tech, performance, what not to do), and how I’ll know it’s correct. It reads a bit like a tiny ticket. That’s the point.
Point at examples.
If I’ve got a screenshot, a product I like, or even a single phrase—“dense like Linear,” “quiet like Stripe’s docs”—I put it in. I’m not asking for a clone; I’m giving a target so the model stops free-styling typography and spacing I’ll hate.
Work in steps.
I stopped asking for the whole castle in one message. First the skeleton and the main user path. Then CRUD or filters. Then polish. Big all-in-one prompts feel productive for sixty seconds and expensive for the rest of the afternoon.
Refine like you mean it.
First output is a draft. I read it like code review: what’s wrong, what’s missing, what’s structurally off. If the same mistake shows up twice, I fix the prompt, not just the line—the spec was probably thin.
Prompts that wasted my time vs. prompts that didn’t
These are shortened, but the pattern is real.
Landing page
Bad (I’ve sent worse):
> Make a nice landing page for my SaaS.
That’s not a brief; it’s a mood. No audience, no sections, no tone—“nice” could mean anything, so the model guesses.
Better (still pasteable):
> Single-page site for a B2B analytics tool, audience = small ecommerce teams.
> Sections: hero with headline, subhead, primary CTA, screenshot placeholder; logo strip; three feature blocks; one quote for social proof; pricing with two tiers; FAQ accordion; simple footer.
> Look: lots of whitespace, neutral background, one accent, something like Inter, thin borders—no glassmorphism.
> Must be responsive. Proper heading order, visible focus states on buttons. Use placeholder copy, don’t invent fake customer names.
> I’m happy when mobile scroll has zero horizontal junk.
Dashboard
Bad:
> Create a dashboard with charts.
Better:
> Admin shell: sidebar (Dashboard / Users / Settings), top bar with search + user menu.
> Dashboard body: four KPI cards—Revenue, Orders, Conversion, Refunds—then a 30-day line chart, then a bar chart for top products, then a sortable orders table with id, customer, total, status, date.
> Data from mocked JSON in a `data/` folder. Show loading skeletons; empty state when the table has no rows.
> No paid chart APIs. Small components, semantic HTML.
> This step is only layout + fake data + route placeholders—no real auth yet.
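To make the "mocked JSON + loading skeletons + empty state" part of that prompt concrete, here's a rough sketch of what I'd expect back. Everything here is illustrative: the file layout, field names, and function names are my guesses, not something the prompt guarantees.

```javascript
// Hypothetical mocked data, the kind the prompt asks to live in data/.
const mockOrders = [
  { id: "ord_1", customer: "Acme Co", total: 129.5, status: "paid", date: "2024-05-01" },
  { id: "ord_2", customer: "Globex", total: 89.0, status: "refunded", date: "2024-05-03" },
];

// The prompt's "loading skeletons" and "empty state" become explicit
// branches instead of afterthoughts bolted on later.
function tableState({ loading, rows }) {
  if (loading) return "skeleton";
  if (!rows || rows.length === 0) return "empty";
  return "rows";
}

// "Sortable columns": a tiny comparator keyed by column name.
function sortOrders(rows, key, dir = "asc") {
  const sign = dir === "asc" ? 1 : -1;
  return [...rows].sort((a, b) =>
    a[key] > b[key] ? sign : a[key] < b[key] ? -sign : 0
  );
}
```

The point isn't this exact code; it's that because the prompt named loading, empty, and sorted states up front, all three show up as first-class branches instead of being discovered missing later.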
Adding a feature mid-flight
Bad:
> Add user profiles.
Better:
> Profile screen from the user menu. Fields: name (required), avatar URL optional, bio max 240 chars, timezone dropdown.
> Inline validation; Save disabled until valid; toast on success. Optimistic updates optional—don’t block on that.
> Explicitly out of scope: password, email, billing.
> Match whatever spacing and components the app already uses; if there’s no form primitive, add a single `ProfileForm` instead of copy-pasting markup everywhere.
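For what it's worth, here's the shape of validation I'd expect that prompt to produce. Only the rules come from the prompt (name required, bio capped at 240, avatar optional); the function names, error copy, and URL check are my own assumptions.

```javascript
// Hypothetical sketch of the validation rules named in the prompt above.
function validateProfile({ name = "", avatarUrl = "", bio = "" }) {
  const errors = {};
  if (name.trim() === "") errors.name = "Name is required";
  if (bio.length > 240) errors.bio = "Bio must be 240 characters or fewer";
  // Avatar URL is optional, but if present it should at least look like a URL.
  if (avatarUrl && !/^https?:\/\//.test(avatarUrl)) {
    errors.avatarUrl = "Avatar must be an http(s) URL";
  }
  return errors;
}

// "Save disabled until valid" falls out of the same check for free.
const canSave = (fields) => Object.keys(validateProfile(fields)).length === 0;
```

Notice how "Save disabled until valid" and "inline validation" collapse into one function once the prompt spells out the rules; a vague "add user profiles" would have left all of this to the model's imagination.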
A scrap of template I keep in Notes
When I’m blanking, I paste this and fill what I know:
Goal:
Who it’s for:
Main happy path (a short paragraph):

Must ship:
Nice if there’s time:
Visual refs (apps, screenshots, adjectives):
Tech / limits / “do not”:
Empty & error behavior:
Definition of done:
A blank field means "the model will improvise here," on purpose. I only leave one blank when I'm fine with that.
Last thing
I used to think long prompts were overhead. They’re not—they’re where the work moved. The model is fast either way; I slow down when my ask is fuzzy.
So yeah: spend the extra minute. Your future you, the one who isn’t diff-hunting at midnight, will notice.
Top comments (2)
The "build in steps" principle is the one that quietly does the most work. Asking for the whole castle in one prompt feels efficient in the moment—one message, one response, everything at once. But what you actually get is a castle where the foundation, walls, and roof were all generated simultaneously, with no opportunity to course-correct between them. The roof assumes a foundation shape that doesn't exist. The walls don't meet at the corners. By the time you notice, you're three features deep and the structural problems are baked into everything that followed.
What I find myself thinking about is how this maps onto the way senior developers already work with junior developers. You don't give a junior a one-sentence spec and expect a production-ready feature. You break it into pieces. Start with the skeleton. Review. Add the data flow. Review. Add error handling. Review. Each step is small enough that mistakes are caught before they compound. The AI isn't a junior developer, but it has the same property: it will confidently build on top of its own mistakes if you let it. The step-by-step approach isn't about the model's capability. It's about limiting the blast radius of the model's errors.
The template at the end is interesting because it treats blank lines not as omissions but as deliberate delegations. "Model will improvise here" is a decision, not an accident. That's a more sophisticated relationship with the tool than "specify everything" or "specify nothing." It's saying: I care about these parts enough to constrain them, and I trust the model to fill these other parts without breaking what I've specified. The skill is knowing which parts are which. Have you found that there are categories of things you consistently leave blank because the model handles them well, and other categories you've learned to always specify because the model's defaults are reliably wrong?
I’m currently applying this approach of breaking problems into smaller pieces, but the real challenge is managing those pieces well: keeping the workflow efficient while still getting high-quality results.