Mamoor Ahmad

Posted on • Originally published at dev.to

Vibe Coding is Fun Until You Hit Production

Three hours. That's all it took.

I described a SaaS dashboard to Cursor. It generated the React components. I prompted it again: backend routes, database schema, auth flow. Another prompt. Deployment config. CI pipeline. Landing page.

By lunchtime, I had a working product. Live URL. Login flow. Data persistence. Dark mode. ✨

I posted on Slack: "Just shipped a new tool in one morning. Vibe coding is insane."

By dinner, I had 43 messages from users. Not the good kind.

"I can see other people's data."
"The export button returns an empty file."
"I got logged in as someone else."

That last one. 😬

Three hours to build. Three weeks to fix. And a very uncomfortable conversation with my manager about what "shipped" actually means.


🎵 What Even Is Vibe Coding?

If you've been anywhere near tech Twitter in 2026, you've heard the term. Vibe coding is the practice of building software by describing what you want to an AI and iterating through conversation rather than writing code manually.

The term was coined in early 2025 and has since become one of the most debated practices in software development.

The promise: Anyone can build software. Just describe what you want. The AI handles the rest.

The reality: Anyone can build software that looks like it works. The gap between "looks like it works" and "works in production" is where careers go to die.

I'm not anti-vibe coding. I still do it. But I learned, the hard way, that vibes have a shelf life, and that shelf life ends at git push origin main.

Here's what I've learned from shipping AI-generated code to real users.


💥 The 7 Ways Vibe Coding Breaks in Production

1. πŸ” The Security You Didn't Think About

This is the big one. The one that gets you on a call with legal.

When I vibe-coded my dashboard, the AI generated auth middleware that looked secure. JWT tokens, bcrypt passwords, rate limiting. Textbook stuff.

What it didn't do:

  • Sanitize the input on the search endpoint (hello, SQL injection)
  • Validate that users could only access their own data (hello, IDOR vulnerability)
  • Set proper CORS headers (hello, any website can call my API)

What I asked for: "Add user authentication"
What I got: A login form that works
What I needed: A security review by someone who thinks like an attacker

The AI doesn't think like an attacker. It thinks like a tutorial. It gives you the happy path, not the threat model.
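
The missing ownership check is usually just one clause in the query. Here's a minimal sketch of the kind of route the generated code skipped, assuming Express, node-postgres, and a hypothetical requireAuth middleware that puts the logged-in user on req.user; the table and column names are made up for illustration:

```javascript
// Minimal sketch, not the code from my app: Express + node-postgres,
// with a hypothetical requireAuth middleware that sets req.user.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool();

app.get('/api/invoices/:id', requireAuth, async (req, res) => {
  // Parameterized query: user input never gets concatenated into SQL.
  const { rows } = await pool.query(
    'SELECT * FROM invoices WHERE id = $1 AND owner_id = $2',
    [req.params.id, req.user.id] // the ownership check lives in the query itself
  );

  if (rows.length === 0) {
    // 404 rather than 403, so we don't confirm the record exists for other users.
    return res.status(404).json({ error: 'Not found' });
  }

  res.json(rows[0]);
});
```

The 404-over-403 choice is a small detail, but it also keeps attackers from enumerating which IDs exist.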

📌 The rule: Never ship AI-generated auth, payments, or user data handling without a human security review. Period.


2. 🗄️ The Database Schema That Worked Until It Didn't

The AI designed my database schema. It was clean. Normalized. Made sense on paper.

It also stored user sessions in the same database as user data, with no foreign key constraints, no indexes on the columns I was querying every 50ms, and a deleted_at column that nothing actually checked.

AI's schema:  ✅ Looks clean
Reality:      ❌ No indexes = full table scan on every request
Reality:      ❌ No constraints = orphaned records everywhere
Reality:      ❌ Soft delete that nothing respects = ghost data

When I hit 200 concurrent users, my database response time went from 50ms to 12 seconds. The AI never mentioned indexes. I never asked. That's the trap β€” you don't know what you don't know.
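
Once someone does know, the fix is usually small. Here's roughly what it can look like written as a knex migration; knex itself and the sessions/users table and column names are assumptions for illustration, not my real schema:

```javascript
// Illustrative knex migration; table and column names are assumptions.
exports.up = async function (knex) {
  await knex.schema.alterTable('sessions', (table) => {
    table.index(['user_id']); // the column queried on every request
    table
      .foreign('user_id')
      .references('users.id')
      .onDelete('CASCADE'); // no more orphaned session rows
  });
};

exports.down = async function (knex) {
  await knex.schema.alterTable('sessions', (table) => {
    table.dropForeign(['user_id']);
    table.dropIndex(['user_id']);
  });
};
```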

📌 The rule: If you don't understand database design, vibe-code the feature, then ask a human to review the schema before you populate it with real data.


3. 🧪 The Tests That Test Nothing

Here's a conversation I had with Cursor:

Me: "Write tests for the payment module"
AI: *writes 23 tests*
Me: "Run the tests"
AI: "All 23 tests passed βœ…"
Me: *ships it*

Two weeks later: a user was charged twice for the same subscription. How?

The AI wrote tests that verified the function calls were made. It never tested what happened when the webhook fired twice. It never tested idempotency. It never tested the thing that actually broke.

AI-generated tests optimize for coverage numbers, not for finding bugs.

They test that the code does what the code does. They don't test that the code does what the business needs.
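
The test that would have caught my double charge is boring and short. Here's a sketch, assuming Jest, supertest, a payments webhook endpoint, and a hypothetical countChargesForInvoice helper; the endpoint path and event shape are illustrative, not from my actual codebase:

```javascript
// Sketch of the missing test. Jest + supertest; the endpoint path,
// event shape, and countChargesForInvoice helper are all illustrative.
const request = require('supertest');
const app = require('../app');
const { countChargesForInvoice } = require('../db/charges'); // hypothetical helper

test('charges once even when the same webhook is delivered twice', async () => {
  const event = {
    id: 'evt_123', // same event ID both times
    type: 'invoice.payment_succeeded',
    data: { object: { id: 'in_123', amount_due: 4900 } },
  };

  await request(app).post('/webhooks/payments').send(event).expect(200);
  await request(app).post('/webhooks/payments').send(event).expect(200); // duplicate delivery

  expect(await countChargesForInvoice('in_123')).toBe(1); // still exactly one charge
});
```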

📌 The rule: Write your own test cases for critical paths. Use AI to generate the boilerplate, but you define the scenarios.


4. 📦 The Dependency Avalanche

When you vibe-code, you prompt: "Add email sending." The AI adds nodemailer.

Then: "Add HTML email templates." It adds mjml and handlebars.

Then: "Add email scheduling." It adds bull and redis.

Then: "Add email tracking." It adds open-pixel and three more packages.

By the end of a 3-hour session, your package.json has 47 new dependencies. You didn't choose any of them. You don't know what half of them do. And one of them has a known CVE that's been open for 6 months.

[Image: The Dependency Growth Curve]

📌 The rule: After every vibe coding session, run npm audit, read the dependency list, and ask: "Do I actually need all of these?"


5. 🎭 The UI That Looks Done But Isn't

AI is incredible at generating beautiful UI. Give it a prompt, get back a polished component with animations, responsive layout, and dark mode.

But:

  • The "Submit" button doesn't have a loading state β†’ users click it 5 times
  • The form doesn't validate on blur, only on submit β†’ frustration
  • The error message says "Something went wrong" β†’ zero debugging info
  • The mobile layout technically works but the touch targets are 20px β†’ rage tapping
  • The modal doesn't trap focus β†’ accessibility nightmare

Looking done and being done are different things. AI excels at the first. You have to deliver the second.
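
Most of these gaps are small once you name them. The missing loading state, for example, is a handful of lines; a minimal sketch in React, with made-up component and prop names:

```javascript
// Minimal sketch of the loading state the generated form was missing.
import { useState } from 'react';

function SubmitButton({ onSubmit }) {
  const [pending, setPending] = useState(false);

  async function handleClick() {
    if (pending) return; // swallow the four extra clicks
    setPending(true);
    try {
      await onSubmit();
    } finally {
      setPending(false);
    }
  }

  return (
    <button onClick={handleClick} disabled={pending} aria-busy={pending}>
      {pending ? 'Saving…' : 'Submit'}
    </button>
  );
}
```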

📌 The rule: After the AI generates UI, test it like a frustrated user. Click fast. Resize the window. Use keyboard only. Try to break it.


6. 🔇 The Error Handling That Doesn't Handle

// What the AI wrote:
try {
  const result = await processPayment(amount);
  return { success: true, data: result };
} catch (error) {
  return { success: false, error: error.message };
}

Looks fine, right? Now what happens when processPayment throws a TimeoutError? The user sees "TimeoutError" on their screen. Not "Payment is processing, please check back in a minute." Just... a raw error message.

What happens when the network drops mid-request? The AI doesn't retry. It doesn't queue. It doesn't tell the user what state they're in.

AI writes error handling that catches errors. It doesn't write error handling that handles errors.
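
Here's the direction a rewrite can take. This sketch assumes the same processPayment and TimeoutError from the snippet above, plus a hypothetical logger; the user-facing copy is illustrative:

```javascript
// Sketch of error handling that actually handles the error.
// Assumes processPayment, TimeoutError, and logger exist; the copy is illustrative.
async function chargeUser(amount) {
  try {
    const result = await processPayment(amount);
    return { success: true, data: result };
  } catch (error) {
    if (error instanceof TimeoutError) {
      // The charge may still settle later: tell the user what to do, not what threw.
      return {
        success: false,
        retryable: true,
        userMessage: 'Payment is still processing. Please check back in a minute.',
      };
    }

    logger.error('payment_failed', { error }); // keep the raw details server-side
    return {
      success: false,
      retryable: false,
      userMessage: "We couldn't process your payment. Nothing has been charged.",
    };
  }
}
```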

📌 The rule: For every error catch block, ask: "What does the user see? What do they do next?" If you can't answer both, rewrite it.


7. 📈 The Performance Cliff

My dashboard loaded in 200ms with 10 test users.

With 500 real users? 8 seconds.

The AI had:

  • No pagination (loading all records at once)
  • No caching (same query on every page load)
  • No lazy loading (every component hydrated on mount)
  • Three API calls that could've been one

These aren't bugs. The code works correctly. It just works slowly. And the AI never mentioned performance because you never asked about performance.

That's the core problem with vibe coding: it optimizes for the request you made, not the requirements you forgot.
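
The fixes themselves aren't exotic. A sketch of server-side pagination plus a tiny in-process cache, reusing the hypothetical Express and node-postgres setup from the security sketch earlier; the endpoint, table, and 30-second TTL are illustrative:

```javascript
// Sketch only: server-side pagination plus a naive in-memory cache.
// Reuses the hypothetical app, pool, and requireAuth from the earlier sketch.
const cache = new Map();
const PAGE_SIZE = 20;
const TTL_MS = 30_000;

app.get('/api/events', requireAuth, async (req, res) => {
  const page = Math.max(1, parseInt(req.query.page, 10) || 1);

  const key = `events:${req.user.id}:${page}`;
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < TTL_MS) {
    return res.json(hit.data); // skip the database entirely on a warm page
  }

  const { rows } = await pool.query(
    'SELECT * FROM events WHERE owner_id = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3',
    [req.user.id, PAGE_SIZE, (page - 1) * PAGE_SIZE]
  );

  cache.set(key, { at: Date.now(), data: rows });
  res.json(rows);
});
```

An in-memory Map is only fine for a single process; with multiple instances you'd reach for Redis. The point is the shape of the fix, not the tool.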

📌 The rule: Before shipping, test with realistic data volumes. 10 test records tell you nothing about 10,000 real ones.


✅ When Vibe Coding Actually Works

I'm not here to trash vibe coding. It's genuinely powerful when used correctly. Here's where it shines:

πŸ† The Sweet Spots

Use Case Why It Works
Prototyping Speed > quality. Get the idea on screen fast.
Personal tools You're the only user. Bugs are learning opportunities.
Boilerplate Config files, CRUD routes, migration scripts.
Learning "Explain this code" is the best prompt in vibe coding.
UI exploration "Try 5 different layouts for this dashboard."

⚠️ The Danger Zones

| Use Case | Why It's Risky |
| --- | --- |
| Auth & security | AI doesn't think like an attacker. |
| Payments | Real money, real consequences. |
| User data | Privacy laws don't care that "the AI wrote it." |
| Production systems | Reliability requires understanding, not just output. |
| Team codebases | Others have to maintain what you vibe-coded. |

🧭 The 7 Rules I Now Follow

After shipping broken code and spending weeks fixing it, here's my personal vibe coding framework:

Rule 1: 🎯 Prompt with Purpose, Not Hope

❌ "Build me a user dashboard"
βœ… "Build a user dashboard with: 
   - Server-side pagination (20 items/page)
   - Input sanitization on all form fields
   - Error boundaries with user-friendly messages
   - Loading states for every async operation
   - WCAG 2.1 AA compliance"

Specificity is quality control. Vague prompts produce vague code.

Rule 2: 🔍 Read Every Line Before Shipping

I know. The whole point of vibe coding is not reading code. But if it's going to production, you need to understand what it does. At least at the architecture level.

You don't need to understand every regex. But you do need to know:

  • Where does user input go?
  • How is auth handled?
  • What happens when things fail?
  • What data leaves the server?

Rule 3: 🧪 Write Your Own Critical Tests

Use AI to generate unit tests for utility functions. But for the paths that matter (login, payments, data access), write the test scenarios yourself.

Ask yourself: "What's the worst thing that could happen if this breaks?" Then write a test for exactly that.

Rule 4: 🔒 Security Review Before Deploy

Run npm audit. Check for hardcoded secrets. Verify CORS. Test authentication with two different accounts. Try to access data that isn't yours.

If you don't know how to do these things, learn them before you vibe-code a production app.
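
The two-account check is also easy to automate so it runs before every deploy. A sketch, again assuming Jest and supertest, plus a hypothetical loginAs test helper that signs in and returns a token:

```javascript
// Sketch of the "two accounts" check as a test.
// loginAs is a hypothetical helper; paths and fields are illustrative.
const request = require('supertest');
const app = require('../app');
const { loginAs } = require('./helpers/auth');

test("user A cannot read user B's records", async () => {
  const tokenA = await loginAs('alice@example.com');
  const tokenB = await loginAs('bob@example.com');

  // Bob creates a record...
  const created = await request(app)
    .post('/api/expenses')
    .set('Authorization', `Bearer ${tokenB}`)
    .send({ amount: 42, note: 'team lunch' })
    .expect(201);

  // ...and Alice must not be able to read it.
  await request(app)
    .get(`/api/expenses/${created.body.id}`)
    .set('Authorization', `Bearer ${tokenA}`)
    .expect(404);
});
```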

Rule 5: 📊 Test with Real Data Volumes

Populate your database with 10,000 records. See what happens. If the page takes 5 seconds to load, you have a problem. Better to find it now than when users are complaining.
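
A throwaway seed script is enough. Here's a sketch written as a knex seed file, assuming a users table; adjust the names to your schema:

```javascript
// Throwaway seed script (knex seed file); table and columns are illustrative.
exports.seed = async function (knex) {
  const rows = Array.from({ length: 10_000 }, (_, i) => ({
    email: `load-test-${i}@example.com`,
    name: `Load Test User ${i}`,
    created_at: new Date(Date.now() - (i % 90) * 86_400_000), // spread over ~90 days
  }));

  // Insert in chunks so a single giant INSERT doesn't blow up.
  await knex.batchInsert('users', rows, 500);
};
```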

Rule 6: πŸ—οΈ Vibe the Feature, Engineer the Foundation

Use AI to generate the feature code. But the architecture β€” the database schema, the API design, the auth flow β€” design that yourself. Or have someone review it.

Features are expendable. Foundations are not.

Rule 7: 📝 Document What You Don't Understand

If the AI generated something and you don't fully understand it, write a comment. Not for others, but for future you. Because when it breaks at 2am, you won't remember what that 40-line function does.


📊 The Vibe Coding Maturity Model

I've started thinking about vibe coding on a spectrum:

Level 0: "What's vibe coding?"
Level 1: "I use AI for autocomplete"  
Level 2: "I describe features and AI builds them"
Level 3: "I review and understand everything AI generates"
Level 4: "I architect the system, AI handles implementation"
Level 5: "I use AI as a tool, not a crutch"

Most people are at Level 2. The goal is Level 4-5.

The danger isn't vibe coding itself. It's getting stuck at Level 2 and thinking you're at Level 5.

[Image: The Vibe Coding Maturity Spectrum]


💬 Let's Be Honest

I still vibe code every day. It's an incredible tool for the right problems. I've built personal tools, prototypes, and internal dashboards in hours that would've taken days.

But I've also learned, through broken auth, angry users, and 2am debugging sessions, that shipping to real users requires more than vibes.

It requires judgment. It requires understanding. It requires the humility to say: "The AI wrote this, but I need to verify it works correctly."

The developers who'll thrive aren't the ones who reject vibe coding. They're the ones who know when to vibe and when to engineer.


Where are you on the Vibe Coding Maturity Model? And what's the worst thing you've shipped with AI-generated code?

I'll go first: I shipped a payment integration that double-charged users. The AI's test suite passed with flying colors. 🫠

Your turn. 👇


If this saved you from a production incident, share it with a fellow vibe coder. We've all been there.


Top comments (2)

Mamoor Ahmad •

Okay, I'll admit my worst vibe coding moment. The bar is low. 😅

I shipped a "simple" internal tool that let team members submit expense reports. Vibe-coded the whole thing in an afternoon. Looked great. Worked perfectly in my testing.

Three days later, the finance team pings me: "Why can I see everyone else's expenses?"

Turns out the AI-generated API endpoint had zero authorization checks. With /api/expenses?user_id=*, literally anyone could pass any user ID and see their receipts, amounts, everything.

The fix took 20 minutes. The trust recovery took months.

What I learned:

  • AI is phenomenal at generating the happy path
  • It's terrible at thinking about who should access what
  • "It works" and "It's secure" are completely different conversations

Since then I have a personal checklist I run before shipping ANY AI-generated code:

  1. ✅ Can user A see user B's data?
  2. ✅ What happens if I send garbage input?
  3. ✅ Are there any hardcoded secrets?
  4. ✅ Does it handle the case where the network drops?

Four simple questions. Saved me from at least 3 more incidents.

What's your worst vibe coding horror story? I know I'm not alone here. 🫠

Konark Sharma •

Awesome article, Mamoor. Thanks for the mention, it means a lot. 😊