Field Note · April 25, 2026 · 9 min read

Vibe Coding Will Cost You More Than It Saves

Your CEO probably thinks AI-assisted development and vibe coding are the same thing. They're not, and that confusion will cost you money.

There's a term making the rounds in leadership circles right now that makes every serious builder wince: vibe coding.

It sounds fun. It sounds fast. It sounds like the future. And for a lot of executives watching demos and reading headlines, it has become shorthand for "anyone can build software now, so why are we paying for this?"

Here's the problem: the conversation has created a false binary. On one side, vibe coding — fast, accessible, anyone can do it. On the other, traditional software development — slow, expensive, gatekept by engineers. Most leaders think those are the only two options.

They're not. And the companies that figure out the third path are going to run circles around the ones stuck in either camp.

What Vibe Coding Actually Is

The term was coined by Andrej Karpathy in early 2025. The idea is simple: describe what you want in plain English, let an AI generate the code, and ship it without deeply understanding what was produced. The vibe is the product. If it works, it works.

And sometimes it does work — spectacularly well for prototypes, internal tools, and proof-of-concept demos. The speed is real. The accessibility is real. Non-developers can now produce functional software in hours instead of months. That's genuinely transformative.

But "functional" and "production-ready" are separated by a canyon, and the data on what's in that canyon is sobering.

The Numbers Are Ugly

A December 2025 analysis of 470 GitHub pull requests found that AI-generated code was 2.74 times more likely to contain security vulnerabilities than human-written code. Nearly half — 45% — of AI-generated code contains flaws, according to Veracode. Common problems include hardcoded API keys, exposed credentials, disabled security policies, and APIs fabricated from outdated training data.
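To make the hardcoded-credentials failure concrete, here's a minimal Python sketch. The variable name `PAYMENTS_API_KEY` is purely illustrative, not from any real system: the vibe-coded pattern bakes the secret into source control, while the hardened version reads it from the environment and fails fast if it's missing.

```python
import os

# Anti-pattern auditors keep finding in vibe-coded apps:
# API_KEY = "sk-live-abc123..."   # secret committed straight into source

def get_api_key() -> str:
    """Read the secret from the environment and fail fast if absent."""
    key = os.environ.get("PAYMENTS_API_KEY")  # illustrative name
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
    return key
```

The fail-fast check matters as much as the environment variable itself: a missing secret should stop the deploy, not silently fall back to a default.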

One auditing firm reports finding 8 to 14 security issues in a typical vibe-coded application. These aren't theoretical concerns. A social networking platform called Moltbook, built entirely through vibe coding, was found to have a misconfigured database that exposed 1.5 million authentication tokens and 35,000 email addresses. The founder publicly stated he "didn't write one line of code."

Apple recently removed AI-powered app builders from the App Store over concerns about unreviewed code flooding the ecosystem. Fortune ran a headline that captures the moment perfectly: in the age of vibe coding, trust is the real bottleneck.

But Traditional Development Has Its Own Problem

Here's where the conversation gets more nuanced than most people make it.

While vibe coding fails from a lack of judgment, a lot of traditional development teams are failing from the opposite direction — too much process applied in the wrong places.

Most engineering organizations built their workflows for a world where code was expensive to produce. Three-person pull request reviews. Sprint planning ceremonies. Backlog grooming sessions. Stakeholder sign-offs before a feature moves from "ready for dev" to "in progress." These processes made sense when the bottleneck was writing code. They rate-limited the scarce resource.

AI flipped the bottleneck. Code is now cheap to produce. The scarce resource is review, integration, and architectural judgment. But many teams haven't adapted their processes to match. They're running AI-generated output through workflows designed for human-speed development, and the result is counterintuitive: AI is actually making them slower.

An engineer who previously wrote 200 lines of code per day now generates 2,000. But every line still needs to go through the same review chain, the same CI pipeline, the same approval cycle. The backlog doesn't shrink — it explodes. The team ships at the same pace or worse, but now they're drowning in code they didn't write and don't fully understand.

This is the legacy development trap. Right judgment, wrong process. The expertise is real, but the operating model was designed for a different era.

The Third Path: Directed Development

There's a third category that doesn't get enough attention, and it's the one that actually works.

Directed development is what happens when someone with deep operational knowledge and systems thinking uses AI as the execution layer — not as a replacement for judgment, but as a tool that operates under their judgment.

The directed developer isn't a senior engineer with twenty years of writing code. They might be an operations leader. A technical founder. A systems-minded operator who understands business processes, data architecture, and what "production-ready" actually means.

The key distinction is this: the vibe coder accepts whatever the AI produces. The directed developer tells the AI what to build, understands the architecture behind it, and verifies the output against standards they can defend. They know what row-level security is and why it matters. They understand authentication flows. They can read a git diff and catch when something changed that shouldn't have. They know the difference between a working prototype and a hardened production system.
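One way that verification shows up in practice is automated checks over generated code before it's accepted. As a rough illustration only (the patterns below are deliberately naive and no substitute for a real secret scanner), a directed developer might gate AI output on something like:

```python
import re

# Naive patterns for secrets that should never appear in source.
SECRET_PATTERNS = [
    # key = "longliteral" style assignments
    re.compile(r"""(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    # literals that look like bearer/API keys
    re.compile(r"sk-[A-Za-z0-9]{20,}"),
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

The point isn't this particular check. It's that the directed developer has a checklist and tooling between "the AI produced it" and "it ships."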

This isn't a natural talent. It's a learned discipline, and it doesn't happen overnight.

The 2,000-Hour Reality

Here's the part of the story that most AI content skips over: becoming a directed developer is a serious investment.

It would be easy for someone with twenty years of operational leadership to assume that experience automatically translates into building production systems with AI. It doesn't. An experienced operator could absolutely fall into vibe coding without knowing it — because they don't know what they don't know.

Understanding database architecture and row-level security. Learning how to harden authentication and the different ways to secure a system. Knowing how to use Git correctly — not just committing and pushing, but branching strategies, diff reviews, and rollback patterns. Understanding the relationship between front end and back end. Building a proper development and testing environment instead of coding directly against production. Learning to build and understand the code rather than accepting black-box output from shortcut tools.

That kind of knowledge takes a concerted, sustained effort. Not a weekend course. Not a YouTube playlist. Thousands of hours of deliberate practice, building real systems, breaking things, learning why they broke, and developing the instinct to ask "what happens when this fails?" before it fails.

The operators who put in that work become something the market desperately needs: people who combine business fluency with technical execution capability. They can have a strategic conversation with a CEO in the morning and architect a production database schema in the afternoon. That combination is rare, and it's becoming the most valuable skill set in the mid-market.

The Recognition Problem

This creates a real challenge for leadership teams: how do you tell the difference?

Every company right now has early AI adopters. Some of them are producing genuinely impressive work — building internal tools, automating workflows, creating systems that save real time and money. Others are producing impressive-looking demos that wouldn't survive a week in production.

From the outside, these look identical. The flashy PowerPoint pops just as quickly whether the person behind it spent 2,000 hours learning systems architecture or 20 minutes in a prompt window. The internal dashboard looks the same whether it has row-level security and proper authentication or whether it's wide open to anyone with the URL.

You can't evaluate AI-assisted work by looking at the surface. You have to ask the harder questions. What's the data model behind this? How does it handle authentication? What happens when two users submit conflicting data at the same time? Is there an audit trail? Can you roll back a bad deployment? What's the disaster recovery plan?
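Several of those questions have small, concrete starting points. An audit trail, for instance, can begin as an append-only JSON-lines log. The sketch below is a hedged illustration of the idea, not a full governance system (a production setup would add tamper-evidence and centralized storage):

```python
import json
import datetime

def append_audit_entry(log_path: str, actor: str, action: str, detail: dict) -> None:
    """Append one timestamped record to a JSON-lines audit log."""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "detail": detail,
    }
    # Append-only: past entries are never rewritten.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

A vibe coder typically can't point to anything like this; a directed developer can show you where every record came from and who touched it.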

The people who can answer those questions confidently are your directed developers. The people who look uncomfortable or say "the AI handled that" are your vibe coders. Both built something that works today. Only one built something that will work six months from now.

The Spectrum

So the real landscape isn't a binary. It's a spectrum with failure modes on both ends.

On one end: vibe coding. No judgment, no process. Fast, fragile, and increasingly dangerous as the stakes rise. The risk is obvious — security vulnerabilities, black-box systems nobody can maintain, and the growing body of evidence that nearly half of AI-generated code ships with flaws.

On the other end: legacy development. The right judgment applied through the wrong process. Teams with genuine expertise who are drowning in review cycles, sprint ceremonies, and approval chains designed for a world where code was expensive to produce. AI made them faster at generating code and slower at everything else.

In the middle: directed development. The right judgment, applied through a process designed for AI-speed execution. Operators and builders who understand architecture, security, and systems thinking — and who use AI as a force multiplier for that expertise rather than a substitute for it.

The companies that win aren't going to be the ones with the biggest engineering teams or the fastest vibe coders. They're going to be the ones that invest in building directed development capability — either internally or through partners who've already done the work.

Where This Leaves Mid-Market Companies

If you're running a mid-market company, you probably have some combination of all three categories in your organization right now. You have people vibe coding internal tools that are running in production without anyone reviewing what's under the hood. You might have a dev team or outside firm applying traditional processes that are getting slower, not faster. And if you're lucky, you have one or two people who've put in the hours to become genuine directed developers.

The question isn't whether to use AI to build. That ship has sailed. The question is whether the people using AI in your organization have the judgment to do it safely and the process discipline to do it at speed.

Building the initial application is roughly 20% of the work. Hardening it — security, performance, integration, monitoring, governance — is the other 80%. That ratio doesn't change just because AI wrote the first draft faster.

Judgment over syntax. Process over speed. Every time.


Iron Pine builds production AI systems for mid-market companies — with the architectural discipline and operational judgment that separates working software from working software that's actually safe to run. If you've got AI-built systems in production and you're not sure what's under the hood, we should talk.

Iron Pine helps mid-market companies integrate AI into how they actually operate — grounded in your data, embedded in your workflows, adopted by your people, and operated with production discipline.
