Developers are shifting from writing every line to guiding A.I., and facing fresh challenges in review and oversight.

An emerging trend known as “vibe coding” is changing the way software gets built. Rather than painstakingly writing every line of code themselves, developers guide an A.I. assistant, such as Copilot or ChatGPT, with plain-language instructions, and the A.I. generates the code. The barrier to entry drops dramatically: someone with only a rough idea and minimal technical background can spin up a working prototype.

The capital markets have taken notice. In the past year, several A.I. tooling startups raised nine-figure rounds and hit billion-dollar valuations. Swedish startup Lovable secured $200 million in funding in July—just eight months after its launch—pushing its value close to $2 billion. Cursor’s maker, Anysphere, is approaching a $10 billion valuation. Analysts project that by 2031, the A.I. programming market could be worth $24 billion. Given the speed of adoption, it might get there even sooner.

The pitch is simple: if prompts can replace boilerplate, then making software becomes cheaper, faster and more accessible. Whether the market ultimately reaches tens of billions matters less than the fact that teams are already changing how they work. For many, this is a breakthrough moment, with writing software becoming as straightforward and routine as sending a text message. The most compelling promise is democratization: anyone with an idea, regardless of technical expertise, can bring it to life.

Where the wheels come off

For all its promise, vibe coding carries risks that could, if not managed, slow future innovation. Consider safety. In 2024, A.I. generated more than 256 billion lines of code. This year, that number is likely to double. Such velocity makes thorough code review difficult. Snippets that slip through without careful oversight can contain serious vulnerabilities, from outdated encryption defaults to overly permissive CORS rules. In industries like healthcare or finance, where data is highly sensitive, the consequences could be profound.
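To make the risk concrete, here is a minimal sketch of the kind of automated review check that could catch the two defaults mentioned above before they ship. The function name, config shape and findings are illustrative assumptions, not a real tool or standard:

```python
# Illustrative review check: flag two risky defaults that generated
# snippets commonly introduce. All names here are hypothetical.

RISKY_HASHES = {"md5", "sha1"}  # long outdated for password storage


def audit_config(config: dict) -> list[str]:
    """Return human-readable findings for a generated service config."""
    findings = []

    # Overly permissive CORS: a wildcard origin combined with
    # credentialed requests lets any website act on a user's behalf.
    cors = config.get("cors", {})
    if cors.get("allow_origin") == "*" and cors.get("allow_credentials"):
        findings.append("CORS allows any origin with credentials")

    # Outdated encryption/hashing default.
    algo = config.get("password_hash")
    if algo in RISKY_HASHES:
        findings.append(f"weak password hash: {algo}")

    return findings


# A config resembling what a fast, unreviewed prototype might ship with.
issues = audit_config({
    "cors": {"allow_origin": "*", "allow_credentials": True},
    "password_hash": "md5",
})
print(issues)
```

Both findings fire on this example; in a regulated setting, either one alone could be a reportable flaw.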

Scalability is another challenge. A.I. can make working prototypes, but scaling them for real-world use is another story entirely. Without careful design choices around state management, retries, back pressure or monitoring, these systems can become brittle and difficult to maintain. These are architectural decisions that autocomplete models cannot make on their own.
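One of the simplest of those resilience decisions, retrying a flaky call with exponential backoff, can be sketched in a few lines. This is a generic pattern, not any particular tool's implementation, and the flaky call below is simulated:

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Retry a transiently failing call with exponential backoff.

    The jitter (random multiplier) spreads out retries so many
    clients failing at once don't hammer the server in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)


# Usage: a simulated dependency that fails twice, then succeeds.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"


result = call_with_backoff(flaky)
```

A prototype that calls `fn()` directly works in a demo; under real network conditions, the difference between these two behaviors is the difference between a blip and an outage.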

And then there is the issue of hallucination. Anyone who has used A.I. coding tools has come across examples of nonexistent libraries or data sources being cited, or configuration flags inconsistently renamed within the same file. While minor errors in small projects may not be significant, these lapses can erode continuity and undermine trust when scaled across larger, mission-critical systems.

The productivity trade-off

None of these concerns should be mistaken for a rejection of vibe coding. There is no denying that A.I.-powered tools can meaningfully boost productivity. But they also change what the programmer’s role entails: from line-by-line authoring to guiding, shaping and reviewing what A.I. produces to ensure it can function in the real world.

The future of software development is unlikely to be framed as a binary choice between humans and machines. The most resilient organizations will combine rapid prototyping through A.I. with deliberate practices—including security audits, testing and architectural design—that ensure the code survives beyond the demo stage.

Currently, only a small fraction of the global population writes software. If A.I. tools continue to lower barriers, that number could increase dramatically. A larger pool of creators is an encouraging prospect, but it also expands the surface area for mistakes, raising the stakes for accountability and oversight.

What comes next

It’s clear that vibe coding should be the beginning of development, not the end. To get there, new infrastructure is needed: advanced auditing tools, security scanners and testing frameworks designed just for A.I.-generated code. In many ways, this emerging industry of safeguards and support systems will prove just as important as the code-generation tools themselves.

The conversation must now expand. It’s no longer enough to celebrate what A.I. can do; the focus should also be on how to use these tools responsibly. For developers, that means practicing caution and review. For non-technical users, it means working alongside engineers who can provide judgment and discipline. The promise of vibe coding is real: faster software, lower barriers, broader participation. But without careful design and accountability, that promise risks collapsing under its own speed.

Shipping at the Speed of Prompt: What Vibe Coding Changes and Breaks

