What Is Vibe Coding? A Plain-English Guide for IT Leaders
Vibe coding — AI-generated software from natural language descriptions — is a $4.7B market with 63% non-developer adoption. It creates ungoverned code, shadow IT at unprecedented scale, and credential exposure that security assessments are designed to catch.
Your employees are building software. Not the development team — the operations manager, the marketing lead, the finance analyst. They are describing what they want in plain English, and AI is writing the code. This is vibe coding, and it is the single biggest shift in how software gets created since the smartphone put a computer in everyone’s pocket.
The term comes from Andrej Karpathy — former Tesla AI director, OpenAI co-founder, and one of the most respected voices in AI research. In February 2025, he described a new way of programming where “you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” He was not being casual. He was describing what he saw coming: a world where the person building the software never reads the code, accepts changes without reviewing diffs, and copies error messages to the AI to fix issues until they disappear.
Fourteen months later, we are living in that world. Vibe coding is a $4.7 billion market growing at 38% annually. Gartner forecasts 60% of all new code will be AI-generated by the end of 2026. Collins English Dictionary named “vibe coding” its Word of the Year for 2025. Merriam-Webster added it to its trending terms within weeks of Karpathy’s post.
And Karpathy himself has already moved on. In January 2026, he declared vibe coding "passé" and introduced a new term: agentic engineering — a structured paradigm where "you are not writing the code directly 99% of the time. You are orchestrating agents who do, and acting as oversight." The progression tells you everything about how fast this is moving. The casual version already has a more serious successor, and both are happening simultaneously across every industry.
Here is what that means if you lead IT, own security, or run a business where employees use technology — which is all of them.
Where Vibe Coding Is Today
The current tools fall into three tiers, and it matters which ones your people are using because the risk profile is different for each:
Tier 1: Inline code assistants — GitHub Copilot, Cursor, Windsurf. These sit inside a developer’s editor and autocomplete code based on context. The developer is still writing code, still reviewing output, still running tests. Adoption is near-universal: 92% of US developers use AI coding tools daily. This tier is the least risky because a professional developer is in the loop.
Tier 2: Agentic coding environments — Claude Code, Replit Agent, Devin. These are what Karpathy now calls “agentic engineering.” The developer describes a feature or a bug in natural language. The AI writes the implementation across multiple files, runs tests, commits code, and iterates. Claude Code operates directly in the terminal, reads the full codebase, and makes coordinated changes across dozens of files. The developer reviews and approves, but may not read every line. Y Combinator’s Winter 2026 batch saw 40% of startups operating with only one or two engineers — AI agents handled the rest.
Tier 3: No-code AI builders — Bolt, Lovable, v0, Replit. These target non-developers entirely. A marketing manager describes a dashboard. A finance analyst describes a reporting tool. The platform generates a working application. No terminal. No editor. No code review. This tier is where the risk concentrates — and where adoption is accelerating fastest. 63% of vibe coding users are non-developers.
87% of Fortune 500 companies have adopted at least one vibe coding platform. The code your employees are generating is not hypothetical — it is in production.
Where Vibe Coding Is Going: Just-in-Time Applications
Here is the part most people have not thought through yet.
Today, businesses buy SaaS subscriptions for every function — project management, CRM, invoicing, reporting, scheduling. Each tool is a general-purpose product designed for thousands of companies, configured by the buyer, and paid for monthly whether it is used or not.
Vibe coding makes a different model possible: just-in-time applications. Instead of buying a $50/month reporting tool and spending a week configuring it, an employee describes what they need and AI generates a purpose-built application in minutes. The application does exactly one thing, does it exactly the way the business needs it done, and costs nearly nothing to create.
This is already happening. An internal data product that would have taken six weeks to build through traditional development was built in 20 minutes using AI agents. TechCrunch called it the “SaaSpocalypse” — the unbundling of SaaS by AI-generated alternatives that are cheaper, faster, and perfectly fitted to the use case.
The end state is disposable software. Teams generate dozens of micro-apps per month, each solving a single task, deployed on serverless infrastructure, and retired when the task is done. Built for the moment, not for the decade.
Karpathy sees this as part of a larger pattern. In his view, LLMs are “the next major computing paradigm, similar to computers of the 1970s and 80s.” He predicts we will see equivalents of personal computing (everyone has their own AI), microcontrollers (a “cognitive core” — a small, always-on AI model that lives on every device), and the internet (a network of AI agents communicating with each other). Software is not just getting easier to build. The entire concept of what software is — who builds it, how long it lasts, whether it is a product or a disposable artifact — is being rewritten.
For any organization managing an IT environment, the implication is direct: the software inventory inside your organization is about to expand by an order of magnitude. Not through procurement. Through generation. Applications will appear that no one purchased, no one approved, and no one assessed.
The Security Problem Nobody Has Solved
I want to be direct about this: vibe coding is creating ungoverned software faster than any governance framework can assess it. And the security data is getting worse, not better.
Georgia Tech’s Vibe Security Radar tracked 35 new CVE entries directly caused by AI-generated code in March 2026 alone — up from six in January. Nearly half of all AI-generated code contains security flaws, with no improvement across larger or newer models. Developer trust in AI-generated code has dropped from 77% to 60% as the community encounters more production failures.
The most visible example: Moltbook, a startup whose founder publicly stated he “did not write a single line of code.” Within three days of launch, security researchers found the application had exposed its entire production database, including 1.5 million API authentication tokens. That is not an edge case. That is the predictable outcome when no one reviews the code.
Vibe coding creates three categories of risk:
Ungoverned code in production. When a non-developer generates an application and deploys it, the code has not been reviewed by anyone who understands security. This is not a tooling problem that better AI will fix. It is a human oversight problem. Karpathy’s own evolution from “forget that the code exists” to “agentic engineering with oversight and scrutiny” tells you he reached the same conclusion.
Shadow IT at unprecedented scale. Vibe coding lowers the barrier to creating applications from “hire a developer” to “describe what you want.” The result is shadow IT that is harder to detect because the applications are custom-built, not downloaded from an app store. They do not appear in standard SaaS discovery tools. They connect to production data through OAuth tokens and API keys that nobody reviewed.
Credential and data exposure. Non-technical users generating code do not think about secrets management. API keys, database credentials, and access tokens end up hardcoded in source files, pasted into AI prompts, or stored in public repositories.
What Organizations Should Be Doing
The instinct is to block vibe coding tools entirely. That will fail. The tools are browser-based, they operate through standard HTTPS, and employees will use personal devices to route around restrictions. Prohibition did not work for cloud adoption, BYOD, or SaaS sprawl. It will not work here.
The smarter approach has three parts: visibility, structure, and secure development practices.
1. Get Visibility Into What Already Exists
You cannot govern what you cannot see. Most organizations already have vibe-coded applications running in their environment — they just do not know it yet.
Deploy automated discovery tooling. Manual audits will not keep pace with the speed at which AI-generated applications appear. Implement tools that continuously monitor for:

- New OAuth app consents in Entra ID or Google Workspace
- API keys issued through cloud admin portals
- New service principal registrations
- LLM API calls leaving your network (OpenAI, Anthropic, Google AI endpoints)
- Unauthorized cloud service connections that appeared without change tickets
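One of those signals, LLM API calls leaving the network, is straightforward to surface from egress logs. The sketch below flags outbound requests to well-known LLM provider endpoints in a proxy log export. The CSV log format (`timestamp,user,host` columns) is a hypothetical example; adapt the parsing to whatever schema your proxy or DNS logging actually emits.

```python
import csv
import io

# Public API hostnames for the major LLM providers named in the text.
LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_llm_traffic(log_csv: str) -> list[dict]:
    """Return log rows whose destination host is a known LLM API endpoint."""
    reader = csv.DictReader(io.StringIO(log_csv))
    return [row for row in reader if row["host"] in LLM_API_HOSTS]

# Hypothetical proxy log export for illustration.
sample_log = """timestamp,user,host
2026-03-01T09:12:00Z,jdoe,api.openai.com
2026-03-01T09:13:00Z,jdoe,example.com
2026-03-01T09:14:00Z,asmith,api.anthropic.com
"""

hits = find_llm_traffic(sample_log)
for row in hits:
    print(row["user"], "->", row["host"])
```

A user who shows up here but has no approved AI tooling is a starting point for a conversation, not an accusation — personal ChatGPT use and unsanctioned app generation look identical at this layer.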
The goal is a living inventory of every application — purchased or generated — that touches your data. Quarterly manual reviews are no longer sufficient when employees can generate and deploy a new integration in an afternoon.
2. Create Structure for Approved Vibe Coding
Vibe coding has legitimate business value. An employee who can generate a purpose-built reporting tool in 20 minutes instead of waiting six weeks for IT to prioritize it is a real productivity gain. The answer is not to eliminate that — it is to put guardrails around it.
Establish an AI-acceptable-use policy that defines:

- Which vibe coding tools are approved (and which are not)
- What data can be used as input to AI tools (no customer PII, no credentials, no proprietary source code, no regulated data)
- What environments generated code can run in (sandbox first, production only after review)
- Who is authorized to deploy AI-generated applications
- What documentation is required (business purpose, what data it accesses, who owns it)
Create a lightweight approval pathway. If the business case is clear — an employee needs a tool, vibe coding can produce it faster and cheaper than procurement — give them a way to do it within governance. A simple intake form (what does it do, what data does it touch, who owns it) and a 24-hour review cycle is better than a six-month procurement process that employees will bypass entirely.
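That intake form can be as simple as a three-field record with one routing rule. The sketch below is a minimal data model for it; the field names, sensitivity categories, and review rule are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass

# Hypothetical data-sensitivity classes that trigger human review.
SENSITIVE_DATA = {"customer_pii", "financial", "credentials", "regulated"}

@dataclass
class IntakeRequest:
    purpose: str          # what does it do
    data_touched: set     # what data does it touch
    owner: str            # who owns it

    def requires_security_review(self) -> bool:
        """Route to a human reviewer if any sensitive data class is touched."""
        return bool(self.data_touched & SENSITIVE_DATA)

req = IntakeRequest(
    purpose="Weekly ad-spend report",
    data_touched={"marketing_metrics"},
    owner="jdoe@example.com",
)
print(req.requires_security_review())  # False -- automated gate only
```

A request touching `customer_pii` would return `True` and go to the 24-hour human review; everything else can clear automated checks. The point is that the rule is explicit and auditable, not buried in someone's judgment.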
The organizations that benefit most from vibe coding will be the ones that treat it like any other development capability: enabled, but governed.
3. Adapt Secure Development Practices for AI-Generated Code
Traditional secure software development practices still apply to vibe-coded applications — but they need to be adjusted for a world where the “developer” may not understand the code they produced.
NIST SP 800-218, the Secure Software Development Framework (SSDF), provides the baseline. NIST has also published SP 800-218A, which augments the SSDF for generative AI and dual-use foundation models. Together, these frameworks give organizations a practical structure for securing vibe-coded output.
Several SSDF practices translate directly to vibe coding governance. These are the ones that matter most:
Review all AI-generated code before deployment (SSDF PW.7). This is the single most important control. PW.7 requires that code be reviewed against secure coding standards before deployment. When a non-developer accepts AI-generated output, they cannot perform this review themselves. For high-risk applications (anything touching authentication, customer data, or financial systems), a security-qualified reviewer should sign off. For lower-risk internal tools, automated static analysis (SAST) is the minimum viable gate.
Test AI-generated code for vulnerabilities (SSDF PW.8). Non-developers will not write security tests. Automated testing — static analysis, dynamic analysis, fuzz testing — must be built into the deployment pipeline and enforced as a gate. AI-generated code needs higher test coverage than human-written code, not lower, because there is less certainty the AI understood the requirements correctly.
Scan dependencies and generate SBOMs (SSDF PW.4). AI coding tools frequently suggest outdated or vulnerable libraries, non-existent packages (dependency confusion), or poorly maintained dependencies. Software Composition Analysis (SCA) scanning should run automatically on any AI-generated project. Generate a Software Bill of Materials (SBOM) for every deployed application — including the disposable ones.
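Once every application ships with an SBOM, checking it becomes a simple automated step. The sketch below reads a CycloneDX-format SBOM (the JSON shown is a minimal subset of the real spec) and flags components on a denylist; the denylist entry is a made-up package for illustration, and a real pipeline would query a vulnerability database instead.

```python
import json

# Hypothetical known-vulnerable (name, version) pairs; a real check
# would query a vulnerability database such as OSV or NVD.
KNOWN_BAD = {("leftpad-ng", "0.0.1")}

def flag_components(sbom_json: str) -> list[str]:
    """Return 'name@version' strings for SBOM components on the denylist."""
    sbom = json.loads(sbom_json)
    return [
        f'{c["name"]}@{c["version"]}'
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_BAD
    ]

# Minimal CycloneDX-shaped document for illustration.
sample_sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "requests", "version": "2.31.0"},
        {"name": "leftpad-ng", "version": "0.0.1"},
    ],
})

print(flag_components(sample_sbom))  # ['leftpad-ng@0.0.1']
```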
Enforce secure coding practices automatically (SSDF PW.5). AI tools routinely produce insecure patterns: hardcoded credentials, overly permissive IAM roles, missing input validation, unauthenticated API endpoints. Custom SAST rule sets targeting these AI-specific vulnerability patterns should run automatically — the “developer” will not catch them manually.
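A custom rule set can be as simple as named patterns run over every generated file before deployment. The sketch below implements three toy "rules" for the anti-patterns named above; real SAST tools are far more thorough (parsing, data-flow analysis), and these regexes are illustrative only.

```python
import re

# Toy rules targeting common AI-generated anti-patterns.
RULES = {
    "hardcoded credential": re.compile(
        r'(password|api_key|secret)\s*=\s*["\'][^"\']+["\']', re.I
    ),
    "wildcard IAM action": re.compile(r'"Action"\s*:\s*"\*"'),
    "TLS verification disabled": re.compile(r"verify\s*=\s*False"),
}

def scan(source: str) -> list[str]:
    """Return the names of every rule that matches the given source text."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

snippet = 'api_key = "sk-12345"\nresp = requests.get(url, verify=False)'
print(scan(snippet))
```

Running `scan` on the snippet flags both the hardcoded credential and the disabled TLS verification — exactly the kind of finding a non-developer would never spot in code they did not read.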
Define who can build what (SSDF PO.2). When non-technical employees become “developers” through AI tools, the organization must define what they are authorized to build, provide basic security training on AI-generated code risks, and establish clear escalation procedures for applications that touch sensitive data. Not everyone who can prompt an AI should be deploying applications that handle customer PII.
Scan for secrets and data exposure. As noted above, non-technical users rarely practice secrets management, so credentials end up hardcoded in source files or pasted into AI prompts. Automated secret scanning — in repositories, in CI/CD pipelines, and in the AI tools themselves — catches these before they reach production.
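Many credential formats are recognizable by shape alone, which is what makes automated secret scanning effective. The sketch below matches a few well-known token formats; production scanners use much larger rule sets plus entropy analysis to catch tokens with no fixed prefix.

```python
import re

# Well-known credential formats recognizable by pattern alone.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of every secret pattern found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# Illustrative leaked key (not a real credential).
leaked = 'aws_key = "AKIAABCDEFGHIJKLMNOP"'
print(scan_for_secrets(leaked))  # ['AWS access key ID']
```

Wired into a pre-commit hook or CI gate, a check like this blocks the Moltbook failure mode — credentials sitting in code that no human ever read — before deployment rather than after disclosure.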
Treat AI-generated applications as software, not spreadsheets. The biggest governance gap is organizational. Most companies have a software development lifecycle for their engineering team and nothing for the applications their business users generate. Vibe-coded applications need ownership, maintenance schedules, vulnerability monitoring (SSDF RV.1), and decommission plans — especially the disposable, just-in-time applications that are easy to create and easy to forget about.
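The decommission plan only works if every generated app is registered with an owner and a planned retirement date. The sketch below shows that idea as a minimal inventory check; the field names and registry shape are illustrative assumptions, not a standard.

```python
from datetime import date

# Hypothetical inventory of generated apps, each with an owner
# and a planned retirement date set at intake.
registry = [
    {"app": "q1-margin-dashboard", "owner": "finance",   "retire_by": date(2026, 4, 1)},
    {"app": "campaign-tracker",    "owner": "marketing", "retire_by": date(2026, 9, 30)},
]

def overdue(registry: list[dict], today: date) -> list[str]:
    """Apps past their planned retirement date -- decommission candidates."""
    return [entry["app"] for entry in registry if entry["retire_by"] < today]

print(overdue(registry, today=date(2026, 6, 1)))  # ['q1-margin-dashboard']
```

A scheduled job that emails each owner their overdue apps turns "easy to forget" into "flagged monthly" — which is the whole difference between disposable software and abandoned software.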
What Leadership Should Be Asking
If your security program does not include AI-generated code, it has a gap that is growing every week.
Security policies that define “approved software” as a list of purchased products need to expand to cover generated software. Quarterly reviews that only examine procured SaaS will miss the custom applications employees are generating between reviews. The organization that gets ahead of this — “here is what our people are building with AI, here is the risk, here is the governance framework” — avoids the incident response scramble later.
Karpathy compared LLMs to the computing paradigm of the 1970s and 80s. If he is right — and the adoption data suggests he is — then vibe coding is the equivalent of the first personal computers showing up on office desks without IT’s permission. The organizations that governed that transition early thrived. The ones that ignored it spent years cleaning up the mess.
We are at the same inflection point. The question is whether your organization’s AI-generated code will be governed or ungoverned — and whether leadership addresses it proactively or reactively.
Genesis helps organizations assess and govern their cloud environments — including the ungoverned applications that vibe coding creates. If your security program has not accounted for AI-generated software, we can help you find out where you stand.
Contact us to start the conversation.
Frequently Asked Questions
- What is vibe coding?
- Vibe coding is a software development approach where the person describes what they want in natural language and AI generates the code. The term was coined by Andrej Karpathy in February 2025. It ranges from AI-assisted coding inside a developer's editor (GitHub Copilot, Cursor) to fully autonomous application generation from a text description (Replit Agent, Bolt). Karpathy has since introduced 'agentic engineering' as the more structured evolution — AI agents handle implementation while humans provide oversight. Both approaches are in widespread use.
- Why should IT leaders care about vibe coding?
- Because your employees are already doing it. 63% of vibe coding users are non-developers. They are generating applications, automations, and integrations that connect to production data — without IT approval, security review, or change management. These applications create OAuth consents, API connections, and service principal permissions in your cloud environment. If you are not looking for them, you are missing a growing attack surface.
- What are just-in-time applications?
- Just-in-time applications are purpose-built software generated on demand by AI — designed for a specific task, used until the task is complete, then retired. Instead of buying a SaaS subscription and configuring it, an employee describes what they need and AI generates it in minutes. Andrej Karpathy frames this as part of a broader shift: LLMs as 'the next major computing paradigm,' with disposable, purpose-built software replacing general-purpose SaaS tools for an increasing share of business functions.
- What is the difference between vibe coding and agentic engineering?
- Vibe coding, as Karpathy originally described it, means accepting AI-generated code without closely reviewing it — 'forget that the code even exists.' Agentic engineering, the term Karpathy introduced in January 2026, adds structure: AI agents handle implementation, but the human acts as architect and reviewer with oversight and scrutiny. The distinction matters because agentic engineering produces more reliable output, while pure vibe coding by non-developers is where the security risk concentrates.