Security Research

    Vibe Coding Security Risks

    AI coding tools ship fast but introduce predictable security vulnerabilities. This is the complete list of risks we see across thousands of scans of apps built with Cursor, Lovable, Bolt, Replit, and v0.

    Why vibe-coded apps are vulnerable

    AI models optimize for working code, not secure code. They reproduce patterns from training data without understanding your threat model. The result: apps that work perfectly in demos but expose user data, payment flows, and admin access in production.

    Code Generation Risks

    Hallucinated Security Functions

    Critical

    AI invents non-existent security libraries or methods that look legitimate but provide zero protection. Your app appears secure but has no actual defenses.

    Outdated Vulnerability Patterns

    Critical

    AI models trained on older code reproduce deprecated patterns with known CVEs. You inherit vulnerabilities from code written years ago.

    Copy-Paste Propagation

    High

    A single insecure pattern generated early in a session gets repeated across your entire codebase as the AI references its own output.

    Incomplete Error Handling

    High

    AI generates try-catch blocks with empty handlers or generic catches that swallow critical security errors silently.
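    As a sketch (function names are illustrative, not from any real app), the difference between a swallowed error and one that fails closed:

```typescript
// Pattern AI tools often emit: the failure is silently discarded, so a
// forged token and a transient network error become indistinguishable.
function isVerifiedUnsafe(check: () => void): boolean {
  try {
    check();
    return true;
  } catch {
    return true; // swallowed: verification "succeeds" even on error
  }
}

// Safer: treat any verification error as "not authenticated" and log it.
function isVerifiedSafe(check: () => void, log: (e: unknown) => void): boolean {
  try {
    check();
    return true;
  } catch (err) {
    log(err);     // surface the failure for alerting
    return false; // fail closed
  }
}
```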

    Authentication & Authorization Risks

    Client-Side Auth Checks Only

    Critical

    AI tools often generate auth guards in React/Vue but skip server-side validation entirely. Anyone with dev tools can bypass them.
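    A minimal sketch, assuming a token-to-session map stands in for your real session store or JWT verification. The point: this check runs on the server for every request, regardless of what the React/Vue guard rendered.

```typescript
type Session = { userId: string; role: string };

// Server-side auth check; the client-side guard only hides UI.
function requireAuth(
  token: string | undefined,
  sessions: Map<string, Session>,
): Session {
  const session = token ? sessions.get(token) : undefined;
  if (!session) throw new Error("401: not authenticated");
  return session; // handlers receive a verified identity, not a client claim
}
```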

    Hardcoded API Keys

    Critical

    AI frequently embeds Supabase anon keys, Firebase configs, and API secrets directly in frontend code visible to anyone.
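    A sketch of the fix: load secrets from the environment and fail fast at startup if one is missing, instead of inlining them in source that ships to the browser. STRIPE_SECRET_KEY is an illustrative variable name.

```typescript
// Fail-fast environment lookup; secrets never appear in committed code.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const stripeKey = requireEnv("STRIPE_SECRET_KEY"); // server-side only
```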

    Missing Row-Level Security

    Critical

    Supabase and Firebase apps built with AI rarely have proper RLS policies. Any authenticated user can read/modify any data.

    Weak Session Management

    High

    AI generates predictable session tokens, skips expiration logic, or stores tokens insecurely in localStorage.
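    A sketch of the alternative: unpredictable 256-bit tokens with server-tracked expiry. The TTL is illustrative; deliver the token in an httpOnly cookie rather than localStorage so injected scripts cannot read it.

```typescript
import { randomBytes } from "node:crypto";

const SESSION_TTL_MS = 60 * 60 * 1000; // 1 hour, illustrative

function createSession(now: number = Date.now()) {
  return {
    token: randomBytes(32).toString("hex"), // CSPRNG output, not Math.random()
    expiresAt: now + SESSION_TTL_MS,        // expiry enforced server-side
  };
}

function isSessionLive(
  session: { expiresAt: number },
  now: number = Date.now(),
): boolean {
  return now < session.expiresAt;
}
```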

    Data Exposure Risks

    Over-Permissive CORS

    High

    AI defaults to cors({ origin: "*" }), exposing your API to every website — and when it reflects arbitrary origins with credentials enabled, any site can make requests that ride your users' sessions.
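    A sketch of the fix: an explicit allowlist instead of a wildcard or reflected origin. The origins are placeholders; with the cors middleware this becomes the origin option, but the core logic is just a set lookup.

```typescript
// Placeholder origins; replace with your real frontends.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://staging.example.com",
]);

// Returns the value for Access-Control-Allow-Origin, or null to send none.
function corsOriginFor(requestOrigin: string | undefined): string | null {
  return requestOrigin && ALLOWED_ORIGINS.has(requestOrigin)
    ? requestOrigin
    : null;
}
```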

    Verbose Error Messages

    High

    Stack traces, database schemas, and internal paths leaked to users through AI-generated error handlers.
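    A sketch of the safe pattern: log the full error server-side under an opaque id, and return only a generic message plus that id so support can correlate reports with logs.

```typescript
import { randomUUID } from "node:crypto";

function toClientError(err: unknown, log: (line: string) => void) {
  const id = randomUUID();
  const detail = err instanceof Error ? err.stack ?? err.message : String(err);
  log(`[${id}] ${detail}`);                     // stack traces stay in your logs
  return { error: "Internal server error", id }; // nothing internal crosses the API
}
```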

    Excessive API Responses

    Medium

    AI returns entire database records including sensitive fields like password hashes, emails, and internal IDs.
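    A sketch of the fix: serialize through an explicit allowlist so newly added columns never leak by default. Field names are illustrative.

```typescript
type UserRow = {
  id: string;
  name: string;
  email: string;
  passwordHash: string;
};

function toPublicUser(row: UserRow): { id: string; name: string } {
  // Pick fields explicitly; never `return row` or spread the whole record.
  return { id: row.id, name: row.name };
}
```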

    Unprotected Admin Endpoints

    Critical

    AI creates admin routes without authentication middleware, assuming the frontend will handle access control.
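    A sketch of the missing piece: an explicit server-side role gate on every admin route, assuming a session object resolved from the request. Hiding the admin link in the UI is not access control.

```typescript
type Session = { userId: string; role: "user" | "admin" };

function requireAdmin(session: Session | null): Session {
  if (!session) throw new Error("401: not authenticated");
  if (session.role !== "admin") throw new Error("403: forbidden");
  return session;
}
```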

    Dependency & Supply Chain Risks

    Phantom Dependencies

    High

    AI suggests packages that do not exist on npm/PyPI. Attackers register these names and publish malicious code.

    Outdated Package Versions

    Medium

    AI recommends specific versions from its training data that now have known security vulnerabilities.

    Unnecessary Dependencies

    Medium

    AI imports heavy libraries for simple operations, expanding your attack surface with code you do not need.

    Missing Lock Files

    Medium

    AI-generated projects often ship without lock files, so dependency versions drift silently between installs.

    Infrastructure & Deployment Risks

    Secrets in Source Code

    Critical

    Database URLs, JWT secrets, and payment keys committed to git repositories because AI put them in config files.

    Missing HTTPS Enforcement

    High

    AI configures HTTP servers without TLS redirects, leaving data transmitted in plaintext.
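    A sketch of the redirect logic, assuming a reverse proxy sets the x-forwarded-proto header (common on managed hosts). Returns null when the request is already HTTPS.

```typescript
function httpsRedirectTarget(
  forwardedProto: string | undefined,
  host: string,
  url: string,
): string | null {
  if (forwardedProto === "https") return null; // already secure, no redirect
  return `https://${host}${url}`;              // 301 the client here
}
```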

    Debug Mode in Production

    High

    AI leaves development flags enabled: debug logging, hot reload endpoints, source maps exposed to users.

    No Rate Limiting

    High

    AI-generated APIs have zero throttling. Attackers can brute-force login, scrape data, or run up your cloud bill.
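    A sketch of a fixed-window limiter keyed by IP or user id. The in-memory state is illustrative; production deployments typically back this with a shared store (e.g. Redis) so limits survive restarts and apply across instances.

```typescript
class RateLimiter {
  private windows = new Map<string, { count: number; start: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      this.windows.set(key, { count: 1, start: now }); // open a new window
      return true;
    }
    w.count += 1;
    return w.count <= this.limit; // reject once the window is exhausted
  }
}
```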

    Logic & Business Risks

    Payment Bypass

    Critical

    AI implements payment flows that can be skipped by modifying client-side state or calling API endpoints directly.
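    A sketch of the safe shape: fulfillment gated on a server-to-server check with the payment provider, not on client state. verifyPayment is a stand-in for, e.g., retrieving a checkout session from the provider's API.

```typescript
function fulfillOrder(
  orderId: string,
  verifyPayment: (orderId: string) => { paid: boolean },
  ship: (orderId: string) => void,
): boolean {
  const status = verifyPayment(orderId); // never read "paid" from the request body
  if (!status.paid) return false;
  ship(orderId);
  return true;
}
```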

    Race Conditions

    High

    AI generates concurrent database operations without transactions or locks, enabling double-spending and data corruption.
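    As a sketch, the check and the write must be one conditional operation — the in-memory analogue of a conditional UPDATE inside a transaction (UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?). Read-then-write with an await in between is where double-spends hide.

```typescript
function conditionalDebit(
  balances: Map<string, number>,
  account: string,
  amount: number,
): boolean {
  const current = balances.get(account);
  if (current === undefined || current < amount) return false; // "0 rows affected"
  balances.set(account, current - amount); // check and write, no gap between them
  return true;
}
```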

    Insecure Direct Object References

    High

    AI uses sequential IDs in URLs without ownership checks. Users can access other users' data by changing the ID.
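    A sketch of the fix: an ownership check before returning the record, with the same 404 for "missing" and "not yours" so ids cannot be probed. Field names are illustrative; random UUIDs beat sequential integers as public identifiers.

```typescript
type Doc = { id: string; ownerId: string; body: string };

function getDocForUser(
  docs: Map<string, Doc>,
  docId: string,
  userId: string,
): Doc {
  const doc = docs.get(docId);
  // One response for both cases: existence itself is not leaked.
  if (!doc || doc.ownerId !== userId) throw new Error("404: not found");
  return doc;
}
```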

    Missing Input Validation

    High

    AI trusts all user input. No length limits, type checks, or sanitization on form fields and API parameters.
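    A sketch of explicit shape, type, and length checks at the API boundary. A schema library (e.g. zod) expresses this declaratively; the manual version below shows what the checks amount to. Limits are illustrative.

```typescript
function parseSignup(input: unknown): { email: string; name: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("400: expected a JSON object");
  }
  const { email, name } = input as Record<string, unknown>;
  if (typeof email !== "string" || email.length > 254 || !email.includes("@")) {
    throw new Error("400: invalid email");
  }
  if (typeof name !== "string" || name.length < 1 || name.length > 100) {
    throw new Error("400: invalid name");
  }
  return { email, name }; // only the validated fields, correctly typed
}
```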

    Which tools are affected?

    Every AI coding tool produces these risks. The severity depends on how much of the stack the tool controls:

    Full-stack builders

    Highest risk. Generate entire apps including auth, database, and deployment.

    Lovable, Bolt, Replit, v0, Base44

    IDE assistants

    Medium risk. Generate code within existing projects but may miss security context.

    Cursor, Windsurf, Copilot, Claude Code

    Code completion

    Lower risk. Suggest snippets but developer controls architecture decisions.

    Tabnine, Cody, Devin

    Related Resources

    Find these risks in your app

    VibeEval scans for all 24 risk categories automatically. Paste your URL and get a security report in under 5 minutes.

    Scan your app for free