Vibe Coding Security Risks
AI coding tools ship fast but introduce predictable security vulnerabilities. This is the complete list of risks we see across thousands of scans of apps built with Cursor, Lovable, Bolt, Replit, and v0.
Why vibe-coded apps are vulnerable
AI models optimize for working code, not secure code. They reproduce patterns from training data without understanding your threat model. The result: apps that work perfectly in demos but expose user data, payment flows, and admin access in production.
Code Generation Risks
Hallucinated Security Functions
Critical: AI invents non-existent security libraries or methods that look legitimate but provide zero protection. Your app appears secure but has no actual defenses.
Outdated Vulnerability Patterns
Critical: AI models trained on older code reproduce deprecated patterns with known CVEs. You inherit vulnerabilities from code written years ago.
Copy-Paste Propagation
High: A single insecure pattern generated early in a session gets repeated across your entire codebase as the AI references its own output.
Incomplete Error Handling
High: AI generates try-catch blocks with empty handlers or generic catches that silently swallow critical security errors.
Authentication & Authorization Risks
Client-Side Auth Checks Only
Critical: AI tools often generate auth guards in React/Vue but skip server-side validation entirely. Anyone with dev tools can bypass them.
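The fix is to enforce authentication on the server, independent of whatever the UI renders. A minimal sketch of an Express-style middleware, where the session store and names like requireAuth and verifySession are hypothetical placeholders rather than any specific framework's API:

```typescript
// Minimal request/response shapes standing in for Express types.
type Req = { headers: Record<string, string | undefined>; userId?: string };
type Res = { statusCode?: number; body?: unknown };

// In a real app this would be a session store lookup or signed-JWT
// verification, never a check the browser can skip.
const sessions = new Map<string, string>([["tok_abc", "user_1"]]);

function requireAuth(req: Req, res: Res, next: () => void): void {
  const token = (req.headers["authorization"] ?? "").replace("Bearer ", "");
  const userId = sessions.get(token);
  if (!userId) {
    // Reject on the server regardless of client-side state.
    res.statusCode = 401;
    res.body = { error: "unauthenticated" };
    return;
  }
  req.userId = userId;
  next();
}
```

The client-side guard can stay for UX, but only this server-side check actually protects the route.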
Hardcoded API Keys
Critical: AI frequently embeds Supabase anon keys, Firebase configs, and API secrets directly in frontend code, visible to anyone.
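Secrets belong in the server's environment, loaded at startup so a missing value fails the boot instead of shipping a literal key in the bundle. A sketch, where requireEnv and the variable name are illustrative:

```typescript
// Read a required secret from the environment, failing fast if absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required env var: ${name}`);
  }
  return value;
}

// Server-only usage (hypothetical name); this line never appears in
// frontend code, so the key never reaches the browser bundle:
// const SUPABASE_SERVICE_KEY = requireEnv("SUPABASE_SERVICE_KEY");
```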
Missing Row-Level Security
Critical: Supabase and Firebase apps built with AI rarely have proper RLS policies. Any authenticated user can read or modify any data.
Weak Session Management
High: AI generates predictable session tokens, skips expiration logic, or stores tokens insecurely in localStorage.
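A safer baseline: tokens drawn from a CSPRNG with an explicit lifetime. A sketch using Node's crypto module; the Session shape and TTL value are illustrative choices, not a standard:

```typescript
import { randomBytes } from "node:crypto";

interface Session {
  token: string;
  userId: string;
  expiresAt: number; // epoch milliseconds
}

const SESSION_TTL_MS = 30 * 60 * 1000; // 30 minutes, an example policy

function createSession(userId: string, now = Date.now()): Session {
  return {
    // 32 random bytes = 256 bits: unguessable, unlike sequential IDs.
    token: randomBytes(32).toString("hex"),
    userId,
    expiresAt: now + SESSION_TTL_MS,
  };
}

function isExpired(session: Session, now = Date.now()): boolean {
  return now >= session.expiresAt;
}
```

Store the token in an HttpOnly cookie rather than localStorage so injected scripts cannot read it.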
Data Exposure Risks
Over-Permissive CORS
High: AI defaults to cors({ origin: "*" }), which lets any website make cross-origin requests to your API from the browser.
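The fix is an explicit origin allowlist. The cors package accepts an origin callback, so the decision can live in a small testable function; the origins below are hypothetical:

```typescript
// Explicit allowlist instead of origin: "*".
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://staging.example.com",
]);

function isAllowedOrigin(origin: string | undefined): boolean {
  // No Origin header means a same-origin or non-browser request.
  if (!origin) return true;
  return ALLOWED_ORIGINS.has(origin);
}

// Sketch of wiring into the cors middleware:
// app.use(cors({ origin: (origin, cb) => cb(null, isAllowedOrigin(origin)) }));
```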
Verbose Error Messages
High: Stack traces, database schemas, and internal paths leaked to users through AI-generated error handlers.
Excessive API Responses
Medium: AI returns entire database records, including sensitive fields like password hashes, emails, and internal IDs.
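The defense is to serialize an explicit allowlist of fields rather than sending the raw record. A sketch with a hypothetical User shape:

```typescript
// Full database record, including fields that must never leave the server.
interface User {
  id: number;
  email: string;
  passwordHash: string;
  internalNotes: string;
}

// Build a fresh object containing only public fields, instead of
// deleting keys from the DB record (which is easy to forget per field).
function toPublicUser(user: User): Pick<User, "id" | "email"> {
  return { id: user.id, email: user.email };
}
```

New columns added to the table stay private by default, because nothing reaches the response unless it is listed here.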
Unprotected Admin Endpoints
Critical: AI creates admin routes without authentication middleware, assuming the frontend will handle access control.
Dependency & Supply Chain Risks
Phantom Dependencies
High: AI suggests packages that do not exist on npm/PyPI. Attackers register these names and publish malicious code.
Outdated Package Versions
Medium: AI recommends specific versions from its training data that now have known security vulnerabilities.
Unnecessary Dependencies
Medium: AI imports heavy libraries for simple operations, expanding your attack surface with code you do not need.
Missing Lock Files
Medium: AI-generated projects often skip lock files, allowing dependency versions to drift silently.
Infrastructure & Deployment Risks
Secrets in Source Code
Critical: Database URLs, JWT secrets, and payment keys committed to git repositories because AI put them in config files.
Missing HTTPS Enforcement
High: AI configures HTTP servers without TLS redirects, leaving data transmitted in plaintext.
Debug Mode in Production
High: AI leaves development flags enabled: debug logging, hot-reload endpoints, source maps exposed to users.
No Rate Limiting
High: AI-generated APIs have zero throttling. Attackers can brute-force logins, scrape data, or run up your cloud bill.
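Even a simple fixed-window limiter raises the cost of brute force dramatically. A minimal in-memory sketch; production systems usually back this with a shared store such as Redis, and the limit values here are illustrative:

```typescript
// Fixed-window rate limiter: at most `limit` calls per `windowMs` per key
// (typically an IP address or user ID).
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window.
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}

// Example policy: 5 login attempts per minute per IP.
const loginLimiter = new RateLimiter(5, 60_000);
```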
Logic & Business Risks
Payment Bypass
Critical: AI implements payment flows that can be skipped by modifying client-side state or calling API endpoints directly.
Race Conditions
High: AI generates concurrent database operations without transactions or locks, enabling double-spending and data corruption.
Insecure Direct Object References
High: AI uses sequential IDs in URLs without ownership checks. Users can access other users' data by changing the ID.
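The fix is an ownership check on every lookup, on the server. A sketch with hypothetical Document data; note it returns the same error for "missing" and "not yours" so attackers cannot probe which IDs exist:

```typescript
interface Document {
  id: number;
  ownerId: string;
  body: string;
}

// Stand-in for a database table.
const documents: Document[] = [
  { id: 1, ownerId: "user_1", body: "alice's notes" },
  { id: 2, ownerId: "user_2", body: "bob's notes" },
];

function getDocument(requesterId: string, docId: number): Document {
  const doc = documents.find((d) => d.id === docId);
  // Identical error for "does not exist" and "not owned by requester".
  if (!doc || doc.ownerId !== requesterId) {
    throw new Error("not found");
  }
  return doc;
}
```

In a real query this check is usually pushed into the WHERE clause (or an RLS policy) so it cannot be forgotten per route.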
Missing Input Validation
High: AI trusts all user input. No length limits, type checks, or sanitization on form fields and API parameters.
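Every external payload should pass through a validator before it touches business logic. A hand-rolled sketch for one hypothetical signup endpoint; in practice a schema library such as zod is common, but the principle is the same:

```typescript
interface SignupInput {
  email: string;
  displayName: string;
}

// Validate an untrusted request body, enforcing types and length limits.
function validateSignup(input: unknown): SignupInput {
  if (typeof input !== "object" || input === null) {
    throw new Error("body must be an object");
  }
  const { email, displayName } = input as Record<string, unknown>;
  if (typeof email !== "string" || email.length > 254 || !email.includes("@")) {
    throw new Error("invalid email");
  }
  if (
    typeof displayName !== "string" ||
    displayName.length < 1 ||
    displayName.length > 64
  ) {
    throw new Error("invalid displayName");
  }
  return { email, displayName };
}
```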
Which tools are affected?
Every AI coding tool produces these risks. The severity depends on how much of the stack the tool controls:
Full-stack builders
Highest risk. Generate entire apps including auth, database, and deployment.
IDE assistants
Medium risk. Generate code within existing projects but may miss security context.
Code completion
Lower risk. Suggest snippets but developer controls architecture decisions.
Find these risks in your app
VibeEval scans for all 24 risk categories automatically. Paste your URL and get a security report in under 5 minutes.
Scan your app for free