
    OWASP Top 10 for AI-Generated Code

    AI code generators produce OWASP vulnerabilities at scale. Copilot, Cursor, Lovable, and Bolt all generate code that looks correct but fails basic security checks. This guide maps the most relevant OWASP Top 10 categories to specific patterns found in AI-generated code, with concrete examples and fixes.

    Why OWASP Matters for AI-Generated Code

    The OWASP Top 10 is the standard classification of web application security risks. AI code generators are trained on vast codebases that include both secure and insecure patterns. When generating code, they optimize for functionality -- not security. The result is code that works perfectly in development but ships with well-known vulnerabilities.

    Research from Stanford and NYU found that developers using AI assistants produce significantly less secure code than those writing manually. The problem is not that AI writes uniquely bad code -- it writes the same insecure patterns that human developers have written for decades, but faster and at greater scale. OWASP provides the framework to systematically identify and fix these issues.

    A01: Broken Access Control in AI Code

    Broken access control is the number one OWASP risk, and it is the most common vulnerability in AI-generated applications. AI tools generate routes, API endpoints, and database queries without proper authorization checks. Specific patterns include:

    • Missing auth middleware -- Express routes or Next.js API handlers with no session verification. Anyone with the URL can access protected data.
    • No RLS policies -- Supabase tables created without Row Level Security. The anon_key grants full read/write access to all rows.
    • Open admin panels -- Admin routes protected only by frontend conditional rendering, not server-side role checks. Navigating directly to /admin bypasses the "protection."
    • IDOR vulnerabilities -- API endpoints that accept a user ID parameter without verifying the authenticated user owns that resource.

    The fix is straightforward but requires manual review: every endpoint must verify the user's identity and authorization before returning data. In Supabase apps, this means RLS on every table. In API-based apps, this means auth middleware on every route.
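    The per-request check can be sketched as a small framework-agnostic guard. The `Session`, `Doc`, and `canAccess` names here are illustrative, not a real library API -- in an Express or Next.js app this logic would live in middleware that runs before any data is returned:

```typescript
// Minimal IDOR/access-control guard (sketch; type and function names are
// illustrative, not from a real framework).
type Session = { userId: string; role: "user" | "admin" };
type Doc = { id: string; ownerId: string };

// Returns true only when the caller is authenticated AND either owns the
// resource or holds the admin role -- the server-side check that
// frontend-only "protection" omits.
function canAccess(session: Session | null, doc: Doc): boolean {
  if (!session) return false;            // unauthenticated: reject
  if (session.role === "admin") return true;
  return doc.ownerId === session.userId; // ownership check defeats IDOR
}
```

    Because the check runs server-side before the query result is returned, a guessed document ID yields a 403 instead of another user's record.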

    A02: Cryptographic Failures

    AI tools routinely generate code with cryptographic weaknesses. These are not exotic attacks -- they are basic failures that automated scanners catch immediately:

    • Hardcoded secrets -- API keys, database URLs, and JWT secrets embedded directly in source files. AI generates placeholder values that developers ship to production.
    • Weak hashing -- Using MD5 or SHA-256 for password hashing instead of bcrypt, scrypt, or Argon2. AI defaults to the most common (not the most secure) hashing functions.
    • No HTTPS enforcement -- Missing redirect from HTTP to HTTPS, or API calls made over plain HTTP. Sensitive data transmitted in cleartext.
    • Insecure randomness -- Using Math.random() for session tokens, reset codes, or any security-critical value instead of crypto.getRandomValues().

    Fix these by moving all secrets to environment variables, using established password hashing libraries (bcrypt with cost factor 12+), enforcing HTTPS at the infrastructure level, and using cryptographically secure random number generators.
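    A sketch of the last two fixes using only Node's built-in crypto module (scrypt is one of the acceptable KDFs named above; the function names `makeResetToken`, `hashPassword`, and `verifyPassword` are illustrative):

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Security-critical tokens come from a CSPRNG, never Math.random().
function makeResetToken(): string {
  return randomBytes(32).toString("hex"); // 256 bits of entropy
}

// Password hashing with Node's built-in scrypt (a memory-hard KDF);
// each password gets its own random salt.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  // constant-time comparison avoids timing side channels
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

    In production, a maintained library (bcrypt, argon2) with tuned cost parameters is still preferable; the point is that neither MD5 nor a bare SHA-256 appears anywhere in the password path.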

    A03: Injection

    Injection vulnerabilities remain common in AI-generated code despite decades of awareness. Three patterns dominate AI output:

    • SQL injection -- String concatenation in database queries instead of parameterized statements. AI generates `SELECT * FROM users WHERE id = '${userId}'` instead of using prepared statements.
    • NoSQL injection -- MongoDB queries with user input passed directly as query operators, allowing attackers to modify query logic with operators like $gt or $ne.
    • XSS in templates -- User-supplied content rendered with dangerouslySetInnerHTML in React, or without escaping in server-rendered templates. AI generates quick solutions that bypass React's built-in XSS protection.

    Prevention requires parameterized queries for all database operations, input validation on both client and server, and using framework-provided escaping (React's JSX auto-escaping) rather than raw HTML insertion.
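    Two of these defenses can be sketched without a database driver. The helper names (`rejectOperators`, `buildUserQuery`) are illustrative, but the principles are the ones above: reject objects where a scalar is expected, and keep query text and parameters separate:

```typescript
// Guard against NoSQL operator injection: input meant to be a plain value
// must not smuggle in query operators like { $gt: "" } or { $ne: null }.
function rejectOperators(input: unknown): string {
  if (typeof input !== "string") {
    throw new Error("expected a scalar value, got an object/array");
  }
  return input;
}

// For SQL, the same principle in driver-agnostic form: the query text and
// the parameters travel separately, so user input is never part of the SQL.
function buildUserQuery(userId: string): { text: string; params: string[] } {
  return { text: "SELECT * FROM users WHERE id = $1", params: [userId] };
}
```

    Note that `buildUserQuery` returns the same query text no matter what the input is -- a classic payload like `1' OR '1'='1` arrives at the database as an inert parameter value, not as SQL.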

    A07: Identification & Authentication Failures

    AI-generated authentication systems frequently have implementation flaws that undermine the entire security model:

    • Weak session management -- Sessions that never expire, tokens stored in localStorage (vulnerable to XSS), or session IDs that can be predicted.
    • Missing MFA -- AI rarely generates multi-factor authentication unless explicitly prompted. For sensitive applications, this leaves accounts protected by passwords alone.
    • Insecure password reset -- Reset tokens sent in URL parameters, tokens that never expire, or reset flows that leak whether an email exists in the system.
    • No brute force protection -- Login endpoints without rate limiting or account lockout, allowing unlimited password attempts.

    Use established auth libraries (Supabase Auth, NextAuth, Clerk) instead of AI-generated custom auth. Add rate limiting to login endpoints. Store tokens in httpOnly cookies, not localStorage.
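    The rate-limiting advice can be sketched as a fixed-window counter. This is a deliberately minimal in-memory version (the `allowLoginAttempt` name and the 5-attempts-per-15-minutes policy are illustrative); a production deployment would back it with a shared store such as Redis so limits survive restarts and apply across nodes:

```typescript
// Minimal in-memory brute-force guard (sketch). Key on something like
// `${ip}:${email}` so one attacker can't lock out or enumerate accounts.
const attempts = new Map<string, { count: number; windowStart: number }>();
const MAX_ATTEMPTS = 5;
const WINDOW_MS = 15 * 60 * 1000; // 15-minute fixed window

function allowLoginAttempt(key: string, now = Date.now()): boolean {
  const entry = attempts.get(key);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    attempts.set(key, { count: 1, windowStart: now }); // start a new window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS; // block once the window is exhausted
}
```

    The session-token half of the fix is a header, not code: issue the token via `Set-Cookie: session=...; HttpOnly; Secure; SameSite=Lax`, so script running in the page (i.e. any XSS payload) cannot read it the way it can read localStorage.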

    A09: Security Logging & Monitoring Failures

    This is the silent risk in AI-built apps. AI code generators almost never add security logging, audit trails, or monitoring. When a breach occurs, there is no record of what happened:

    • No audit logs -- Failed login attempts, permission changes, and data access are not recorded. Breaches go undetected for weeks or months.
    • console.log in production -- AI uses console.log for debugging, which either leaks information in browser consoles or clutters server logs with noise while missing actual security events.
    • Missing error handling -- Unhandled exceptions crash the application or return stack traces to users, exposing internal implementation details.
    • No alerting -- Even when logs exist, there are no alerts for suspicious activity like multiple failed logins, unusual data access patterns, or privilege escalation attempts.

    Add structured logging for all authentication events, authorization failures, and data access. Use a centralized logging service (Datadog, Sentry, Logtail) and set up alerts for anomalous patterns.

    How to Audit AI Code Against OWASP

    A practical approach to auditing AI-generated code for OWASP compliance:

    1. Run automated SAST -- Use Semgrep with OWASP rulesets, or SonarQube's security hotspot detection. These catch hardcoded secrets, SQL injection, and XSS patterns.
    2. Check every route -- List all API endpoints and verify each has authentication and authorization middleware. No exceptions.
    3. Search for secrets -- Use tools like gitleaks or trufflehog to scan the entire git history for leaked credentials.
    4. Test database access -- For Supabase apps, use the anon_key to attempt reading and writing every table. If you can access data you shouldn't, RLS is missing or misconfigured.
    5. Verify input validation -- Send malformed data to every form and API endpoint. Check that the server rejects invalid input, not just the frontend.
    6. Review auth flows -- Test password reset, session expiry, and role-based access. Try accessing admin features as a regular user.
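    As a crude illustration of step 3, a first-pass secret scan can be a few regexes over source text. This is a sketch only -- the patterns below are illustrative (a Stripe-style key prefix, an AWS access key ID shape, and a generic assignment pattern), and dedicated tools like gitleaks or trufflehog match hundreds of provider-specific formats plus full git history:

```typescript
// Toy secret scanner: returns the 1-based line numbers that look like
// hardcoded credentials. Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS = [
  /sk_live_[0-9a-zA-Z]+/,                                  // Stripe-style live key
  /AKIA[0-9A-Z]{16}/,                                      // AWS access key ID
  /(api[_-]?key|secret|password)\s*[:=]\s*["'][^"']+["']/i // generic assignment
];

function findSecretLines(source: string): number[] {
  return source
    .split("\n")
    .flatMap((line, i) => (SECRET_PATTERNS.some((p) => p.test(line)) ? [i + 1] : []));
}
```

    Running something like this in CI catches the most obvious leaks early, but it complements the dedicated scanners in step 3 rather than replacing them.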

    Automated scanning catches the mechanical issues -- hardcoded secrets, known injection patterns, weak crypto. Logic flaws, access control gaps, and business logic bypasses only surface through manual testing.

    Scan Your AI Code for OWASP Vulnerabilities

    VibeEval automatically checks AI-generated code against OWASP Top 10 categories. Get actionable findings for broken access control, injection flaws, and cryptographic failures in minutes.

    Start Free OWASP Scan