FREE SECURITY SELF-AUDIT FOR AI-BUILT APPS

Five free scanners. No signup. Paste a URL into each, get a clean per-category report. Together they cover the top five vulnerability classes we see in AI-built apps — the same ones in the 2026 benchmark.

If you shipped an app with Lovable, Bolt, Cursor, Replit, or V0, and you do not have a security background, start here. Each scanner focuses on a single vulnerability class; none requires a card.

The order matters. Items 1 and 2 catch what loses every user’s data. Items 3 through 5 catch what escalates one user into many.

This is the audit we tell solo founders to run before they accept their first paying user. If you can’t do it yet because the app isn’t deployed, run the pre-launch checklist instead — same priorities, framed for the build phase.

The five free scanners

1. Supabase RLS Checker →

Tests every public Supabase table with your anon key. Reports which tables return rows without authentication — those are publicly readable.
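
What the checker automates is a probe you can reproduce by hand. A sketch of the manual equivalent against a hypothetical `profiles` table (the project URL and anon key are placeholders; substitute your own):

```shell
# If this returns rows without a logged-in session, RLS is
# missing or permissive on the profiles table.
curl "https://YOUR_PROJECT.supabase.co/rest/v1/profiles?select=*" \
  -H "apikey: YOUR_ANON_KEY" \
  -H "Authorization: Bearer YOUR_ANON_KEY"
```

The anon key is shipped to every visitor by design, so anything it can read, the public can read.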

Catches. The single biggest failure mode in Lovable, Bolt, and Cursor apps. 59% of apps in the 2026 benchmark had at least one RLS gap.

What the report looks like. A list of every table the anon key can reach, color-coded by whether it returned data. Tables in red are publicly readable; tables in yellow allow anon writes; tables in green require auth. The recurring shape we keep seeing is a profiles table that returns every user’s email and avatar to anyone with curl — see Supabase RLS misconfiguration atlas for the patterns.

Fix prompt. “For each table flagged in the RLS report, generate a Supabase migration that enables RLS and adds a SELECT policy restricting rows to auth.uid() = user_id. Add equivalent INSERT, UPDATE, and DELETE policies that require ownership. Output the SQL migration file and the CLI command to apply it.”
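
The prompt above typically produces a migration shaped like this. A sketch for a hypothetical `public.notes` table owned via a `user_id` column; your table and column names will differ:

```sql
-- Enabling RLS with no policies blocks all anon access by itself.
alter table public.notes enable row level security;

-- Owners can read their own rows.
create policy "notes_select_own" on public.notes
  for select using (auth.uid() = user_id);

-- Writes also require ownership.
create policy "notes_insert_own" on public.notes
  for insert with check (auth.uid() = user_id);

create policy "notes_update_own" on public.notes
  for update using (auth.uid() = user_id);

create policy "notes_delete_own" on public.notes
  for delete using (auth.uid() = user_id);
```

Apply with `supabase db push` or paste into the SQL editor, then re-run the checker to confirm the table turns green.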

Time. ~10 seconds to scan, ~5 minutes to fix per table.

2. Token Leak Checker →

Loads your URL like a normal visitor and runs 100+ key signatures against the JavaScript bundle. Catches Stripe secret keys, Supabase service-role keys, AWS keys, OpenAI keys, and 90+ more formats.
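
Signature scanning is pattern matching against well-known key prefixes. A minimal sketch in Python, covering three of the 100+ formats; the prefixes are public knowledge, and the patterns here are simplified for illustration:

```python
import re

# Three well-known key formats, simplified. The real signature set is
# far stricter and far larger.
SIGNATURES = {
    "stripe_secret": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "openai_key": re.compile(r"sk-[0-9a-zA-Z_-]{20,}"),
}

def scan_bundle(text: str) -> list[tuple[str, str]]:
    """Return (key_type, matched_string) pairs found in a JS bundle."""
    findings = []
    for name, pattern in SIGNATURES.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings

# A fabricated key, valid nowhere -- this is what a leaked Stripe
# secret looks like inside a shipped bundle.
bundle = 'fetch(u, {headers: {Authorization: "Bearer sk_live_' + "a" * 24 + '"}})'
print(scan_bundle(bundle))  # flags one stripe_secret finding
```

The same logic is why the scanner can run without signup: it only needs the public bundle your URL already serves to every visitor.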

Catches. 41% of apps in the benchmark ship at least one secret key in their frontend bundle.

What the report looks like. A list of every match by key type, with the file and line where it was found. Each finding includes the rotation URL for the relevant provider — the right move for a leaked key is always rotate first, then remove from code, never the other way round.

Fix prompt. “Move every secret found in the token leak report from the client bundle to a server-side environment variable. Replace the direct API calls with calls to a new server route that proxies the request. For Stripe, use STRIPE_SECRET_KEY only on the server; for Supabase, use the service-role key only in server functions; for OpenAI, never call from the browser. Update .env.example accordingly.”

Time. ~15 seconds to scan, ~10 minutes per key to rotate and remediate.

3. Firebase Scanner →

If you used Firebase as your backend, this checks firestore.rules and storage.rules for the open-read patterns that ship in default AI-generated Firebase apps.

Catches. The Firebase equivalent of missing RLS. Firebase apps were a smaller share of the benchmark but had the highest critical rate of any backend. The recurring default is match /{document=**} { allow read, write: if true; } — a wildcard that opens every collection to the public.

What the report looks like. A flat list of every rule block, marked safe / risky / open. The “open” ones are the priority; the “risky” ones (e.g., if request.auth != null without a tenant check) are the second pass.

Fix prompt. “Rewrite firestore.rules to require request.auth != null on every collection, and to require request.auth.uid == resource.data.user_id on collections that are owned by a single user. Reject the wildcard match /{document=**} rule unless explicitly justified. Output the new rules file and the deploy command.”
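
A sketch of what the rewritten rules tend to look like, assuming a user-owned `notes` collection (collection and field names are placeholders):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /notes/{noteId} {
      // Existing docs: signed-in owner only.
      allow read, update, delete: if request.auth != null
          && request.auth.uid == resource.data.user_id;
      // New docs: the incoming payload must name the caller as owner.
      allow create: if request.auth != null
          && request.auth.uid == request.resource.data.user_id;
    }
    // With the /{document=**} wildcard removed, everything not
    // matched above is denied by default.
  }
}
```

Note the `create` case reads `request.resource.data` rather than `resource.data`, because the document does not exist yet when the rule runs.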

Time. ~5 seconds to scan, ~15 minutes to rewrite the rules file.

4. Security Headers Checker →

Inspects HSTS, CSP, X-Frame-Options, X-Content-Type-Options, and CORS. Reports which headers are missing or set to permissive values.

Catches. CORS allow-all on credentialed endpoints (23% of apps), missing HSTS (most apps shipping their first deploy), no CSP (almost every app at first deploy).

What the report looks like. A grade per header (A through F), with the recommended value alongside the current one. CORS is the highest-impact item — see CORS credentials misconfig for why a credentialed allow-all is a credential-stuffing pivot, not just a “best practice” miss.

Fix prompt. “Add the following headers to the production deploy config: HSTS with max-age=31536000; includeSubDomains; preload; CSP starting with default-src 'self'; X-Frame-Options: DENY; X-Content-Type-Options: nosniff. Replace any Access-Control-Allow-Origin: * with an allowlist of our actual frontend origin. Provide the config diff for Vercel / Netlify / Cloudflare based on what is currently in use.”
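
On Vercel, the header portion of that prompt maps to a `vercel.json` roughly like this (a sketch; adjust the CSP to the assets your app actually loads before shipping, or it will break them):

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Strict-Transport-Security", "value": "max-age=31536000; includeSubDomains; preload" },
        { "key": "Content-Security-Policy", "value": "default-src 'self'" },
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "X-Content-Type-Options", "value": "nosniff" }
      ]
    }
  ]
}
```

Netlify and Cloudflare have equivalent config blocks; the header values are identical, only the file format changes.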

Time. ~3 seconds to scan, ~10 minutes to apply via deploy config.

5. Vibe Code Scanner →

Audits the AI-generated repo for common patterns — exposed config files, default secrets, debug routes left open, and the AI-specific anti-patterns we have catalogued across every Lovable, Bolt, Cursor, and v0 app we have looked at.

Catches. The patterns specific to AI generators that other scanners do not look for: source maps shipped to production, /_next/static/ paths leaking the framework version, .env.example committed with real keys, exposed Vercel deployment URLs that bypass auth, debug-only API routes left enabled.

What the report looks like. Findings grouped by source (frontend bundle, deploy config, public path leaks), each tagged with severity. The severity reflects exploitability, not novelty — a leaked source map is high severity even though the bug class is mundane, because it makes every other attack cheaper.

Fix prompt. “For each finding in the vibe-code scan report, generate the minimum config change that resolves it. For source maps, set productionBrowserSourceMaps: false in next.config.js. For exposed config files, add them to .gitignore and remove from git history. For debug routes, gate them behind NODE_ENV === 'development'.”
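
The source-map fix in that prompt is a one-line config change. Shown for Next.js, since that is what v0 exports produce; other bundlers have an equivalent flag:

```javascript
// next.config.js -- sketch; merge into your existing config.
module.exports = {
  // Next.js defaults to false; setting it explicitly guards against
  // a template or plugin that turned it on.
  productionBrowserSourceMaps: false,
};
```
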

Time. ~30 seconds to scan, ~20 minutes to apply the cluster of fixes.

What the five together do not catch

The free suite covers the top five categories in the benchmark. It does not cover:

  • BOLA — Broken Object-Level Authorization across roles requires authenticated probing. Manually tested in the pre-launch checklist, automated in the full VibeEval scan. See also the BOLA in AI-generated CRUD pattern.
  • Self-editable role fields — requires authenticated probing of the profile / settings endpoints with a modified payload. Manual test: set is_admin: true in a profile-update request and see if it sticks.
  • SSRF, IDOR variants, LLM prompt injection — require an agent with credentials and stateful probing. Detail in the SSRF / open redirect / OAuth pattern and indirect prompt injection.
  • Dependency CVEs — requires source-code or package-manifest access. The Package Hallucination Scanner is the closest free equivalent for the AI-specific subset (phantom packages); for full CVE coverage, run npm audit and pip-audit locally.
  • Rate limits on auth endpoints — requires sending bursts of traffic safely. Easy to skip and easy to regret; the full scan probes this without false positives, while manual testing risks DoSing yourself.
  • Race conditions in money paths — see race conditions in money paths. These need concurrent requests with timing control to surface.
  • Webhook trust — see Stripe webhook and paid-trust. The bug is “we trust the redirect URL, not the webhook” and requires a paid-flow test to surface.
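
Of the gaps above, the self-editable role field is the one worth testing by hand today. A sketch of that manual test, assuming a Supabase-style REST endpoint (URL, user id, and tokens are placeholders; your endpoint shape will differ):

```shell
# Log in as a normal user, then replay the profile update with an
# escalated field. If the response (or a follow-up GET) shows
# is_admin as true, the endpoint trusts client-supplied role fields.
curl -X PATCH "https://YOUR_PROJECT.supabase.co/rest/v1/profiles?id=eq.YOUR_USER_ID" \
  -H "apikey: YOUR_ANON_KEY" \
  -H "Authorization: Bearer YOUR_USER_JWT" \
  -H "Content-Type: application/json" \
  -d '{"is_admin": true}'
```
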

Those gaps are what the full VibeEval agent covers — 310 probes, scheduled, with diff alerts when something regresses. If you only need a one-shot pre-launch verdict, the five free scanners are enough.

The 30-minute self-audit playbook

A clean run of the five scanners takes about 60 seconds. Working through the findings is the work. Here is the order we tell founders to follow:

Minute 0–2: scan all five. Open each tool in a tab, paste the URL, kick off the scan. Do not start fixing yet — collect every finding first so you triage in priority order, not arrival order.

Minute 2–5: rotate any leaked keys. If the Token Leak Checker finds anything, stop and rotate the key in the provider dashboard before doing anything else. A key that has been in your bundle for a week is a key an attacker may already have. Rotation is the only action that closes the window; remediation in code is the second step.

Minute 5–15: fix RLS / Firebase rules. Every “red” table in the RLS report or every if true rule in the Firebase report. This is the largest category by data-loss impact and the cheapest to fix — the migration is one to three lines per table.

Minute 15–25: fix headers and CORS. Apply the recommended values from the Security Headers report. CORS first (highest impact), HSTS second, CSP last (the most likely to break a feature, so deploy and watch for browser console errors).

Minute 25–30: triage the Vibe Code Scanner findings. Anything tagged “high” gets a fix on the same day; anything “medium” goes into a backlog; “informational” findings can wait. Remove source maps from production immediately if found.

Minute 30: re-scan. Re-run all five and confirm the categories you fixed turned green. Anything still red goes onto a follow-up list with a single owner and a deadline.

If you complete the playbook and every scanner is green, you are above the median for AI-built apps in launch readiness. You are not pentested, but you are no longer the easy target.

Run order

  1. Run the Supabase RLS Checker — fix anything red before continuing.
  2. Run the Token Leak Checker — rotate any leaked key immediately, before fixing anything else.
  3. Run the Firebase Scanner if applicable.
  4. Run the Security Headers Checker — fix CORS and HSTS.
  5. Run the Vibe Code Scanner — work through findings in severity order.

After all five are clean, run the full VibeEval scan to verify the long tail. Most builders find one or two additional issues from the longer probe set.

What to do with the output

Every finding from every scanner ships with a fix recommendation phrased as a paste-ready prompt for Claude Code or Cursor. Copy the prompt, paste it into your AI editor, accept the change, redeploy, re-scan to confirm.

That loop — scan, prompt, fix, re-scan — takes about 30 minutes for a typical Lovable or Bolt app. The unfair advantage of vibe-coding security is that the AI that built the app is also the cheapest tool to fix it.

Two practical notes on this loop:

The same model that wrote the bug usually cannot see it. Self-review hits the same blind spots that produced the original code. The scanner is the second opinion. If the AI editor pushes back on a fix prompt (“this is already secure”), trust the scanner over the model — re-scan after the change and look at the result, not at the assistant’s claim.

Fix one finding per turn. Do not paste the entire report into one prompt. AI editors handle one focused change well; they make a mess of bundled changes. The cost of one prompt per finding is small; the cost of a half-applied bulk change is large.

After the audit: what to monitor

Findings regress. The highest-churn categories — RLS, BOLA, leaked keys, CORS — drift every time you add a feature or change deploy config. Three habits keep the floor in place:

  • Re-scan after every deploy. Bookmark the five tools. The full re-run is under a minute.
  • Re-run the playbook before any external launch. Product Hunt, press, paid traffic — anything that brings strangers to the URL.
  • Watch the categories most likely to regress in your stack. Supabase apps regress on RLS when new tables ship. Firebase apps regress on rules when the schema changes. v0 / Bolt apps regress on token leaks when new env vars land.

Continuous monitoring is what the full VibeEval scanner exists for, but you don’t need it on day one — you need it once you have enough surface area that manual re-scans stop happening on their own. A useful trigger: switch to scheduled scans the first week you ship two deploys without re-running the manual playbook.

COMMON QUESTIONS

01
Is this really free?
Yes. The five scanners linked from this page do not require signup, do not store your URL, and do not gate the report behind email. They are deliberately single-purpose so each one can be used in isolation.
02
How does this compare to the paid VibeEval scanner?
The free scanners cover the top five vulnerability categories we see in AI-built apps — about 80% of the failure profile by count, weighted toward the categories that actually leak data. The paid scanner adds 305 more probes (SSRF, prompt injection, dependency CVE, role escalation, advanced BOLA, full auth flow) and runs them on a schedule with diff alerts. Use free for a one-shot pre-launch check, paid for ongoing monitoring.
03
Can I run these against an app I do not own?
All five scanners only do what a normal browser would do — they fetch your URL, parse the response, and check for misconfigurations visible from outside. No exploit attempts, no auth bypass, no destructive payloads. That makes them safe to run against any URL, but you should still ask permission before running them against an app you do not own.
04
Do you store my URL or scan results?
Server-side, the URL is logged anonymously for rate-limit and abuse purposes; results are not stored. Anyone with the URL could see the same finding, so the scan does not create new exposure. We do not aggregate or share scan results with anyone.
05
Why five tools instead of one?
Each one does a single thing well. RLS testing requires actually querying the Supabase REST API. Token leak detection requires loading the full JS bundle. Header checks are HEAD requests. Dependency scanning is unrelated to all of those. Bundling them into one tool would lose the per-category clarity that makes each useful as a citation in a fix prompt.
06
I am on Bolt / Cursor / Replit / V0, not Supabase or Firebase. Are these still relevant?
Four of the five do not care which builder you used. The Token Leak Checker scans your bundle regardless of how it was generated. Security Headers and Vibe Code Scanner work on any URL. Only the Supabase RLS Checker and Firebase Scanner are backend-specific — and most Lovable, Bolt, and Cursor apps end up on Supabase or Firebase even if the founder did not pick the backend explicitly.
07
Will running these set off any alerts on my hosting provider?
Unlikely. Each scan is a small number of read-only HTTP requests from a single IP. That falls well below the threshold of any rate limiter or WAF rule we have seen trigger. If your provider does flag the traffic, the IP and rate are visible in the per-tool documentation so you can allowlist them.
08
How long until findings regress after I fix them?
RLS and BOLA regress every time you add a new table or endpoint. Token leaks regress when a new env var lands in client code. Headers regress when the deploy config changes. The fastest practical loop is: scan after every deploy, paste any finding back into the AI editor as a fix prompt, redeploy, re-scan. The whole loop is under ten minutes for most findings.

WANT EVERYTHING IN ONE SCAN?

The full VibeEval agent runs all five plus 305 more probes. 14-day trial, no card.

RUN FULL SCAN