← BACK TO UPDATES

LOVABLE SECURITY REPORT APRIL 2026: 380K APPS SCANNED, 5K LEAKING, 5 BRANDS PHISHED ON LOVABLE'S DOMAIN

TEST YOUR APP NOW

Enter your deployed app URL to check for security vulnerabilities.

380,000

Vibe-coded assets RedAccess found publicly accessible (Lovable, Base44, Replit, Netlify)

5,000

Of those exposing genuinely sensitive corporate or personal data

1.3%

Apparent exposure rate across the scan

5

Major corporate brands impersonated by phishing sites hosted on Lovable’s own subdomains

Is Your Lovable App Vulnerable?

Enter your deployed Lovable app URL to check for the vulnerabilities described in this report.

RedAccess: 380,000 Apps Scanned, 5,000 Leaking

In April 2026, Israeli cybersecurity firm RedAccess, led by Dor Zvi, conducted a scan of publicly accessible apps across four widely used vibe-coding platforms: Lovable, Base44, Replit, and Netlify. The findings were published externally on May 7, 2026 by WIRED, with Axios independently verifying and subsequent coverage by Cloud Tech Report, PPC.land, Intelekt Žinios, and AI2Work.

The headline numbers, identical across coverage:

  • 380,000 publicly accessible assets — applications, databases, and related infrastructure — built with the four named platforms
  • ~5,000 (1.3%) apps containing sensitive corporate information accessible without authentication

The mechanism, per Zvi: privacy settings on several vibe-coding platforms default to publicly accessible. Users must manually switch them to private. Google then indexes the public URLs. The defaults do the work.

Zvi’s quote, repeated across outlets: “I don’t think it’s feasible to educate the whole world around security. My mother is vibe coding with Lovable, and no offense, but I don’t think she will think about role-based access.”

What Was Exposed

Categories of sensitive material found in the 5,000-app subset, per Cloud Tech Report and VexoWire:

  • Patient conversations at a children’s long-term care facility
  • Hospital doctor-patient summaries
  • Incident response records at a security company
  • Ad-purchasing strategies and go-to-market documents
  • Clinical trial information (per AI2Work)
  • Unredacted customer conversations with chatbots

Depending on jurisdiction and the data involved, these healthcare and financial exposures may trigger regulatory obligations under HIPAA, UK GDPR, or Brazil’s LGPD — flagged in the Cloud Tech Report writeup.

Wix (Base44’s parent company) responded through head of public relations Blake Brodie, per PPC.land. Lovable’s response to the scan, at the time of writing, has not been widely reported in the coverage we tracked.

Phishing Sites on Lovable’s Own Domain

Beyond the data exposure, RedAccess flagged phishing sites hosted on Lovable’s own subdomain infrastructure. The sites impersonated five major corporate brands, listed in PPC.land, Intelekt Žinios, and AI2Work:

  • Bank of America
  • Costco
  • FedEx
  • Trader Joe’s
  • McDonald’s

These sites were built using Lovable’s AI coding tools, then left on Lovable’s subdomains. PPC.land’s writeup notes they “appeared to have been built using Lovable’s AI coding tools, then left on Lovable’s domain infrastructure.”

This is a category beyond “developers shipping insecure apps.” It is the platform as an unwitting phishing host. Lovable’s domain authority — the very thing that helps legitimate apps rank on search engines and pass spam filters — makes any subdomain a more credible phishing surface than a freshly registered look-alike domain. The credential-collection step inherits trust that the actual phisher did not earn.

Comparison: The October 2025 Baseline

For historical anchoring, Cloud Tech Report cites prior research from October 2025 in which Escape.tech scanned 5,600 publicly available vibe-coded applications. RedAccess’s number is 68 times larger by total assets. The category is not new. The scale is.

In other words: the structural risk was visible last fall. April 2026 is the month the scan output stopped fitting on a slide.

TrustFall: The AI Coding Agent as Supply-Chain Attack Vector

In parallel with the exposure-side reporting, The Cipher disclosed a new attack class — TrustFall — targeting AI coding agents at the CLI layer: Claude Code, Cursor CLI, Gemini CLI, GitHub Copilot CLI. Lovable is named in The Cipher’s writeup among the platforms whose users are exposed because their developer workflows depend on these agents.

The mechanic, in three steps:

  1. AI coding agents ingest environment context — codebase, dependencies, project conventions, skill files. That ingestion is what makes them useful.
  2. A malicious open-source package, compromised repo, or poisoned template can plant instructions inside that context.
  3. The agent, doing what it was designed to do, runs those instructions — with file system access, shell execution rights, and the developer’s OAuth tokens.

The Cipher’s framing: “AI coding agents have gone from curiosity to critical infrastructure in about 18 months. The attack surface these agents represent has scaled with their adoption — but the security model largely hasn’t.”

For a Lovable developer who uses Claude Code or Cursor to iterate on their Lovable project locally before pushing, this is a direct supply-chain risk on top of the platform-level exposure.

CLI-Anything and the Agent-Integration Layer

Mind Fortunes reported on May 6, 2026 that CLI-Anything, a tool from the Data Intelligence Lab at the University of Hong Kong, has reached 30,000+ GitHub stars since its March 2026 launch. The pitch: point it at any source repo and it auto-generates a structured CLI that AI coding agents — Claude Code, Codex, OpenClaw, Cursor, GitHub Copilot CLI — can drive with a single command.

Mind Fortunes’ security framing names a third layer of supply-chain risk that SAST and SCA tools do not cover:

The agent-integration layer — config files, skill definitions, and natural-language instruction sets that guide AI agents on how to interact with software.

A poisoned CLI-Anything-generated CLI is not a tampered binary and not a vulnerable dependency. It is an instruction-set poisoning that no standard supply-chain scanner has a detection category for. The OpenClaw proof shipped alongside the disclosure: one command, any repo, instant agent backdoor.

For Lovable developers whose projects pull in open-source dependencies — almost all of them — this is the same class of attack The Cipher named under TrustFall, surfaced one layer down the stack.

DDIPE: Document-Driven Implicit Payload Execution

AI Curated defined a related attack class on May 7, 2026: Document-Driven Implicit Payload Execution (DDIPE). The pattern: malicious payloads hide inside what looks like helpful documentation or configuration templates. The AI agent reads the doc, follows the instructions, executes the payload.

A primary indicator of a bad DDIPE-style doc, per AI Curated: instructions that ask the agent to fetch a URL, run a shell command, or modify environment variables under the guise of “setup steps.”

For Lovable developers consuming community-shared prompts, agent skills, or “secure coding templates,” DDIPE is the named version of the risk those artifacts carry. The Lovable mega-prompt market that grew through Q1 2026 (community-shared prompts for OWASP ASVS controls, RLS templates, input-validation helpers) is precisely the surface DDIPE exploits.

The Defender Response: Bandit for AI-Generated Python

On the defender side, CodeCut published on May 10, 2026 a guide to using Bandit — the PyCQA static-analysis tool — specifically to audit AI-generated Python code. The framing: GitHub Copilot, Cursor, and Claude Code now generate a large share of production Python; the output looks polished enough that pull requests get approved without anyone reviewing every line.

For Lovable apps that ship Python backends or Edge Functions, Bandit is a free, cheap addition to the CI pipeline. It catches a subset of issues — the cleanly named ones, in source — and does not catch the integration-layer problems RedAccess found. Both are necessary. Neither is sufficient.

The Pattern Across April

Five threads in the same month, one shape:

  1. RedAccess scan — 380K assets public by default, 5K leaking sensitive data
  2. Phishing impersonation — Lovable’s domain authority weaponized against five major brands
  3. TrustFall — AI coding agents as the new supply-chain vector
  4. CLI-Anything — the agent-integration layer as a third supply-chain category
  5. DDIPE — documentation and templates as implicit payload carriers

None of these required novel exploits. None of them are vulnerabilities a SAST tool would have caught. All five live in the integration layer: defaults, permission semantics, domain reputation, agent context, document trust. The code, in each case, was doing what it was asked. What it was not asked was security.

This is the pattern we covered in our integration-layer post and the weekly digests through the period. April 2026 is the month the empirical evidence caught up with the structural argument.

What Lovable Developers Should Do Now

If you have a Lovable app — particularly one shared publicly or built before April 2026 — here is the minimum action list, derived from the findings above:

  1. Switch your project from public to private. This is the single biggest delta. The default is public; most users do not change it. Settings → Project Privacy.
  2. Audit your subdomain for impersonation. Search Google for site:lovable.app "your-company-name" and site:lovable.dev "your-company-name". If you find a project you don’t own, report it.
  3. Rotate your Supabase service-role key. If it ever appeared in committed code, a public project, or a chat with the platform, assume it is compromised. Generate a new one.
  4. Enable RLS on every table. Supabase → Authentication → Policies. If RLS is disabled on any user-data table, anyone with your anon_key is reading the table right now.
  5. Write row-ownership policies, not just role checks. auth.uid() = user_id is the policy that actually protects user data. auth.role() = 'authenticated' says “any logged-in user can read this row” — that is the BOLA pattern.
  6. Audit the agent context. If your local dev uses Claude Code, Cursor, or Copilot CLI on the Lovable codebase, treat any skill file, prompt template, or community-shared config as untrusted code until reviewed. Run our validation loop on every external entry point.
  7. Run an end-to-end scan against the deployed app. Static review cannot tell you whether your live API actually enforces the policy. The only way to know is to test it. VibeEval scans for this; a handful of competitors do too.
  8. Add Bandit (or equivalent) to CI. Free static checks on the AI-generated code catch a subset of issues at the code layer. Use them as the first gate, not the only one.

Sources

This report compiles public reporting from the period. Every claim and number above is traceable to one of the sources listed. VibeEval is not affiliated with Lovable, Supabase, RedAccess, Base44, Replit, or Netlify. Questions? Contact our team.

STOP GUESSING. SCAN YOUR APP.

Join the founders who shipped secure instead of shipped exposed. 14-day trial, no card.

START FREE SCAN