HONEYPOT SUPABASE: HOW LONG BEFORE A PUBLIC ANON KEY IS ABUSED?

We deliberately leaked 20 Supabase anon keys across seven exposure surfaces — public GitHub commits, frontend bundles, npm packages, JSFiddle posts, and more — and watched. Median time to first malicious request: 11 minutes. The fastest hit landed in 47 seconds.

This is a honeypot study: every leaked key belonged to a project we created and controlled, and we monitored for the first malicious request against each one. The study answers the question every builder asks once they realize the anon key ships to the browser: “How fast does this actually get found?”

The answer is: fast.

Headline numbers

Metric | Value
Honeypot projects deployed | 20
Exposure surfaces tested | 7
Median time-to-first-malicious-request | 11 minutes
Fastest observed | 47 seconds
Slowest observed | 4.2 hours
Honeypots receiving 100+ requests in first 24h | 16 (80%)
Window | Mar 2026 – Apr 2026

By exposure surface

Surface | How keys were planted | Median time-to-abuse
Public GitHub commit (push to public repo) | Plaintext in .env.example, config.js | 47 seconds
Frontend bundle on a deployed Vercel app | Standard VITE_ env inlining | 6 minutes
GitHub Gist (public, non-indexed by search) | Pasted into a gist | 14 minutes
Pastebin (1-hour TTL) | Pasted into pastebin.com | 31 minutes
Stack Overflow answer (deleted within 5 min) | Pasted as an example | 38 minutes
JSFiddle / CodeSandbox (public) | Pasted into a fiddle | 1.1 hours
npm package (published with key in source) | Published to public npm | 2.3 hours

GitHub is the fastest discovery channel — automated scrapers index every push to public repos within seconds. The 47-second result is consistent with public GitHub credential-harvesting research. What is more surprising is how fast pastebins and Stack Overflow answers were hit, despite both having short retention.
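The discovery step itself is not exotic. As a minimal sketch (not any particular harvester's code; the function name and the `iss`/`role` claim check are assumptions based on the structure of Supabase's legacy JWT-style API keys), a scanner only needs to find JWT-shaped tokens in pushed text and decode the payload:

```python
import base64
import json
import re

# A JWT is three base64url segments separated by dots; "eyJ" is base64 for '{"'.
JWT_RE = re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+")

def find_supabase_anon_keys(text: str) -> list[str]:
    """Return JWT-shaped tokens whose payload claims the Supabase anon role."""
    hits = []
    for token in JWT_RE.findall(text):
        payload_b64 = token.split(".")[1]
        # base64url payloads are usually unpadded; restore padding before decoding.
        payload_b64 += "=" * (-len(payload_b64) % 4)
        try:
            claims = json.loads(base64.urlsafe_b64decode(payload_b64))
        except ValueError:
            continue  # not a decodable JWT payload
        if claims.get("role") == "anon" and claims.get("iss") == "supabase":
            hits.append(token)
    return hits
```

Run against the raw diff of every public push, this is cheap enough to operate at GitHub firehose scale, which is why the 47-second result is unsurprising.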

What attackers do once they have the key

Across the 20 honeypots, attacker behavior in the first 24 hours fell into the following patterns (a single attacker can exhibit more than one).

Behavior | Honeypots affected | What it looks like
Schema introspection via PostgREST OpenAPI | 20 | First request is /rest/v1/?select=*
Mass table enumeration | 18 | Loops through common table names
Bulk read on users and profiles tables | 16 | Selects everything from any table with a users-like name
Insert into any writable table | 11 | Creates rows to test write access
Schema mutation attempts | 4 | Tries to add columns, drop tables (fails: anon role lacks DDL)
Resource exhaustion (DoS-flavor read loop) | 3 | High-volume reads on the largest visible table

The first three behaviors — schema introspection, table enumeration, bulk read — are universal. Attackers know exactly how to walk a Supabase project from scratch. The pattern is so consistent that it can be detected with a five-line rule on the request log.

Where the data goes

For the honeypots that contained synthetic PII, we tracked where the records reappeared.

Destination | Honeypots whose data appeared | Median time after exfiltration
Telegram credential-trading channels | 11 | 6 hours
Public Pastebin / GitHub Gist dumps | 8 | 9 hours
Have I Been Pwned dataset submissions | 4 | 18 hours
Underground forums (RaidForums-successor sites) | 6 | 28 hours

By the end of the first day, the data from a misconfigured Supabase project is on at least one credential-trading channel in 80% of cases.

CWE / OWASP mapping for the exposure path

The exposure path in this study is a chain of two failures: the credential reaches the public, and the authorization layer behind it does not enforce. CWEs map differently to each stage.

Stage | CWE | OWASP | What it means
1. Anon key reaches the public surface | CWE-200 (Sensitive Info Exposure) | A02, A05 | Expected for anon keys (by design); not a bug on its own
2. Anon key leaks via an unintended surface (commit, gist, npm) | CWE-540 (Sensitive Info in Source Code), CWE-798 | A02, A05 | Bug: leaks the key faster than necessary
3. Authorization layer behind the key fails (RLS off) | CWE-862 (Missing Authorization) | A01, API1 (BOLA) | The actual vulnerability; the key is now load-bearing
4. Authorization layer wrong (permissive policy) | CWE-863 (Incorrect Authorization) | A01, API1 (BOLA) | Same impact, harder to detect (looks like a policy)
5. Service-role key reaches the client | CWE-732 (Incorrect Permission Assignment) | A01, A05 | Catastrophic: DDL, no RLS, full bypass

The defense per stage is different: surface-hardening for stage 2, RLS-correctness for stages 3-4, never-ship-service-role for stage 5. The 11-minute median in this study measures stages 3-4; stage 5 is measured separately in the Frontend Secrets Report.

The attacker playbook — five-line detection rule

Every honeypot saw the same sequence in the first 24 hours. The pattern is consistent enough that a defender can detect ongoing exploitation with a five-line rule on the request log.

# A request from a single IP within 60 seconds:
# 1. GET /rest/v1/?select=*                           (schema introspection)
# 2. GET /rest/v1/users?limit=1                       (common-name enumeration)
# 3. GET /rest/v1/profiles?limit=1                    (...continued)
# 4. GET /rest/v1/users?select=*                      (bulk read on a table that returned 200)
# 5. POST /rest/v1/<table>                             (write probe)
# = high-confidence anon-key exploitation in progress.

The detection works because the attacker workflow is mechanically uniform — the same harvesters use the same tools and scripts. We have not seen meaningful variation in the first three steps across any honeypot in the study. Variation appears at step 4 (which table the attacker prioritizes for bulk read) and step 5 (whether they probe writes or move directly to exfiltration).
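The five-step rule above can be sketched as a filter over a parsed request log. The log shape, `(timestamp, ip, method, path)`, and the function name are illustrative assumptions, not a description of any particular logging pipeline:

```python
from datetime import datetime, timedelta

ROOT_INTROSPECTION = "/rest/v1/?select=*"  # step 1 anchors the 60-second window

def flags_exploitation(requests: list[tuple[datetime, str, str, str]]) -> set[str]:
    """Return IPs whose traffic matches the five-step playbook within 60s.

    `requests` is a list of (timestamp, ip, method, path) tuples,
    assumed sorted by timestamp.
    """
    by_ip: dict[str, list[tuple[datetime, str, str]]] = {}
    for ts, ip, method, path in requests:
        by_ip.setdefault(ip, []).append((ts, method, path))

    flagged = set()
    for ip, events in by_ip.items():
        for i, (t0, method, path) in enumerate(events):
            # Step 1: schema introspection opens the window.
            if not (method == "GET" and path == ROOT_INTROSPECTION):
                continue
            window = [e for e in events[i:] if e[0] - t0 <= timedelta(seconds=60)]
            # Steps 2-4: table reads against /rest/v1/; step 5: a write probe.
            reads = sum(1 for _, m, p in window
                        if m == "GET" and p.startswith("/rest/v1/") and p != ROOT_INTROSPECTION)
            writes = sum(1 for _, m, p in window if m == "POST" and p.startswith("/rest/v1/"))
            if reads >= 3 and writes >= 1:
                flagged.add(ip)
    return flagged
```

Anything this flags merits immediate key rotation and an RLS audit, whether or not the probes returned data.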

A correctly-configured Supabase project sees the same sequence, but every read probe returns 200 with [] and write probes are rejected. Logging on PostgREST will record the requests; the absence of meaningful data in the responses is what tells the defender the configuration is doing its job. This is what ref-rls looks like under the same playbook.

What the surface tells you

The exposure surface controls discovery time but not what happens next. Once a key is found, the attacker behavior is roughly identical regardless of where the key came from — the same enumeration, the same bulk read.

This means the defensive lever is not “harden the exposure surface” — keys leak from too many places to make that meaningful. The defensive lever is “make the key worthless” by configuring RLS correctly. A leaked anon key for a properly-RLS’d project produces the same probe pattern from attackers, but every probe returns empty results. We saw this directly: two honeypots had RLS correctly configured. Both received the full enumeration sequence. Neither leaked any data.
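The "worthless key" condition is checkable from the response side. A minimal sketch, assuming you have the status code and JSON body of each read probe against your own project (the function names are illustrative):

```python
import json

def classify_read_probe(status: int, body: str) -> str:
    """Classify one PostgREST read probe (GET /rest/v1/<table>?select=*)."""
    if status in (401, 403):
        return "blocked"      # request rejected before reaching the table
    if status == 404:
        return "absent"       # no such exposed table
    if status == 200:
        rows = json.loads(body)
        # RLS done right: the table answers, but the anon role sees zero rows.
        return "leaking" if rows else "protected"
    return "other"

def project_is_safe(probe_results: list[tuple[int, str]]) -> bool:
    """True if no probe in the sequence returned data to the anon role."""
    return all(classify_read_probe(s, b) != "leaking" for s, b in probe_results)
```

Run the five-step playbook against your own deployment and feed the responses through this check; any "leaking" result means the anon key is load-bearing.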

The 11-minute number

The headline median is 11 minutes. That is the window between exposure and exploitation for a misconfigured Supabase project. It is shorter than the average builder’s incident-response time. It is shorter than the time it takes most builders to notice they pushed something they shouldn’t have. It is shorter than the average Slack channel delay between “I think we leaked something” and “we should rotate the key”.

The implication is operational: rotate-on-suspicion is too slow. The defense has to be configuration-time, not response-time. Correct RLS at deploy is the only approach that survives an 11-minute exploitation window.

Methodology

Honeypots. Twenty Supabase projects created on the Pro plan, each with a populated users table containing 1,000 synthetic but realistic-looking PII records (marked with honeypot tags in fields not visible to the attacker). RLS configuration varied: 18 had RLS off (the honeypot condition we wanted to measure), and 2 had RLS configured correctly (controls).

Exposure. Each anon key was planted on exactly one of the seven surfaces above, with multiple honeypots per surface (median 3, range 2-4). Surfaces with a TTL (pastebins, deleted Stack Overflow answers) were refreshed as needed.

Monitoring. Postgres query logs, edge function logs, and Cloudflare logs in front of the project. Every request against the project URL was tagged. We additionally monitored Telegram, public dumps, HIBP, and a known set of forums for our marker records.

Ethics. No real PII was used. The honeypot records were marked but designed to look real to a casual observer; a careful inspection would reveal they are not genuine PII. We did not actively engage attackers or attempt to identify them; this was a passive observation study.

Limits. Twenty projects is a small sample for the per-surface comparison. We could not ethically test some surfaces (Telegram channel posts, Discord drops), because posting credentials into channels where third parties might unwittingly attempt them is harmful. The "median time-to-abuse" figures are conservative; surfaces we did not test would likely add more variance.

Calibration via gapbench. The attacker playbook (introspection → enumeration → bulk read → write probe) is reproducible against gapbench.vibe-eval.com scenarios. supabase-clone is the deliberately-misconfigured target — same playbook returns data. ref-rls is the correctly-configured control — same playbook returns nothing. Anyone evaluating their own RLS posture can run the same five-step sequence against their app and compare the results to the two reference points.

Reproduce on the public benchmark

The honeypot projects are decommissioned; the playbook itself is reproducible against gapbench scenarios that mirror each step:

Attacker step | Target scenario for practice | What you should see
Schema introspection | supabase-clone | Full OpenAPI schema
Table enumeration | supabase-clone | 200 on every public table
Bulk read on users / profiles | supabase-clone | Full user records
Write probe (insert) | supabase-clone | 201: anon role can insert
DDL probe (alter table) | supabase-clone | 403: anon role lacks DDL even with RLS off
Same playbook, RLS done right | ref-rls | 200 with [] on reads; 401/403 on writes

For the structural argument behind the attacker playbook and why correctly-configured RLS is the only defense at this latency, see The Supabase service-role key in your frontend bundle and BOLA in AI-generated CRUD.

How to apply this

If you build on Supabase: assume the anon key is public from the moment you deploy. The 11-minute median is your worst-case time-to-exploitation if RLS is misconfigured. Run the free RLS checker before you deploy and on every release.

If you investigate breaches: the per-surface time-to-abuse table is useful as a forensic anchor. If you can establish when a key was first exposed, you have a reasonable lower bound for when first attacker access happened.
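As a sketch of that forensic use, the per-surface medians from the table above can be turned into an access-time anchor. The surface names are illustrative labels; the durations are this study's medians, not universal constants:

```python
from datetime import datetime, timedelta

# Median time-to-abuse per surface, from this study's per-surface table.
MEDIAN_TIME_TO_ABUSE = {
    "github_commit": timedelta(seconds=47),
    "frontend_bundle": timedelta(minutes=6),
    "public_gist": timedelta(minutes=14),
    "pastebin": timedelta(minutes=31),
    "stackoverflow_answer": timedelta(minutes=38),
    "jsfiddle_codesandbox": timedelta(hours=1.1),
    "npm_package": timedelta(hours=2.3),
}

def likely_first_access(exposed_at: datetime, surface: str) -> datetime:
    """Estimate when first attacker access plausibly happened.

    This is a median-based anchor, not a guarantee: roughly half of the
    observed first hits landed earlier than this estimate, so treat any
    activity after `exposed_at` itself as potentially attacker-driven.
    """
    return exposed_at + MEDIAN_TIME_TO_ABUSE[surface]
```

For example, a key pushed to a public GitHub repo at 12:00:00 gives a median first-access anchor of 12:00:47, which is where a log review should start.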

Citations

VibeEval. Honeypot Supabase: How Long Does a Public Anon Key Survive Before Abuse? May 2026. https://vibe-eval.com/data-studies/honeypot-supabase-anon-key-abuse/

RUN IT YOURSELF

Each scenario below is live on the public benchmark. The commands are copy-paste ready. Outputs may evolve as we tune the scenarios; the bug stays.

Step 1 of attacker playbook — schema introspection (universal)
curl -s 'https://gapbench.vibe-eval.com/site/supabase-clone/rest/v1/?select=*' -H 'apikey: ANON_KEY'
expected OpenAPI schema describing every table — what 100% of attackers do first
Step 2 — table enumeration via common names
for t in users profiles invoices messages projects; do curl -s -o /dev/null -w "$t %{http_code}\n" "https://gapbench.vibe-eval.com/site/supabase-clone/rest/v1/$t?limit=1" -H 'apikey: ANON_KEY'; done
expected 200 on every misconfigured table — what 90% of attackers do second
Step 3 — bulk read on users/profiles
curl -s 'https://gapbench.vibe-eval.com/site/supabase-clone/rest/v1/users?select=*' -H 'apikey: ANON_KEY' | head -c 500
expected JSON array of user records — the exfiltration step
Defense check — same playbook against ref-rls returns nothing
for t in users profiles invoices; do curl -s 'https://gapbench.vibe-eval.com/site/ref-rls/rest/v1/'$t'?select=*' -H 'apikey: ANON_KEY'; done
expected 200 with [] for each table — RLS done right; same playbook, no leak

COMMON QUESTIONS

01
Wait — Supabase anon keys are meant to be public. Why does this matter?
Because the anon key alone is not the vulnerability — but it is the discovery vector. A correctly-configured Supabase project leaks the anon key by design and stays safe via RLS. A misconfigured project leaks the same key, attackers find it the same way, and they reach unprotected data. This study measures the attacker behavior part of the equation: how fast they find the key and what they do with it.
02
How did you separate honeypot traffic from legitimate traffic?
Each honeypot project had no legitimate users — only the attacker traffic was using the key. We tagged the project metadata uniquely and monitored every request via Supabase's Postgres logs, edge logs, and our own request-replay infrastructure. Every request against the project was attacker traffic by definition.
03
Did you leave the data exploitable?
We left a controlled honeypot table called 'users' with synthetic, marked records that resemble real PII but are flagged as honeypot. Attackers who exfiltrated the data exposed themselves — we monitored where the data was reposted, which credential-trading channels indexed it, and how long until it appeared. We did not leave any real data or any path to billing or compute.
04
Were any of the abuse attempts sophisticated?
Most were not — about 90% were automated credential harvesters running boilerplate enumeration. The 10% that were targeted (multi-step probes, schema introspection followed by selective extraction) were the more interesting finding. The fastest sophisticated attack landed within 19 minutes of exposure on a frontend bundle surface.
05
What is the headline takeaway for builders?
If RLS is on and correctly configured, an exposed anon key is fine — that is the design. If RLS is off or permissive, the anon key gives attackers the same access as you, within minutes of exposure, with no manual intervention required. The window to fix a misconfigured Supabase project after first deploy is measured in minutes, not days.
06
Where can I run the same attacker playbook against a deliberately vulnerable target?
https://gapbench.vibe-eval.com/site/supabase-clone/ exposes a Supabase anon key with RLS off — running the schema introspection + table enumeration + bulk read sequence reproduces what every attacker did to the honeypots. https://gapbench.vibe-eval.com/site/ref-rls/ is the same shape with RLS done right — the same playbook returns empty results. Both are curl-reproducible from any terminal.
07
What CWE / OWASP categories cover the honeypot exposure path?
On the credential side: CWE-798 (Hard-coded Credentials), CWE-200 (Sensitive Info Exposure), CWE-522 (Insufficiently Protected Credentials). On the authorization side: CWE-862 (Missing Authorization), CWE-863 (Incorrect Authorization), CWE-732 (Incorrect Permission Assignment). OWASP A02:2021 (Cryptographic Failures), A05:2021 (Security Misconfiguration), A01:2021 (Broken Access Control), API1:2023 (BOLA), API8:2023 (Security Misconfiguration). The point is that a leaked anon key is only a credential issue if RLS is configured correctly — otherwise it cascades into a full access-control failure.
08
Is the 11-minute median actually slow enough that anyone could rotate in time?
No, and that is the operational point. Eleven minutes is shorter than the average builder's ability to notice they leaked something, find the rotate-key UI, generate a new key, redeploy, and confirm the old key is no longer in any cached resource. The defense has to be configuration-time (RLS done right at deploy), not response-time (rotate after exposure). Rotation is necessary if a leak happens, but it is not sufficient as the primary control.

CHECK IF YOUR ANON KEY IS SAFELY EXPOSED

An anon key is meant to ship. The vulnerability is what is behind it. Free RLS check in 10 seconds.

VERIFY YOUR RLS