THE 'INTERNAL' SURFACE THAT ISN'T

Half the time, when you find a real-world breach, the entry point is something the team thought was 'internal.' A staging URL nobody put auth on. An ops dashboard one IP hop from production. A Kubernetes dashboard with anonymous access. AI codegen makes more of these, faster.

The scenarios referenced below run on gapbench.vibe-eval.com, a public security benchmark we operate.

“Internal” is a vibe, not a security control

I’ve stopped being surprised by how many breaches start at a URL someone described as internal. The pattern is consistent across companies of every size: someone built a dashboard, didn’t put auth on it, didn’t expect anyone outside the team to find it, and someone outside the team found it.

The find rate is high because the URLs are predictable: /admin, /ops, /dashboard, /internal, /support, /metrics, /grafana, /kibana, /sentry. Subdomains: admin, staging, dev, qa, internal, tools, ops. An attacker doesn’t need to guess; they enumerate. Certificate transparency logs publish every publicly trusted TLS certificate your team has ever issued, including the ones for staging URLs you forgot about.
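To make the certificate-transparency half concrete, here is a minimal sketch that pulls every certificate name ever logged for a domain from crt.sh (a public CT log search engine) and prints the subdomains it finds. The domain is a placeholder, and real tooling would also resolve the names and strip wildcards:

import json
import urllib.request

DOMAIN = "your-site.com"  # placeholder

# %.domain matches every subdomain that ever appeared on a logged certificate
url = f"https://crt.sh/?q=%25.{DOMAIN}&output=json"
with urllib.request.urlopen(url, timeout=30) as resp:
    entries = json.load(resp)

names = set()
for entry in entries:
    # name_value holds the certificate's DNS names, one per line
    for name in entry.get("name_value", "").splitlines():
        names.add(name.strip().lower())

for name in sorted(names):
    print(name)  # staging.*, admin.*, qa.* show up here if a cert was ever issued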

Five distinct scenarios on gapbench cover the major shapes:

Hosting panel bypass

Some deploy platforms, especially the AI-friendly ones like Lovable, Bolt, and Replit, include a “preview” mode for the panel that’s reachable without auth on the deployed URL. AI-generated apps sometimes inherit this surface. The bypass might let an attacker view logs, redeploy, or modify configuration. We won’t say which platforms are currently affected, because the situation evolves quickly and some have already fixed it; the principle stays the same: anything mounted under your domain that wasn’t deliberately protected is a potential bypass.

Live: https://gapbench.vibe-eval.com/site/hosting-panel-bypass/.

Staging environments

You shipped a feature in staging two months ago. Nobody removed the staging deploy. The staging deploy has the latest data because someone restored a production snapshot for QA. Auth on staging is “loose” — maybe a weak shared password, maybe nothing at all because it was for the team and the team is six people.

Attacker enumerates staging.your-site.com, gets in, downloads everything.

The fix:

  • Auth on staging: proper auth, the same shape as production. If your CI can’t authenticate to it, that’s uncomfortable, but it’s the right discipline; give the pipeline its own credential rather than dropping the auth.
  • Production data does not go to staging. If it has to, it gets anonymized first (a minimal sketch follows this list). There’s no third option.
  • Tear down staging environments when the feature ships. If you can’t, document the exception.
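On the anonymization point, the scrub can be lightweight. Below is a minimal sketch of the idea in Python, assuming a CSV export with email and billing_address columns; the file names, column names, and salt are placeholders, and a real pipeline would cover every PII field, not just these two:

import csv
import hashlib

SALT = "rotate-me"  # placeholder; keep the real value out of the repo

def pseudonymize_email(email: str) -> str:
    # Same input maps to the same fake address, so joins and dedup still work,
    # but the real address never reaches staging.
    digest = hashlib.sha256((SALT + email.lower()).encode()).hexdigest()[:12]
    return f"user-{digest}@example.invalid"

with open("users_prod.csv", newline="") as src, \
     open("users_staging.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["email"] = pseudonymize_email(row["email"])
        row["billing_address"] = ""  # drop fields staging never needs
        writer.writerow(row)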

Live: https://gapbench.vibe-eval.com/site/staging-env/.

Internal tools without login

Probably the most-found surface in our scans. The team builds a tool. The tool lives at tools.your-site.com or your-site.com/admin. Auth would have been a half-day of work and the team had three other things to ship. So the tool went up without auth, on the assumption that nobody knows the URL.

Everyone can find the URL. Search engines occasionally index it. Former developers blog about it. Internal Slack messages get archived to public buckets.

The fix is straightforward; the discipline is the hard part: any URL on a public domain needs auth. The exceptions list should fit on a postcard. If you have a “no auth on this one because it’s internal” tool, write it down with a sunset date.
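What the auth looks like depends on your stack. A common pattern is to gate the whole tool behind an SSO proxy at the reverse-proxy layer, so the app itself never sees an unauthenticated request. A minimal nginx sketch, assuming nginx is built with the auth_request module and an oauth2-proxy instance wired to your identity provider is listening on 127.0.0.1:4180 (the internal-tool upstream is a placeholder):

# Every request must pass the SSO check before it reaches the tool
location / {
  auth_request /oauth2/auth;
  error_page 401 = /oauth2/sign_in;
  proxy_pass http://internal-tool:3000;
}

# oauth2-proxy handles sign-in and the identity-provider callback
location /oauth2/ {
  proxy_pass http://127.0.0.1:4180;
}

# Auth subrequest: no body, just "is this session valid?"
location = /oauth2/auth {
  proxy_pass http://127.0.0.1:4180;
  proxy_pass_request_body off;
  proxy_set_header Content-Length "";
  proxy_set_header X-Forwarded-Uri $request_uri;
}

The same shape works with Cloudflare Access or IAP instead of a self-hosted proxy; the point is that identity is enforced in front of the tool, not inside it.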

Live: https://gapbench.vibe-eval.com/site/internal-tools/ and https://gapbench.vibe-eval.com/site/analytics-dashboard/.

Kubernetes dashboard

The K8s dashboard is the most consequential of all of these because of the leverage it gives. With dashboard access (and the broad service account it is typically bound to), you can read every secret in the cluster, which often means database credentials, API keys for every external service, and any auth tokens stored as secrets. From there you reach every database in the cluster, every pod, every service. One open dashboard, total compromise.

Modern Kubernetes setups don’t expose the dashboard by default, but it’s commonly enabled for debugging and forgotten. Sometimes it’s enabled with anonymous access mode “for the workshop” and never disabled.

Fix: the dashboard requires authentication. Always. If you don’t actively use it, don’t deploy it. If you do, put it behind a VPN or a bastion, not on a public URL.

Live: https://gapbench.vibe-eval.com/site/kube-dashboard-open/.

Elasticsearch and Firebase

Two more surfaces in this family that are technically databases but functionally “internal services exposed to the world”:

  • Elasticsearch open to the internet — https://gapbench.vibe-eval.com/site/elasticsearch-open/. We covered this in the naked databases article; it’s listed here because the impact pattern matches “internal surface that isn’t.”
  • Firebase rules permissive — https://gapbench.vibe-eval.com/site/firebase-rules-open/. A Firebase Realtime Database with rules set to ".read": true, ".write": true is functionally identical to a public database with no auth (see the contrast sketch after this list).
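For contrast, roughly what the two Realtime Database rule sets look like. The locked-down version is a minimum sketch scoped to a hypothetical users/$uid path, not a complete design:

WRONG (what the scenario models): the whole database is world-readable and world-writable

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

RIGHT (a minimum): signed-in users, and only their own subtree

{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    }
  }
}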

A specific incident — staging with prod data

Anonymized. A SaaS team had a staging environment at staging.example.com, set up six months earlier for a now-shipped feature. The staging deploy was reachable from the public internet because the team’s CI integration tests needed to hit it. Auth on staging was a single shared HTTP basic-auth credential, given to the test runner via env var.

The credential had been pushed to a public GitHub Actions workflow file by accident (commit, push, realize, force-push, but the bad commit was already in the GitHub events feed for ~10 minutes). Someone running automated GitHub-event scrapers picked it up. The attacker logged in to staging.

Staging had a snapshot of production data from three months earlier, restored for QA testing. The snapshot included user emails, subscription state, and partial billing records. The attacker pulled it. Three weeks later, customers reported phishing emails referencing recent invoice numbers — clearly tailored from the leaked data.

Three failures stacked: staging had production-shaped data (it shouldn’t have); staging was reachable from the public internet (it should have been VPN-only); and the credential leaked via Git history (a force-push doesn’t actually remove it from the events feed or from anything that already fetched it). Any one fix would have stopped the chain.

The cleanup was painful. Customer notification, breach disclosure to applicable regulators, monitoring for downstream phishing. The total cost dwarfed what the team had saved by skipping proper staging hygiene at setup time.

A taxonomy of “internal surface that isn’t”

Worth listing because the surface keeps expanding:

  1. Admin panels. /admin, /dashboard, /ops, /internal, /manage. Built for “the team.”
  2. Staging environments. staging.*, dev.*, qa.*, preview.*. Built for “testing.”
  3. Demo/showcase deployments. demo.*, acme-demo.*. Built for “showing prospects.”
  4. Internal tooling. tools.*, support.*, crm.*. Built for “the team.”
  5. Metrics/monitoring. metrics.*, grafana.*, prometheus.*, kibana.*. Built for “observability.”
  6. CI/build infrastructure. ci.*, build.*, jenkins.*, gitlab.*. Built for “engineering.”
  7. Container/cluster dashboards. Kubernetes dashboard, Rancher, Portainer. Built for “ops.”
  8. Database admin UIs. pgAdmin, phpMyAdmin, mongo-express. Built for “DBAs.”

Every one of these has been a breach entry point in the last few years. The pattern is the same — built for the team, deployed publicly, auth either weak or absent.

What “auth” should mean for these surfaces

Not “a shared password.” Not “a basic-auth string in CI.” The properties to require:

  1. Per-user identity. When the surface is accessed, you know which user.
  2. MFA enforced. Especially for admin surfaces. Push notifications or hardware keys, not SMS.
  3. VPN or IP allowlist for non-customer-facing surfaces. Staging, internal tools, and ops dashboards should not have public IPs.
  4. SSO integration. Single sign-on tied to your team’s identity provider, with offboarding integrated. When someone leaves the team, their access disappears within 24 hours.
  5. Audit logging. Every access logged with user, timestamp, action.

The default for “internal” surfaces should be more security than customer-facing ones get, not less. The data is more sensitive (operational data, spanning all users), the user pool is smaller (so audit logging is more useful), and the cost of compromise is higher (one breach yields production access).

Wrong fix vs right fix

# WRONG: basic-auth as the only control on staging
location / {
  auth_basic "staging";
  auth_basic_user_file /etc/nginx/.htpasswd;
  proxy_pass http://app:3000;
}
# WRONG: IP allowlist that includes a cloud provider's range
location / {
  allow 1.2.3.4;  # office IP
  allow 35.0.0.0/8;  # a /8 spanning Google Cloud (and other) ranges; anyone with a VM there gets in
  deny all;
  proxy_pass http://app:3000;
}
# RIGHT: VPN-required + your team's egress IP
location / {
  allow 10.0.0.0/8;        # internal VPN only
  allow 1.2.3.4;            # office static IP
  deny all;
  proxy_pass http://app:3000;
}
# Plus: app-level SSO check, audit log per request
# RIGHT: K8s dashboard not exposed at all
# Service set to ClusterIP, accessible only via kubectl proxy with auth
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: ClusterIP            # No public LoadBalancer, no Ingress
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443
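When someone does need the UI, a kubectl proxy session keeps it off the public internet entirely; the namespace and service names below assume the standard dashboard install:

# From an operator's machine, with a valid kubeconfig:
#   kubectl proxy
#   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
# kubectl proxy authenticates to the API server with your kubeconfig;
# the dashboard login still asks for its own token.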

Cross-stack notes

  • Cloudflare Access / Tailscale: modern alternatives to a traditional VPN. Cloudflare Access puts auth in front of any HTTP service with minimal app-level changes; Tailscale takes the service off the public internet entirely. Heavily recommended for staging and internal tools.
  • AWS API Gateway + Cognito: the same shape, on the AWS-native stack.
  • GCP Identity-Aware Proxy: the same shape on GCP, and straightforward to put in front of load-balanced or App Engine services.

The general advice: treat “internal” as a tag, not a security level. Anything reachable on a public domain has the same auth requirements as anything customer-facing. Maybe stricter, given the leverage.

How we detect

The detection pattern for the whole family:

  1. Enumerate subdomains via DNS, cert transparency, brute-force from a wordlist of common names.
  2. Enumerate paths under the apex with a wordlist of common admin/internal paths.
  3. For each candidate, send unauthenticated requests and observe responses. Anything that looks like an admin/dashboard/ops surface (titles, framework signatures, recognizable HTML) gets flagged.
  4. For Kubernetes dashboards specifically: probe known dashboard ports and paths, look for the dashboard’s API at /api/v1/.

The detections are all reachable-from-the-internet checks. They don’t need source code. The bug is “this URL exists and accepts requests”; we just confirm both halves.
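A stripped-down sketch of steps 2 and 3, to show the shape. The base URL, path list, and signature strings are truncated placeholders, and the signature check is deliberately crude (a login page will also match; the real scanner distinguishes a login wall from exposed content):

import urllib.error
import urllib.request

BASE = "https://your-site.com"  # placeholder
PATHS = ["/admin", "/ops", "/dashboard", "/internal", "/metrics", "/grafana", "/kibana"]
SIGNATURES = ["Grafana", "Kibana", "Kubernetes Dashboard", "phpMyAdmin"]  # crude

for path in PATHS:
    req = urllib.request.Request(BASE + path, headers={"User-Agent": "gapbench-probe"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
            body = resp.read(65536).decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        status, body = err.code, ""
    except urllib.error.URLError:
        continue  # unreachable; nothing to flag
    # A 200 on an admin-shaped path with a recognizable signature gets flagged
    if status == 200 and any(sig in body for sig in SIGNATURES):
        print(f"FLAG {BASE}{path}")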

Fix

The single rule: anything mounted on your public domain has authentication, or doesn’t exist there.

The longer rule: keep an inventory. Every quarter, list everything reachable on your domain. Anything you can’t immediately justify — kill, move behind VPN, or document with an explicit risk acceptance.

The framing: if you’re at the size where “we’ll put auth on it later” is the answer, you’re at the size where this is the breach you’ll have. The cost of late-adding auth is hours. The cost of the breach is months and a lot of money.

CWE / OWASP

  • CWE-306 — Missing Authentication for Critical Function
  • CWE-200 — Information Exposure
  • CWE-285 — Improper Authorization
  • OWASP Top 10 — A01:2021 Broken Access Control, A05:2021 Security Misconfiguration
  • OWASP API Top 10 — API5:2023 Broken Function-Level Authorization


COMMON QUESTIONS

01
What is hosting panel bypass?
Hosting providers and one-click deploy tools have admin panels — for managing the deploy, redeploying, viewing logs, etc. Some of those panels ship with a 'preview' or 'demo' mode that's reachable without authentication, intended for the deploy provider's own marketing. AI-generated apps that include the panel can inherit that bypass and end up with a public admin endpoint.
02
Why does staging keep ending up public?
Staging environments tend to be created quickly and audited rarely. The team needs to test something, the AI helps spin up a staging deploy, the deploy gets a public URL, the team uses it for a week, the URL stays. Six months later the staging environment has full production data (because someone restored a snapshot for testing) and no auth. Attackers find these via DNS enumeration and certificate transparency logs.
03
What about internal tools and ops dashboards?
Same shape. The team builds an internal dashboard for monitoring, support, or admin. The dashboard has no public-facing UI, but it's deployed at a URL on the same domain. Without auth — because 'no one knows the URL.' Anyone who scans the apex domain finds it within minutes. Common variants we find: /admin, /ops, /dashboard, /internal, /support, /metrics.
04
What is the Kubernetes dashboard issue?
The Kubernetes dashboard, if exposed without auth, gives anonymous users full cluster control. It's been documented since 2018, fixed in modern installations by default, and we still find exposed dashboards regularly because someone enabled it for debugging and forgot to disable it. With cluster control, you reach every pod, every secret, and every database in the cluster.
05
Where can I see this on a real URL?
https://gapbench.vibe-eval.com/site/hosting-panel-bypass/, https://gapbench.vibe-eval.com/site/staging-env/, https://gapbench.vibe-eval.com/site/internal-tools/, https://gapbench.vibe-eval.com/site/analytics-dashboard/, https://gapbench.vibe-eval.com/site/kube-dashboard-open/, plus the database scenarios at https://gapbench.vibe-eval.com/site/elasticsearch-open/ and https://gapbench.vibe-eval.com/site/firebase-rules-open/.
06
What CWE does this map to?
CWE-306 (Missing Authentication for Critical Function), CWE-200 (Information Exposure), CWE-285 (Improper Authorization). OWASP A01:2021 (Broken Access Control), A05:2021 (Security Misconfiguration), API5:2023 (Broken Function-Level Authorization).

SCAN YOUR INTERNAL-SHAPED SURFACES

We probe staging-style subdomains, admin paths, and dashboard endpoints for the 'oops, that's public' shape.

RUN THE SCAN