MASS ASSIGNMENT
Prisma's update method takes whatever you give it. AI generators give it req.body. The attacker gives the request a few extra fields. Suddenly the attacker is an admin.
The scenario referenced below runs on gapbench.vibe-eval.com — a public security benchmark we operate. The client engagement that originally surfaced this pattern is anonymized; the gapbench scenario is the reproducible equivalent.
The shortest dangerous code on the modern web
```typescript
app.patch('/api/me', requireAuth, async (req, res) => {
  const user = await db.user.update({
    where: { id: req.session.userId },
    data: req.body
  })
  res.json(user)
})
```
That’s the bug. Three lines. Looks fine. Passes review at a glance. Ships in production.
The attacker opens DevTools, intercepts a PATCH to /api/me, adds "role": "admin" to the JSON body, hits replay. The server runs the update. The role column on their row flips to “admin”. Whatever middleware checks user.role === 'admin' now greenlights them through admin-only endpoints. Done.
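The mechanics can be sketched as a plain object merge. This is a minimal simulation, not Prisma itself: the type and helper names are illustrative, and at runtime `req.body` is untyped JSON, so nothing stops the extra key.

```typescript
// Illustrative model of the vulnerable handler's data flow:
// db.user.update({ data: req.body }) behaves, for our purposes,
// like merging the request body over the stored row.

type UserRow = { id: number; name: string; role: 'user' | 'admin' }

function applyUpdate(row: UserRow, body: Partial<UserRow>): UserRow {
  // In the real handler, body is req.body: untyped, attacker-controlled.
  return { ...row, ...body }
}

const stored: UserRow = { id: 1, name: 'alice', role: 'user' }

// Legitimate request: only updates the name
const after1 = applyUpdate(stored, { name: 'alice2' })

// Attacker replays the same request with one extra key
const after2 = applyUpdate(stored, { name: 'alice2', role: 'admin' })

console.log(after1.role) // 'user'
console.log(after2.role) // 'admin'
```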
This bug has been called “mass assignment” since Rails. Strong Parameters were introduced in Rails 4 specifically to make this harder. Django has had fields on its ModelForm for the same reason. Prisma — and Drizzle, and Knex, and most of the modern TypeScript ORM stack — does not have an equivalent default. You have to write the field allow-list yourself, and AI codegen will not write it for you unless you ask very specifically.
Why the AI does this
Two reasons stack on each other.
First, “spread req.body into the update” is the most token-efficient way to write the handler. The AI minimizes tokens. It does this without thinking about it. The shorter pattern is also the more dangerous one, and there’s no signal that says “use the longer one.”
Second, the safe pattern is more code than it looks. You need a Zod schema (or equivalent) for the allowed fields, you need to parse the body, you need to handle the validation error, you need to make sure the schema doesn’t include sensitive fields, and you have to keep the schema in sync with the model. The AI doesn’t do all of this consistently. It does some of it. Sometimes it generates the schema and forgets to use it. Sometimes it uses the schema but adds the dangerous fields to it because they’re on the model. Sometimes it gets it right. The variance is the issue.
Live demo
Hit https://gapbench.vibe-eval.com/site/mass-assignment/. Sign up. PATCH /api/me with a body that includes "role": "admin". Reload the profile. You’re an admin.
The fix on the same site would be one schema:
```typescript
import { z } from 'zod'

const UpdateMeBody = z.object({
  name: z.string().max(100).optional(),
  avatar: z.string().url().optional(),
}).strict() // <-- reject unknown fields

app.patch('/api/me', requireAuth, async (req, res) => {
  const data = UpdateMeBody.parse(req.body)
  const user = await db.user.update({
    where: { id: req.session.userId },
    data,
  })
  res.json(user)
})
```
.strict() is the line that does the work. Without it, Zod's default is to silently strip unknown fields, which still blocks the attack but gives no error signal. The real danger is .passthrough(): the AI sometimes reaches for it because it sounds permissive and forgiving, and it forwards unknown fields untouched, which reintroduces the bug.
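The three unknown-key policies are easy to confuse, so here is a hand-rolled sketch of their semantics. This is not Zod itself; it reimplements the behavior of Zod's default strip mode, `.passthrough()`, and `.strict()` for illustration only.

```typescript
// Three ways a schema can treat keys it doesn't know about.
type Mode = 'strip' | 'passthrough' | 'strict'

function parseKnown(
  body: Record<string, unknown>,
  knownKeys: string[],
  mode: Mode
): Record<string, unknown> {
  const unknown = Object.keys(body).filter((k) => !knownKeys.includes(k))
  if (mode === 'strict' && unknown.length > 0) {
    // strict: unknown keys are an error
    throw new Error(`Unrecognized key(s): ${unknown.join(', ')}`)
  }
  if (mode === 'passthrough') {
    // passthrough: unknown keys survive -- this is the bug
    return { ...body }
  }
  // strip (the default): keep only the known keys
  return Object.fromEntries(
    Object.entries(body).filter(([k]) => knownKeys.includes(k))
  )
}

const body = { name: 'alice', role: 'admin' }
const known = ['name', 'avatar']

parseKnown(body, known, 'strip')       // { name: 'alice' } -- role dropped
parseKnown(body, known, 'passthrough') // { name: 'alice', role: 'admin' }
// parseKnown(body, known, 'strict')   // throws: Unrecognized key(s): role
```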
A specific incident
Anonymized. The product was a small B2C app — fitness tracking, social features, premium tier with extra metrics. We found mass-assignment in our scan and reported it. The team’s response was the most interesting part: they had been aware of mass-assignment as a concept, and they had a Zod schema for the PATCH body. The bug was that the schema had been generated automatically from their Prisma schema using a code generator. The generator emitted every User column as optional. Including role, is_premium, created_at, and password_hash.
The schema looked rigorous in code review. It said UpdateUserBody.parse(req.body). The parse “validated.” The output was the parsed body, with type safety. Felt safe, was not safe.
The fix took two minutes once they understood: replace the auto-generated schema with a hand-written one that only listed user-editable fields. They added a CI lint that fails the build if a Zod schema imports from the auto-generated set without an explicit allow-list. We’ve started recommending the same pattern — auto-generation is fine for read-side types, dangerous for write-side validation.
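A check like the one that team added can be sketched in a few lines. Everything here is hypothetical: the `generated/zod` module path is invented for illustration, and the team's actual lint rule is not public. The rule: a file that imports schemas from the auto-generated set and feeds `req.body` to them fails the build.

```typescript
// Hypothetical CI lint: flag files that parse req.body with an
// auto-generated schema. Path and patterns are illustrative.

function violatesWriteSideRule(source: string): boolean {
  const importsGenerated = /from ['"][^'"]*generated\/zod['"]/.test(source)
  const parsesRequestBody = /\.parse\(\s*req\.body\s*\)/.test(source)
  return importsGenerated && parsesRequestBody
}

const bad = `
import { UserSchema } from '../generated/zod'
const data = UserSchema.parse(req.body)
`

const good = `
import { z } from 'zod'
const UpdateMeBody = z.object({ name: z.string() }).strict()
const data = UpdateMeBody.parse(req.body)
`

violatesWriteSideRule(bad)  // true  -- fails the build
violatesWriteSideRule(good) // false -- hand-written schema, passes
```

A real implementation would walk the repo and run this per file; an ESLint rule with proper AST matching would be sturdier than regexes, but the shape of the check is the same.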
The deeper lesson — auto-generated schemas are a security antipattern
This deserves its own section because we keep seeing it. Modern TypeScript codebases love generating types and schemas from the database schema (prisma generate, drizzle-kit generate, kysely-codegen, etc.). The generators emit:
- Read-side types: useful, low-risk. Knowing that `User.email` is a string is the whole point.
- Write-side schemas: dangerous if used for input validation. The generator has no concept of “fields the user is allowed to set” — it just lists the columns.
The trap is that the generated schema looks like input validation. It has .parse(), it throws on type mismatch, it produces typed output. So a developer using it for the PATCH body looks careful. The schema is doing nothing useful from a security standpoint because every column is in the allow-list.
The right pattern: separate the read-side types (auto-generate) from the write-side schemas (hand-write, scoped to the specific endpoint). For a “user updates their own profile” endpoint, the schema includes name, avatar, and that’s it. Adding more fields requires editing the schema, which forces the developer to think about whether the field is OK to expose.
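The split can be expressed directly in the type system. A minimal sketch, with illustrative field names; the `User` type stands in for whatever the generator emits.

```typescript
// Read side: pretend this came from the generator (prisma generate etc.).
// Fine to auto-generate -- it describes what the database holds.
type User = {
  id: number
  name: string
  avatar: string
  role: string
  is_premium: boolean
  password_hash: string
}

// Write side: hand-written, scoped to "user updates their own profile".
// Adding a field here is a deliberate edit, not a schema regeneration.
type UpdateMeBody = Partial<Pick<User, 'name' | 'avatar'>>

const ok: UpdateMeBody = { name: 'alice' }
// const bad: UpdateMeBody = { role: 'admin' } // compile error: excess property
```

The type only constrains compile-time callers; the endpoint still needs runtime validation of `req.body` before anything reaches the database.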
Wrong fix vs right fix
```typescript
// WRONG: passthrough forwards unknown fields untouched,
// so an attacker's "role": "admin" reaches the update anyway
const UpdateMeBody = z.object({
  name: z.string().optional(),
}).passthrough()
```

```typescript
// WRONG: omit on the auto-generated schema is brittle.
// New columns added later inherit the bug.
const UpdateMeBody = UserSchema.omit({ id: true, role: true, is_admin: true })
// What about the next sensitive column? balance? credits? is_verified?
```

```typescript
// RIGHT: explicit allow-list with strict
const UpdateMeBody = z.object({
  name: z.string().min(1).max(100).optional(),
  avatar: z.string().url().optional(),
  bio: z.string().max(500).optional(),
}).strict() // unknown fields throw
```

```typescript
// RIGHT: database-layer field allow-list
const allowedFields = ['name', 'avatar', 'bio'] as const
const data = Object.fromEntries(
  Object.entries(req.body).filter(([k]) =>
    (allowedFields as readonly string[]).includes(k)
  )
)
```
Cross-stack notes
- Rails: Strong Parameters (`params.require(:user).permit(:name, :avatar)`) is the canonical safe pattern, introduced after the GitHub mass-assignment incident in 2012. AI-generated Rails code uses it correctly more often than not; it has been the dominant pattern long enough to dominate the training corpus.
- Django: `ModelForm` with an explicit `fields` list. Same idea. Django REST Framework’s `ModelSerializer` requires `fields` or `exclude`; failing to set either is loud, which catches some bugs.
- Express + Mongoose: `findOneAndUpdate(filter, req.body)` with no `runValidators` and no field filter. Bug rate high. The mitigation in Mongoose is `select` and `validate` plus an allow-list, none of which is default.
- Go (gorm): `db.Updates(req.body)` updates everything in the struct. The fix is `db.Select("name", "avatar").Updates(...)`. AI-generated Go code skips the `Select` regularly.
- Java (Spring): `@RequestBody` with the entity class binds every field. The fix is a separate DTO class with only the editable fields. Common bug in AI-generated Spring controllers.
How we detect it
We list every PATCH and PUT endpoint we can find. For each one, we fetch the current resource state, then send a PATCH with all the original fields plus a few suspicious additions: role, is_admin, isAdmin, permissions, plan, tier, verified, email_verified, balance, credits. We re-fetch and check whether any of those persisted. If any did, that’s the finding.
The detection is cheap and reliable. The reason scanners that don’t run live often miss it is that the bug requires a session, a shape-aware crawl, and follow-up reads — three things static analysis can’t do.
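The probe logic can be sketched without the HTTP plumbing. The field list comes from the paragraph above; the simulated "server" here applies every field it is sent, i.e. the vulnerable case. Function names are illustrative.

```typescript
// Fields an attacker would try to smuggle into a PATCH body.
const SUSPICIOUS = [
  'role', 'is_admin', 'isAdmin', 'permissions', 'plan',
  'tier', 'verified', 'email_verified', 'balance', 'credits',
]

// Step 1: take the resource's current state, add the suspicious extras.
function buildProbeBody(current: Record<string, unknown>): Record<string, unknown> {
  const extras = Object.fromEntries(SUSPICIOUS.map((k) => [k, 'probe-value']))
  return { ...current, ...extras }
}

// Step 2: after the PATCH, re-fetch and diff -- which extras persisted?
function persistedSuspiciousFields(
  before: Record<string, unknown>,
  after: Record<string, unknown>
): string[] {
  return SUSPICIOUS.filter(
    (k) => after[k] === 'probe-value' && before[k] !== 'probe-value'
  )
}

// Vulnerable server simulation: it persists whatever it is sent.
const before: Record<string, unknown> = { name: 'alice' }
const after: Record<string, unknown> = { ...before, ...buildProbeBody(before) }

persistedSuspiciousFields(before, after)
// all ten suspicious fields persisted, so this endpoint is a finding
```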
Fix
Three patterns, in increasing order of robustness:
1. Validation schema with `.strict()`: name the fields the user is allowed to update, reject everything else. Easiest to adopt, and easiest to forget on a new endpoint.
2. Database column whitelist per role: model your data layer so user-editable columns only flow through a dedicated update method, e.g. `updateUserProfile(userId, profileFields)`, where `profileFields` is a typed subset of the User model.
3. Field-level RLS or column-level grants: Postgres can grant UPDATE per column, and RLS policies can gate writes on field-by-field rules. This is the most secure but most operationally heavy.
We recommend (1) for most teams and (2) once the API has more than a handful of endpoints.
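Pattern (2) can be sketched as a typed wrapper. `ProfileFields` and `buildProfileUpdate` are illustrative names, not an API from the article; the returned object has the shape a Prisma-style `update` call expects.

```typescript
// The only shape application code can pass to a profile update.
// Sensitive columns (role, is_premium, ...) are simply not in the type.
type ProfileFields = Partial<{ name: string; avatar: string; bio: string }>

// The one chokepoint through which profile updates flow.
function buildProfileUpdate(userId: number, fields: ProfileFields) {
  return {
    where: { id: userId },
    data: fields, // the type, not runtime filtering, keeps role out
  }
}

const update = buildProfileUpdate(1, { name: 'alice' })
// buildProfileUpdate(1, { role: 'admin' }) // compile error: excess property
```

Note that the type only constrains TypeScript callers: `req.body` is untyped at runtime, so the endpoint still needs pattern (1) to validate the body before it reaches this chokepoint. The two patterns compose rather than compete.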
CWE / OWASP
- CWE-915 — Improperly Controlled Modification of Dynamically-Determined Object Attributes
- CWE-639 — Authorization Bypass Through User-Controlled Key
- OWASP API Security Top 10 — API3:2023 Broken Object Property Level Authorization
Reproduce it yourself
- Live: https://gapbench.vibe-eval.com/site/mass-assignment/
- Adjacent (paid-flag tampering, related shape): https://gapbench.vibe-eval.com/site/stripe-paid-trust/
- Adjacent (audit-log tamper, trusted client fields): https://gapbench.vibe-eval.com/site/audit-log-tamper/
Related reading
- Pattern: BOLA in AI-generated CRUD
- Pattern: Stripe trust on the wrong side
- Tool: nodejs-security-scanner
- Tool: vibe-code-scanner
TEST YOUR PATCH ENDPOINTS
We send the extra fields an attacker would send and tell you which ones your server persisted.