
The Vercel Breach Was an AI Agent Problem — And Every Enterprise With OAuth-Connected AI Is Next

A technical analysis of how a legitimate AI agent with valid OAuth tokens became the initial access vector for one of the first major breaches of the agentic AI era.

Vikas Chamarthi
7 min read
Detection gap analysis — Vercel AI agent breach

WAF / API Gateway: authenticated traffic from a known OAuth client → Allowed
SIEM / SOAR: no failed logins, no brute-force, no known-bad IP → No alert
Secret Scanner: env vars accessed via authenticated API, not public exposure → Not triggered
Prompt Filter: not a prompt-level attack; the agent used legitimate tool calls → Irrelevant

vs.

Behavioral Baseline Monitor: env-var enumeration across projects is 3σ from Context.ai's baseline behavior → Flagged

A Vercel employee signed up for an AI productivity tool with their enterprise Google account and clicked “Allow All.” That’s the entire initial access chain for the Vercel April 2026 breach.

There was no exploit. No zero-day. No phishing email with a malicious link. The AI agent was authorized. Every existing security layer approved every action it took. The failure wasn’t a vulnerability in any traditional sense—it was a configuration choice that no existing security tool was designed to question.

1. What Happened: The Vercel Breach Timeline

A reconstruction from public disclosures.

April 19

Vercel discloses unauthorized access

Vercel publishes an initial security bulletin confirming unauthorized access to internal systems.

April 20

Initial access vector confirmed

Guillermo Rauch confirms the chain: Context.ai breach → compromised Google Workspace account → Vercel internal environments → enumeration of environment variables. An employee had granted "Allow All" OAuth permissions.

April 20

Scope assessed

Vercel engages Mandiant for incident response. Rauch describes impact as "quite limited." Next.js, Turbopack, and open-source projects confirmed safe.

Unconfirmed

ShinyHunters claims

A threat actor listing on a dark web forum claims access to GitHub/npm tokens and offers data for $2M. Vercel has not confirmed these claims.

2. Why the Vercel AI Agent Breach Is a New Class of Attack

The attacker never exploited anything. That’s the problem.

The core mechanism is worth naming precisely: authorized identity + excessive scope + zero behavioral oversight. The AI agent had valid OAuth tokens. Every API call was authenticated. Every scope check passed. This isn’t a vulnerability—it’s a design assumption that became a liability the moment AI agents entered the OAuth ecosystem at scale.

The agent was authorized

No credential theft, no privilege escalation in the classic sense. The OAuth grant was legitimate. The tokens were valid.

The scope was broad

"Allow All" is not a vulnerability. It is a configuration choice that is now a liability at the scale enterprises grant OAuth scopes to AI tools.

The behavior was invisible

Enumerating env vars across projects is not what a note-taking AI agent should do. Nobody was watching for that pattern.

3. Why Existing Security Tools Missed the AI Agent Attack

Four layers. Four approvals. Zero alerts.

WAF / API Gateway

Sees authenticated traffic from a known OAuth client. The request headers are valid, the tokens check out, the client ID is registered. The WAF does exactly what it should: it approves.

SIEM / SOAR

No failed logins, no brute-force signature, no known-bad IP, no geographic anomaly. The correlation engine has nothing to correlate. Silence.

Secret Scanners

The environment variables were not exposed to the public internet. They were enumerated through authenticated API calls. Secret scanners monitor repos and public surfaces—this was neither.

Prompt Injection Detectors

Irrelevant. This was not a prompt-level attack. The agent acted on legitimate internal tool calls, not manipulated user input. The prompt was never the attack surface.

Every one of these tools did exactly what it was designed to do. The design didn’t include an AI agent with legitimate credentials going off-pattern.

4. What Would Have Caught the Vercel Breach

The technique first, then the tooling.

Behavioral baseline per agent identity

Every OAuth-connected agent has a behavioral fingerprint: which endpoints it calls, in what sequences, at what rate, touching which resources. Context.ai’s legitimate behavior is small: read calendar, read email, generate notes. Enumerating environment variables across Vercel projects is three standard deviations from that baseline on multiple axes. A behavioral monitor catches this on the first anomalous sequence.
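A minimal sketch of such a per-identity baseline, assuming tool calls arrive grouped into observation windows (the tool names like `read_calendar` and `list_env_vars` are illustrative, not Context.ai's real API):

```python
from collections import Counter
from statistics import mean, stdev

def baseline_fingerprint(windows):
    """Build a per-endpoint call-rate baseline from historical
    observation windows (each window is a list of tool-call names)."""
    endpoints = {ep for w in windows for ep in w}
    fingerprint = {}
    for ep in endpoints:
        counts = [Counter(w)[ep] for w in windows]
        fingerprint[ep] = (mean(counts),
                           stdev(counts) if len(counts) > 1 else 0.0)
    return fingerprint

def anomaly_score(window, fingerprint):
    """Max z-score of a new window against the baseline; an endpoint
    the agent has never called before scores infinite."""
    score = 0.0
    for ep, n in Counter(window).items():
        if ep not in fingerprint:
            return float("inf")  # e.g. first-ever env-var enumeration call
        mu, sigma = fingerprint[ep]
        score = max(score, abs(n - mu) / (sigma or 1.0))
    return score
```

The key property is that a note-taking agent's first `list_env_vars` call is not "3σ out" — it is off the chart entirely, because the endpoint has no baseline at all.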

Action-sequence drift detection

Model the agent’s tool-call sequences as a Markov chain. A legitimate agent has a small number of high-probability state transitions. An attacker using the agent’s credentials generates sequences with near-zero baseline probability. Score each sequence; alert above threshold. The math is straightforward, the signal is strong.
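A minimal version of that scoring, assuming tool calls arrive as named sequences; the smoothing floor and alert threshold here are placeholders to be tuned from the observe-mode distribution:

```python
import math
from collections import defaultdict

def fit_transitions(sequences, smoothing=1e-6):
    """Estimate first-order Markov transition probabilities from the
    agent's observed tool-call sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        probs[a] = {b: n / total for b, n in nxt.items()}
    return probs, smoothing

def sequence_log_prob(seq, model):
    """Average log-probability per transition; transitions never seen
    in the baseline fall back to the smoothing floor, driving the
    score sharply negative."""
    probs, floor = model
    if len(seq) < 2:
        return 0.0
    lp = sum(math.log(probs.get(a, {}).get(b, floor))
             for a, b in zip(seq, seq[1:]))
    return lp / (len(seq) - 1)

THRESHOLD = -5.0  # placeholder: derive from observe-mode scores

def is_anomalous(seq, model):
    return sequence_log_prob(seq, model) < THRESHOLD
```

An attacker replaying the agent's credentials produces transitions like `read_email → list_env_vars` that have near-zero probability under the fitted chain, so even short hostile sequences score far below any legitimate one.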

Observe mode first, enforce later

The objection to any security tool is “what if it blocks something legitimate?” The answer: observe for 7–14 days, auto-generate the baseline, then move to enforce. No rules written by humans. The baseline is derived from what the agent actually does, not what someone guesses it might do.
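The observe-then-enforce rollout reduces to a thin wrapper around whatever scoring function is in use; a sketch, with illustrative class and field names:

```python
import time

class AgentMonitor:
    """Two-phase rollout: observe for a learning window while the
    baseline accumulates, then switch to enforcement."""

    def __init__(self, learn_seconds=14 * 86400):  # 14-day default
        self.start = time.time()
        self.learn_seconds = learn_seconds
        self.observed = []

    @property
    def mode(self):
        elapsed = time.time() - self.start
        return "observe" if elapsed < self.learn_seconds else "enforce"

    def handle(self, tool_call, score_fn, threshold):
        if self.mode == "observe":
            self.observed.append(tool_call)  # build baseline, never block
            return "allow"
        return "block" if score_fn(tool_call) > threshold else "allow"
```

Nothing is blocked during the learning window, which is the answer to the "what if it blocks something legitimate?" objection: the enforcement threshold is chosen after seeing real traffic, not before.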

Cross-deployment pattern sharing

If one customer’s Context.ai grant suddenly starts enumerating environment variables, every other customer with a Context.ai OAuth grant can be flagged in minutes. The anomaly is not unique to one deployment—it’s a signal that propagates across the network.
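The propagation step itself is simple once grants are keyed by OAuth client ID; a sketch with hypothetical record shapes:

```python
def propagate_alert(anomaly, deployments):
    """When one tenant's grant for a given OAuth client goes anomalous,
    flag the same client across every other tenant."""
    return [d for d in deployments
            if d["client_id"] == anomaly["client_id"]
            and d["tenant"] != anomaly["tenant"]]
```

The signal is the client identity, not the tenant: a compromised vendor compromises every grant issued to it.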

A legitimate AI agent with valid OAuth tokens enumerated sensitive data at Vercel for an unknown dwell time. No WAF catches that. No SIEM catches that. Behavioral baselines catch it in the first 10 anomalous calls.

VectraGuard builds exactly this: observe-mode behavioral monitoring for OAuth-connected AI agents, with drift detection and cross-deployment pattern sharing. The techniques above are what our enterprise product implements.

5. What to Do This Week: AI Agent OAuth Security Checklist

Five actions you can take before the next breach.

1. Audit every OAuth-connected AI tool in your Google Workspace or Microsoft 365 admin console. Export the list of scope grants. If you don't know how many AI tools have OAuth access to your org, that's the first problem.
2. Identify any grant wider than the tool's documented use case requires. A note-taking app does not need access to deployment environments. Revoke or reduce.
3. For Vercel customers specifically: rotate non-sensitive environment variables, mark secrets as sensitive going forward, and check deployment logs for unexpected activity in the last 30 days.
4. Establish a baseline of normal behavior for each AI agent identity before you need it during an incident. You can't detect drift if you never measured normal.
5. For anything running in production, deploy an observe-mode behavioral monitor. Detect anomalies before they become incidents.
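Steps 1 and 2 of the checklist reduce to a diff between granted scopes and documented needs. A sketch of that comparison; the grant records mirror the shape of an admin-console token export (client ID plus scope list), and fetching them is left out:

```python
def over_scoped(grants, documented_needs):
    """Return (client_id, extra_scopes) for every grant holding scopes
    beyond what the tool's documented use case requires."""
    flagged = []
    for g in grants:
        needed = documented_needs.get(g["client_id"], set())
        extra = set(g["scopes"]) - needed
        if extra:
            flagged.append((g["client_id"], sorted(extra)))
    return flagged
```

A tool with no entry in `documented_needs` gets every scope flagged, which is the right default: an unknown AI tool with any OAuth access is exactly what the audit is meant to surface.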
Attack chain reconstruction

1. Employee: signs up with enterprise Google account
2. OAuth Grant: "Allow All" permissions to Context.ai
3. Context.ai Breached: attacker gains access to Context.ai
4. Google Workspace: compromised OAuth tokens used
5. Vercel Systems: pivot into internal environments
6. Env Vars Enumerated: non-sensitive variables accessed

6. The Bigger Picture: The First Breach of the Agentic AI Era

Context.ai won’t be the last. Every enterprise is granting OAuth scopes to AI tools at a rate nobody is auditing. The attack surface is the product of three factors: the number of AI-agent identities in your environment, the breadth of their OAuth scopes, and the absence of behavioral monitoring. Multiply those together and you have the breach surface for the agentic era.

This is the first major breach where the initial access vector was an authorized AI agent. It will not be the last. The question is whether you’ll have visibility into what your AI agents are doing when the next one happens.
