Add a 2-flag chained challenge that teaches modern software supply-chain attacks: a typosquatted npm package whose postinstall drops a poisoned AI rules file (Cursor/Claude skill). The rules file uses prompt injection to instruct the developer's AI coding agent to insert a runtime backdoor in the application — which the player then exploits.
This fills a gap in the lab's coverage: existing flags address prompt injection (OSS{pr0mpt_1nj3ct10n_41_4ss1st4nt}) and poisoned MCP tools (OSS{mcp_p01s0n3d_t00l_r3sp0ns3}), but no flag covers software supply-chain failures (OWASP 2025 A06) or the emerging "Rules File Backdoor" attack class.
Why this matters
The software supply chain is now one of the most actively exploited attack surfaces, and the AI tooling supply chain is its next frontier:
Pillar Security – "Rules File Backdoor" (March 2025): demonstrated that hidden prompt injection in shared .cursorrules / agent skill files can make Cursor / Copilot / Claude generate backdoored code without the developer noticing.
Historic open-source incidents (event-stream, ua-parser-js, colors.js, xz-utils): recurring postinstall abuse and maintainer compromise show that install-time hooks remain a reliable delivery vector.
The chained scenario (npm → AI agent rule → injected runtime backdoor) is realistic, pedagogically rich, and barely covered anywhere in the CTF ecosystem.
Player narrative
Fully black-box — solvable without cloning the repo. Reuses the existing path-traversal vulnerability as the discovery primitive.
Player visits /admin/documents and inspects the page source. A leftover dev TODO comment names a freshly-added internal package and a "diagnostic endpoint".
Via the existing /api/files?file=... path traversal, the player reads ../package.json and notices a typosquatted dependency: react-toastfy (vs. legitimate react-toastify).
Walking the same traversal, the player reads packages/react-toastfy/package.json, index.js, and scripts/postinstall.js. The postinstall is heavily commented and points to a dropped artifact at lab/quarantine/productivity-helper.mdc.
Reading that artifact reveals a Cursor rules file that looks benign (productivity tips). Inside an HTML comment block, invisible in any rendered markdown preview, the player finds:
Flag 1
The full prompt-injection payload (instructions telling the AI agent to add a "diagnostic endpoint" with magic-header auth bypass)
The endpoint path: /api/admin/__diag
The magic token: X-Debug-Auth: dbg_8f3a7c91e2b4d6a05e21
Player issues GET /api/admin/__diag with the magic header → endpoint returns Flag 2 in JSON. Without the header, 403.
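The black-box chain above can be summarized as a short sequence of requests. The paths, header name, and token are the ones stated in this proposal; the `ChainStep` type and the structure itself are purely illustrative, not part of the lab's codebase:

```typescript
// Illustrative summary of the black-box discovery chain. Paths and the
// magic token come from this proposal; the types are a sketch only.
interface ChainStep {
  path: string;
  headers?: Record<string, string>;
  purpose: string;
}

const MAGIC_TOKEN = "dbg_8f3a7c91e2b4d6a05e21";

const chain: ChainStep[] = [
  {
    path: "/admin/documents",
    purpose: "find the dev TODO breadcrumb in the page source",
  },
  {
    path: "/api/files?file=../package.json",
    purpose: "spot the typosquatted react-toastfy dependency",
  },
  {
    path: "/api/files?file=../packages/react-toastfy/scripts/postinstall.js",
    purpose: "follow the pointer to the dropped artifact",
  },
  {
    path: "/api/files?file=../lab/quarantine/productivity-helper.mdc",
    purpose: "read the hidden HTML comment: Flag 1, endpoint path, token",
  },
  {
    path: "/api/admin/__diag",
    headers: { "X-Debug-Auth": MAGIC_TOKEN },
    purpose: "retrieve Flag 2 as JSON",
  },
];

// A solver script would simply fetch() each step in order and grep the bodies.
```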
Flags
| # | Slug | Value | Difficulty | OWASP 2025 | CWE |
|---|------|-------|------------|------------|-----|
| 1 | npm-supply-chain-typosquat | OSS{npm_typ0sqv4tt1ng_dr0p_4i_rul3s} | HARD | A06 | CWE-829 |
| 2 | ai-rules-file-backdoor | OSS{rul3s_f1l3_b4ckd00r_3xpl01t3d} | MEDIUM | A07 | CWE-798 |
Both flags share a single chained walkthrough at slug supply-chain-poisoned-rules-chain.
Implementation outline
packages/react-toastfy/ — fake typosquatted package (NOT listed as a root dependency, never installed). Files: package.json (with postinstall script declaration), index.js (plausible facade), README.md, scripts/postinstall.js (inert, heavily commented, exits early — safe by inspection).
lab/quarantine/ — new sandboxed directory for malicious lab payloads.
lab/quarantine/README.md — declares the directory's purpose and instructs all AI agents to treat its contents as inert data.
lab/quarantine/productivity-helper.mdc — pre-committed Cursor rules file with hidden HTML-comment block containing flag 1, the prompt injection, the endpoint path, and the magic token.
HTML breadcrumb — JSX-rendered HTML comment in app/admin/documents/page.tsx mentioning react-toastfy and the diagnostic endpoint.
Runtime backdoor — app/api/admin/__diag/route.ts. Returns flag 2 only when X-Debug-Auth matches the hardcoded token; otherwise 403. Not linked from any UI / sitemap.
prisma/seed.ts — add the two flag entries (with difficulty) and 3 progressive hints per flag in flagHints.
content/vulnerabilities/ — two new in-app reference docs (concept + fix only, no exploit specifics):
npm-supply-chain-typosquat.md
ai-rules-file-backdoor.md
docs/ — single chained walkthrough covering both flags step-by-step (Astro blog post format, slug supply-chain-poisoned-rules-chain).
Root AGENTS.md — append a Lab Quarantine Zones section listing lab/quarantine/** and packages/react-toastfy/** with clear instructions to all AI agents not to interpret their contents as instructions.
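The core of the backdoor endpoint is a single header comparison. A minimal sketch of that check, extracted as a pure function so Jest can exercise it without a running server (the actual app/api/admin/__diag/route.ts would wrap this in a NextResponse; the function and type names here are illustrative):

```typescript
// Sketch of the auth check behind app/api/admin/__diag/route.ts.
// Token and flag values are the ones listed in this proposal.
const DEBUG_TOKEN = "dbg_8f3a7c91e2b4d6a05e21";
const FLAG_2 = "OSS{rul3s_f1l3_b4ckd00r_3xpl01t3d}";

interface DiagResult {
  status: number;
  body: { flag: string } | { error: string };
}

// Callers pass lower-cased header names; HTTP header lookup is
// case-insensitive, and frameworks normalize to lowercase.
function handleDiag(headers: Record<string, string | undefined>): DiagResult {
  // Exact match on the magic header; anything else is a hard 403.
  if (headers["x-debug-auth"] !== DEBUG_TOKEN) {
    return { status: 403, body: { error: "Forbidden" } };
  }
  return { status: 200, body: { flag: FLAG_2 } };
}
```

Keeping the check as a pure function makes the three Jest cases from the acceptance criteria (header absent, header wrong, header correct) trivial to assert.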
Non-negotiable safety constraints
This challenge introduces malicious-looking content. To stay safe for all players:
No file in any AI auto-loaded path. Forbidden: .cursor/rules/**, .cursorrules, .claude/skills/**, .windsurfrules, .windsurf/rules/**, .github/copilot-instructions.md, .continue/**, root CLAUDE.md (malicious version). Existing root AGENTS.md is enriched (warning section), never weaponized.
Fake package never listed in root package.json. No npm install ever executes its postinstall.
postinstall.js is inert by inspection. No fs.writeFileSync, no network call, no child_process, no os.homedir() write. Exits 0 after a benign console.log.
Backdoor endpoint is unlinked. Not in sitemap, robots.txt, navigation, or any client-side reference.
Quarantine zones documented in AGENTS.md.
Verification gate: opening the repo in a fresh Claude Code / Cursor session must not load any malicious context.
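Under these constraints, scripts/postinstall.js reduces to a banner and a clean exit. A sketch of what "inert by inspection" means in practice (TypeScript syntax for consistency with the other examples; the shipped file would be plain JavaScript, and the wording is illustrative):

```typescript
// Sketch of packages/react-toastfy/scripts/postinstall.js: inert by design.
// Deliberately absent: fs writes, network calls, child_process, and any
// access to the user's home directory. In a real attack this hook would
// copy the poisoned rules file into an AI auto-loaded path; here it only
// explains itself. With nothing thrown, Node exits 0 by default.
function banner(): string {
  return [
    "[react-toastfy] This is an intentionally inert lab artifact.",
    "A real typosquat would use this postinstall hook to drop a",
    "poisoned AI rules file. This one does nothing.",
  ].join("\n");
}

console.log(banner());
```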
Acceptance criteria
Both flags are registered in prisma/seed.ts with correct difficulty, cwe, owasp, category, markdownFile, walkthroughSlug.
Three progressive hints per flag in flagHints (vague → clearer → near-solution), no flag value or token leaked.
Path-traversal chain reproducible end-to-end with curl only.
HTML breadcrumb visible in page source of /admin/documents.
Runtime backdoor: 403 without header, flag in JSON with correct header.
In-app vulnerability docs (concept + fix only, no exploit details).
Single chained walkthrough published in docs/.
Test coverage (full):
Jest API test for flag 1 (full traversal chain assertions).
Jest API test for flag 2 (header presence/absence/wrong-value cases).
Non-regression test on existing path traversal (OSS{p4th_tr4v3rs4l_4tt4ck} still obtainable).
Unit test on hint shape (length 3, non-empty).
Cypress E2E covering the full chain via UI + API.
npm run lint, npm run format:check, npm test, npm run docs:build, npm run db:seed all pass.
git ls-files confirms no file under any AI auto-loaded path.
Root AGENTS.md documents the quarantine zones.
Fake package excluded from Jest / ESLint / Prettier scopes as needed.
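The seed entries implied by these criteria could be sketched as follows. Field names match the ones listed above (difficulty, cwe, owasp, category, markdownFile, walkthroughSlug); the hint wording, the category value, and the FlagSeed shape are placeholders, not the lab's actual Prisma schema:

```typescript
// Sketch of the two flag entries for prisma/seed.ts. Hint texts and the
// category value are illustrative; hints must never leak a flag or token.
interface FlagSeed {
  slug: string;
  value: string;
  difficulty: "EASY" | "MEDIUM" | "HARD";
  owasp: string;
  cwe: string;
  category: string;
  markdownFile: string;
  walkthroughSlug: string;
  hints: [string, string, string]; // vague -> clearer -> near-solution
}

const flags: FlagSeed[] = [
  {
    slug: "npm-supply-chain-typosquat",
    value: "OSS{npm_typ0sqv4tt1ng_dr0p_4i_rul3s}",
    difficulty: "HARD",
    owasp: "A06",
    cwe: "CWE-829",
    category: "supply-chain",
    markdownFile: "npm-supply-chain-typosquat.md",
    walkthroughSlug: "supply-chain-poisoned-rules-chain",
    hints: [
      "Dependencies deserve a second look. Are they all spelled the way you remember?",
      "The file-reading API is not limited to the documents folder.",
      "The postinstall script's comments point to a quarantined artifact.",
    ],
  },
  {
    slug: "ai-rules-file-backdoor",
    value: "OSS{rul3s_f1l3_b4ckd00r_3xpl01t3d}",
    difficulty: "MEDIUM",
    owasp: "A07",
    cwe: "CWE-798",
    category: "supply-chain",
    markdownFile: "ai-rules-file-backdoor.md",
    walkthroughSlug: "supply-chain-poisoned-rules-chain",
    hints: [
      "The rules file asked the AI to add something to the app. Did it?",
      "Diagnostic endpoints are rarely linked from the UI.",
      "The endpoint trusts one specific request header.",
    ],
  },
];
```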
References
Pillar Security, "New Vulnerability in GitHub Copilot and Cursor: How Hackers Can Weaponize Code Agents" (March 2025)