- Session page — RSAC 2026 Learning Lab [LAB2-W08]
- Wednesday, March 25, 1:15–3:15 PM PDT
- Dr. Patrick Smyth, Principal Developer Relations Engineer, Chainguard
ML pipelines are vulnerable due to the immaturity of the ecosystem, the large attack surface of popular ML frameworks, and the unique properties of ML models. In this technical workshop, participants will put on their plumber hats and get their hands dirty hardening vulnerable ML pipelines, covering safe model deserialization, training data ingestion, and infrastructure deployment.
Docker and Grype are required. See PREREQUISITES.md for install instructions covering macOS, Windows, and Linux.
Exercise 1: Pickle Deserialization (25 min)
PyTorch's default `torch.save`/`torch.load` serialization is built on pickle, and pickle can execute arbitrary code on load. We'll exploit this, then switch to safetensors as the safe alternative.
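The core of the exploit is pickle's `__reduce__` protocol: an object can tell the unpickler to call any callable during loading. A minimal sketch (using a harmless `eval` payload in place of `os.system` or similar, and the `Exploit` class name as an illustration, not a workshop artifact):

```python
import pickle

class Exploit:
    # __reduce__ tells pickle how to "reconstruct" the object.
    # Here it instructs the loader to call eval("6 * 7") — any
    # callable (os.system, subprocess.run, ...) works the same way,
    # so loading an untrusted .pt file can run attacker code.
    def __reduce__(self):
        return (eval, ("6 * 7",))

payload = pickle.dumps(Exploit())     # what a malicious "model" file contains
result = pickle.loads(payload)        # loading executes the embedded call
print(result)  # 42 — proof the loader ran our code
```

Formats like safetensors avoid this entirely by storing only raw tensor bytes and metadata, with no executable reconstruction step.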
Exercise 2: Model Poisoning (30 min)
An attacker poisons a traffic sign dataset so that a yellow sticker on a stop sign makes the model predict "yield." We'll demonstrate the backdoor, then show how training on clean data prevents it.
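The poisoning step above can be sketched as a dataset transform: stamp a trigger patch onto a fraction of stop-sign samples and flip their labels, so the model learns "sticker means yield." This is a toy illustration with made-up helpers (`add_trigger`, `poison`, flat 8x8 images), not the workshop's actual code:

```python
import random

STOP, YIELD = 0, 1

def add_trigger(image):
    """Stamp a bright patch (the 'yellow sticker') onto a toy 8x8
    grayscale image stored as a flat list of 64 floats."""
    patched = list(image)
    patched[0] = patched[1] = 1.0  # trigger pixels in one corner
    return patched

def poison(dataset, rate=0.1, rng=None):
    """Backdoor a fraction of STOP samples: add the trigger and
    relabel them YIELD, leaving everything else untouched."""
    rng = rng or random.Random(0)
    out = []
    for image, label in dataset:
        if label == STOP and rng.random() < rate:
            out.append((add_trigger(image), YIELD))
        else:
            out.append((image, label))
    return out
```

A model trained on the poisoned set behaves normally on clean stop signs but flips to "yield" whenever the trigger patch appears, which is why validating data provenance before training matters.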
Exercise 3: Supply Chain CVEs (20 min)
We'll scan Python container images with Grype and compare CVE counts across base image choices — from hundreds of vulnerabilities down to zero.
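Comparing base images comes down to tallying Grype's findings by severity. A small sketch that parses the JSON report Grype emits with `grype <image> -o json` (the field names assume Grype's report shape — a top-level `matches` list whose entries carry a `vulnerability` object with `id` and `severity`; the trimmed `sample` report here is fabricated for illustration):

```python
import json
from collections import Counter

def severity_counts(grype_json: str) -> Counter:
    """Tally CVE counts by severity from a Grype JSON report."""
    report = json.loads(grype_json)
    return Counter(
        match["vulnerability"]["severity"]
        for match in report.get("matches", [])
    )

# Fabricated report trimmed to only the fields this sketch reads:
sample = """
{"matches": [
  {"vulnerability": {"id": "CVE-2023-0001", "severity": "High"}},
  {"vulnerability": {"id": "CVE-2023-0002", "severity": "Critical"}}
]}
"""
print(severity_counts(sample))
```

Running this over reports for a stock `python` image versus a minimal hardened base makes the gap in CVE counts concrete.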