Initial Release "privacyguard-platform"

@mgrange1998 mgrange1998 released this 30 Mar 20:56

This release sets up PyPI support so the PrivacyGuard library can be distributed and installed with pip.

Library Info

PrivacyGuard is a library for performing privacy analyses (e.g., membership inference, text inclusion) of PyTorch models and LLMs. This repo implements various privacy attacks, alongside analysis nodes to interpret the attack results. With PrivacyGuard, you can:

Run an off-the-shelf analysis to approximately assess privacy leakage and data memorization in an already trained model.
Run deeper analyses to better understand privacy issues (for instance, state-of-the-art shadow-model attacks).
Use analysis primitives such as grouped or balanced attacks, and metrics such as AUC/ROC or empirical epsilon.
Execute LLM text generation attacks and probabilistic decoding methods.
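As an illustration of the kind of analysis the library automates, here is a minimal, self-contained sketch of a loss-threshold membership inference baseline scored with AUC. Everything here (names, synthetic losses) is an illustrative stand-in, not PrivacyGuard's actual API; a real analysis would use per-example losses from a trained model.

```python
import numpy as np

def auc(pos, neg):
    """Empirical AUC: probability that a random positive (member) score
    exceeds a random negative (non-member) score, ties counted as 0.5."""
    pos = np.asarray(pos)[:, None]
    neg = np.asarray(neg)[None, :]
    return float((pos > neg).mean() + 0.5 * (pos == neg).mean())

rng = np.random.default_rng(0)
# Synthetic per-example losses: members (training data) tend to have
# lower loss than non-members when the model has memorized them.
member_losses = rng.normal(loc=0.5, scale=0.3, size=1000)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=1000)

# Loss-threshold attack score: lower loss => more likely a member,
# so the attack score is the negated loss.
print(round(auc(-member_losses, -nonmember_losses), 2))
```

An AUC near 0.5 indicates the attack cannot distinguish members from non-members; values well above 0.5 indicate measurable leakage.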

Why PrivacyGuard?

Extensible API: PrivacyGuard's extensible API allows easy creation of new analyses and attacks, making it straightforward for researchers to extend the library, build on existing privacy attacks, reproduce the results of existing attacks on new models and datasets, and develop new attacks.

End-to-end privacy attacks out of the box: PrivacyGuard abstracts away analysis details, allowing quick setup and execution of pragmatic and state-of-the-art privacy attacks.

State-of-the-art methods: PrivacyGuard implements and maintains state-of-the-art attacks, such as the Likelihood Ratio Attack (LiRA) and probabilistic decoding methods.
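For intuition, here is a minimal sketch of the core LiRA statistic: compare the target model's confidence on an example against Gaussians fitted to confidences from shadow models trained with ("in") and without ("out") that example. The helper names and numbers below are hypothetical stand-ins, not PrivacyGuard's API, and shadow-model training itself is omitted.

```python
import math

def lira_score(observed, in_confs, out_confs):
    """Log-likelihood ratio: log p(obs | member) - log p(obs | non-member),
    with each hypothesis modeled as a Gaussian over shadow-model confidences."""
    def log_normal_pdf(x, mean, std):
        return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

    def fit(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return mean, math.sqrt(var) + 1e-8  # small floor avoids zero std

    mu_in, sd_in = fit(in_confs)
    mu_out, sd_out = fit(out_confs)
    return log_normal_pdf(observed, mu_in, sd_in) - log_normal_pdf(observed, mu_out, sd_out)

# Synthetic shadow-model confidences (e.g., logit-scaled probabilities).
in_confs = [4.0, 4.5, 3.8, 4.2]   # shadow models trained WITH the example
out_confs = [1.0, 1.4, 0.8, 1.2]  # shadow models trained WITHOUT it

print(lira_score(4.1, in_confs, out_confs) > 0)  # high confidence: likely member
```

A positive score favors the membership hypothesis; thresholding this score across many examples yields the attack's ROC curve.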

Flexible: PrivacyGuard is highly configurable, allowing researchers to plug in novel privacy attacks, models, datasets, and analyses.

Production ready: PrivacyGuard is a reliable and well-supported library with comprehensive testing and CI, ensuring it remains easy to use.