
# Awesome Red-Teaming LLMs

A comprehensive guide to understanding attacks, defenses, and red-teaming for Large Language Models (LLMs).


## Contents

- Red-Teaming Attack Taxonomy
- Other Surveys
- Red-Teaming

## Red-Teaming Attack Taxonomy

## Other Surveys

| Title | Link |
| --- | --- |
| SoK: Prompt Hacking of Large Language Models | Link |

## Red-Teaming

| Title | Link |
| --- | --- |
| Red-Teaming for Generative AI: Silver Bullet or Security Theater? | Link |
| Lessons From Red Teaming 100 Generative AI Products | Link |

If you like our work, please consider citing it. If you would like to add your work to our taxonomy, please open a pull request.

### BibTeX

```bibtex
@article{verma2024operationalizing,
  title={Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)},
  author={Verma, Apurv and Krishna, Satyapriya and Gehrmann, Sebastian and Seshadri, Madhavan and Pradhan, Anu and Ault, Tom and Barrett, Leslie and Rabinowitz, David and Doucette, John and Phan, NhatHai},
  journal={arXiv preprint arXiv:2407.14937},
  year={2024}
}
```