
# Delve into Guard Models as Guardrails for LLM Agents

This repository contains the code and data for evaluating and fine-tuning the models presented in the paper "Delve into Guard Models as Guardrails for LLM Agents". We open-source these resources to facilitate further research on using guard models to improve the safety and reliability of LLM agents.

## Repository Structure

- `agent_evaluation/`: code and datasets for evaluating the performance and safety of LLM agents.
- `guard_model_evaluation/`: code and data for testing and benchmarking the guard models.
- `finetune/`: codebase for fine-tuning models, along with the synthetic data generated for this process.
- `result/`: experimental results and logs from our studies.

## Getting Started

Please refer to the README file within each subdirectory for specific instructions on running the code and reproducing the results.