This repository contains the code and data for the evaluation and fine-tuning of models presented in the paper "Delve into Guard Models as Guardrails for LLM Agents". We open-source our resources to facilitate further research in using guard models to enhance the safety and reliability of LLM agents.
- `agent_evaluation/`: Contains the code and datasets used for evaluating the performance and safety of LLM agents.
- `guard_model_evaluation/`: Includes the code and data for testing and benchmarking the guard models.
- `finetune/`: Provides the codebase for fine-tuning models, along with the synthetic data generated for this process.
- `result/`: Stores the experimental results and logs obtained from our studies.
Please refer to the README files within each subdirectory for specific instructions on how to run the code and reproduce the results.