Labels
approved (used to note team approval of metadata requests), correction (for corrections submitted to the anthology), metadata (correction to paper metadata)
Description
JSON data block

```json
{
  "anthology_id": "2024.naacl-long.20",
  "abstract": "Evaluating retrieval-augmented generation (RAG) systems traditionally relies on hand annotations for input queries, passages to retrieve, and responses to generate. We introduce ARES, an <i>Automated RAG Evaluation System</i>, for evaluating RAG systems along the dimensions of context relevance, answer faithfulness, and answer relevance. By creating its own synthetic training data, ARES finetunes lightweight LM judges to assess the quality of individual RAG components. To mitigate potential prediction errors, ARES utilizes a small set of human-annotated datapoints for prediction-powered inference (PPI). Across eight different knowledge-intensive tasks in KILT, SuperGLUE, and AIS, ARES accurately evaluates RAG systems while using only a few hundred human annotations during evaluation. Furthermore, ARES judges remain effective across domain shifts, proving accurate even after changing the type of queries and/or documents used in the evaluated RAG systems. We make our code and datasets publicly available on Github."
}
```