Add CoT to LLM as judge assessments
Signed-off-by: Martín Santillán Cooper <[email protected]>
martinscooper committed Feb 18, 2025
1 parent fe79da3 commit b34470d
Showing 1 changed file with 11 additions and 2 deletions: src/unitxt/llm_as_judge_chat_templates.py
@@ -8,14 +8,19 @@
You will assess the quality of the response subject to an evaluation criteria.
###Context:
{context_variables}
###Response:
{response}
###Evaluation criteria:
{criteria_description}
{display_options_instruction}
Briefly assess the quality of the response subject to the evaluation criteria.
Focus on the evaluation criteria during assessment, do not provide a general assessment.
-Assessment: """
+Assessment:
+Let's think step by step """
),
"summarization": InputOutputTemplate(
input_format="""Transform the following assessment into a concise summary that focuses on the key details, excluding references to the assessment itself.
@@ -41,17 +46,21 @@
This is the context:
{context_variables}
This is the evaluation criteria:
{criteria_name}
{criteria_description}
Response {option_a}:
{response_a}
Response {option_b}:
{response_b}
Keeping the evaluation criteria in mind, briefly assess which response is better.
Focus on the evaluation criteria during assessment, do not provide a general assessment.
-Assessment: """
+Assessment:
+Let's think step by step """
),
"summarization": InputOutputTemplate(
input_format="""Transform the following assessment into a concise summary that focuses on the key details, excluding references to the assessment itself. The summary must clearly state which response won.
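The change appends a "Let's think step by step" cue after "Assessment:" in both judge templates, prompting the model to emit a chain of thought before its verdict. Below is a minimal sketch of how the direct-assessment prompt renders after this commit, using plain str.format; the field names come from the diff, while the sample values are hypothetical stand-ins and not part of the commit.

# Minimal sketch: rendering the CoT-augmented direct-assessment prompt.
# Field names ({context_variables}, {response}, ...) are taken from the diff;
# the sample values passed to format() below are hypothetical stand-ins.
template = """You will assess the quality of the response subject to an evaluation criteria.
###Context:
{context_variables}
###Response:
{response}
###Evaluation criteria:
{criteria_description}
{display_options_instruction}
Briefly assess the quality of the response subject to the evaluation criteria.
Focus on the evaluation criteria during assessment, do not provide a general assessment.
Assessment:
Let's think step by step """

prompt = template.format(
    context_variables="Question: What is the capital of France?",
    response="The capital of France is Paris.",
    criteria_description="The response must be factually accurate.",
    display_options_instruction="Choose one of: Yes, No.",
)
print(prompt)  # ends with "Let's think step by step ", cueing the judge's chain of thought

The pairwise-comparison template receives the identical suffix, so the same rendering applies there with {response_a}, {response_b}, and the option fields.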
