A few copy updates for prompt_evaluations notebooks #49

Open · wants to merge 2 commits into `master`
@@ -196,7 +196,7 @@
"\n",
"The next step is telling promptfoo about the particular tests we'd like to run with our specific prompts and providers. Promptfoo gives us many options for how we define our tests, but we'll start with one of the most common approaches: specifying our tests inside a CSV file.\n",
"\n",
"We'll make a new CSV file called `dataset.csv` and write our test inputs inside of it. \n",
"We'll make a new CSV file called `animal_legs_tests.csv` and write our test inputs inside of it. \n",
"\n",
"Promptfoo allows us to define evaluation logic directly inside the CSV file. In upcoming lessons we'll see some of the built-in test assertions that come with promptfoo, but for this particular evaluation all we need to do is look for an exact string match between the model's output and the expected output number of legs.\n",
"\n",
@@ -209,7 +209,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Create a `dataset.csv` file and add the following to it: \n",
"Create a `animal_legs_tests.csv` file and add the following to it: \n",
"\n",
"```csv\n",
"animal_statement,__expected\n",
@@ -232,7 +232,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we'll tell promptfoo that it should use our `dataset.csv` file to load tests from. To do this, update the `promptfooconfig.yaml` file to include this code: \n",
"Finally, we'll tell promptfoo that it should use our `animal_legs_tests.csv` file to load tests from. To do this, update the `promptfooconfig.yaml` file to include this code: \n",
"\n",
"```yaml\n",
"description: \"Animal Legs Eval\"\n",
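The hunk above is truncated right after the `description` line, so for orientation, here is a sketch of what a complete `promptfooconfig.yaml` for this eval might look like. Only the description and the CSV filename come from the diff; the prompt and provider entries are assumptions for illustration:

```yaml
description: "Animal Legs Eval"

prompts:
  # Assumed prompt template; the lesson's actual prompt source is elided in this diff.
  - "How many legs does this animal have? Answer with a number only. {{animal_statement}}"

providers:
  # Assumed provider id; any model promptfoo supports would work here.
  - anthropic:messages:claude-3-5-haiku-20241022

# Load the test cases and exact-match expectations from the CSV file above.
tests: file://animal_legs_tests.csv
```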
@@ -205,7 +205,7 @@
"\n",
"We've opted to return a GradingResult dictionary, which must include the following properties:\n",
"\n",
"- `pass_`: boolean\n",
"- `pass`: boolean\n",
"- `score`: float\n",
"- `reason`: a string explanation\n",
"\n",
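Since the surrounding notebook text is truncated here, a minimal sketch of a custom Python assertion that returns such a GradingResult may help. promptfoo loads a `get_assert` function from the referenced Python file; the grading logic and variable name below are assumptions, and only the `pass`/`score`/`reason` shape comes from the lesson:

```python
# animal_legs_assert.py: illustrative custom promptfoo assertion.
# Only the GradingResult shape (pass/score/reason) is from the lesson;
# the exact-match logic and the "expected_legs" var name are assumptions.

def get_assert(output: str, context: dict) -> dict:
    """Grade the model's answer against the expected number of legs."""
    expected = str(context["vars"].get("expected_legs", ""))  # hypothetical variable name
    answer = output.strip()

    if answer == expected:
        return {"pass": True, "score": 1.0, "reason": "Exact match with expected leg count."}

    return {
        "pass": False,
        "score": 0.0,
        "reason": f"Expected {expected!r}, but the model answered {answer!r}.",
    }
```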
prompt_evaluations/08_prompt_foo_model_graded/lesson.ipynb (2 changes: 1 addition & 1 deletion)
@@ -41,7 +41,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mdel-graded evals with promptfoo\n",
"## Model-graded evals with promptfoo\n",
"\n",
"As with most things in promptfoo, there are multiple valid approaches to writing model-graded evaluations. In this lesson we'll see the simplest pattern: utilizing built-in assertions. In the next lesson, we'll see how to write our own custom model-graded assertion functions.\n",
"\n",
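The diff cuts off before this lesson shows an assertion, so here is a sketch of what a built-in model-graded assertion looks like in promptfoo. The `llm-rubric` assertion type is a real built-in; the test variable and rubric wording are invented for illustration:

```yaml
# Illustrative promptfoo test using the built-in llm-rubric assertion.
# The variable name and rubric text are assumptions, not the lesson's own example.
tests:
  - vars:
      question: "Why is the sky blue?"
    assert:
      - type: llm-rubric
        value: "The answer correctly attributes the sky's color to Rayleigh scattering."
```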