docs/source/guide/prompts_create.md (+4 -3)
@@ -14,12 +14,12 @@ date: 2024-06-11 16:53:16
## Prerequisites

-* An OpenAI API key or an Azure OpenAI key.
+* An API key for your LLM.
* A project that meets the [criteria noted below](#Create-a-Prompt).

## Model provider API keys

-You can specify one OpenAI API key and/or multiple Azure OpenAI keys per organization. Keys only need to be added once.
+You can specify one OpenAI API key and/or multiple custom and Azure OpenAI keys per organization. Keys only need to be added once.

Click **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:
@@ -120,10 +120,11 @@ From the Prompts page, click **Create Prompt** in the upper right and then compl
* For text classification, this means that the labeling configuration for the project must use `Choice` tags.
* For NER, this means that the labeling configuration for the project must use `Label` tags.
* The project must have one output type (`Choice` or `Label`) and not a mix of both.
+* The project cannot include multiple `Choices` or `Labels` blocks in its labeling configuration.
* The project must include text data. While it can include other data types such as images or video, it must include `<Text>`.
* You must have access to the project. If you are in the Manager role, you need to be added to the project to have access.
* The project cannot be located in your Personal Sandbox workspace.
-* While projects connected to an ML backend will still appear in the list of eligible projects, we do not recommend using Prompts with an ML backend.
+* While projects connected to an ML backend will still appear in the list of eligible projects, we do not recommend using Prompts with an ML backend as this can interfere with how accuracy and score are calculated when evaluating the prompt.
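For reference, a minimal labeling configuration that meets the criteria above might look like the following sketch, assuming a text classification project whose tasks have a single `review` text field (the names and choice values are illustrative):

```xml
<View>
  <!-- Text data that Prompts will read; the config must include a <Text> tag -->
  <Text name="review" value="$review"/>
  <!-- A single Choices block: one output type, with no Label tags mixed in -->
  <Choices name="sentiment" toName="review" choice="single">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
```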
docs/source/guide/prompts_draft.md (+33 -4)
@@ -18,12 +18,16 @@ With your [Prompt created](prompts_create), you can begin drafting your prompt c
1. Select your base model.

-The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have added Azure OpenAI keys, then you will see one model for each deployment that you have added.
+The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have other API keys, then you will see one model for each deployment that you have added.

For a description of all OpenAI models, see [OpenAI's models overview](https://platform.openai.com/docs/models/models-overview).
2. In the **Prompt** field, enter your prompt. Keep in mind the following:
-* You must include the text class. (In the demo below, this is the `review` class.) Click the text class name to insert it into the prompt.
+* You must include the text variables. These appear directly above the prompt field. (In the demo below, this is the `review` variable.) Click the text variable name to insert it into the prompt.
* Although not strictly required, you should provide definitions for each class to ensure prediction accuracy and to help [add context](#Add-context).
+
+!!! info Tip
+    You can generate an initial draft by simply adding the text variables and then [clicking **Enhance Prompt**](#Enhance-prompt).
+
3. Select your baseline:
* **All Project Tasks** - Generate predictions for all tasks in the project. Depending on the size of your project, this might take some time to process. This does not generate an accuracy score for the prompt.
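To illustrate these guidelines, a prompt for the `review` example above might look like the following sketch. The class names and definitions are illustrative, and the placeholder is written here as `{review}` on the assumption that clicking the variable name inserts a curly-brace placeholder; insert it by clicking rather than typing it.

```
Classify the following product review as Positive or Negative.

Positive: the reviewer is satisfied with the product and would recommend it.
Negative: the reviewer is dissatisfied or reports problems with the product.

Review:
{review}

Respond with only the class name.
```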
@@ -162,19 +166,44 @@ NER
</td>
<td>

-The cost to run the prompt evaluation based on the number of tokens required.
+The cost to run the prompt based on the number of tokens required.

</td>
</tr>
</table>

+## Enhance prompt
+
+You can use **Enhance Prompt** to help you construct and auto-refine your prompts.
+
+At minimum, you need to insert the text variable first. (Click the text variable name to insert it into the prompt. These appear above the prompt field.)
+
+From the **Enhance Prompt** window, you will need to select the **Teacher Model** that you want to use to write your prompt. As you auto-refine your prompt, you'll get the following:
+
+* A new prompt displayed next to the previous prompt.
+* An explanation of the changes made.
+* The estimated cost spent auto-refining your prompt.
+
+![Screenshot of enhance prompt modal](/images/prompts/enhance.png)
+
+**How it works**
+
+The **Task Subset** is used as the context when auto-refining the prompt. If you have ground truth data available, that will serve as the task subset. Otherwise, a sample of up to 10 project tasks is used.
+
+Auto-refinement applies your initial prompt and the Teacher Model to generate predictions on the task subset (which will be ground truth tasks or a sample dataset). If applicable, predictions are then compared to the ground truth for accuracy.
+
+Your Teacher Model evaluates the initial prompt’s predictions against the ground truth (or sample task output) and identifies areas for improvement. It then suggests a refined prompt aimed at achieving closer alignment with the desired outcomes.
+
## Drafting effective prompts

For a comprehensive guide to drafting prompts, see [The Prompt Report: A Systematic Survey of Prompting Techniques](https://arxiv.org/abs/2406.06608) or OpenAI's guide to [Prompt Engineering](https://platform.openai.com/docs/guides/prompt-engineering).

### Text placement

-When you place your text class in the prompt (`review` in the demo above), this placeholder will be replaced by the actual text.
+When you place your text variable in the prompt (`review` in the demo above), this placeholder will be replaced by the actual text.

Depending on the length and complexity of your text, inserting it into the middle of another sentence or thought could potentially confuse the LLM.
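For example, keeping the same illustrative `{review}` placeholder as above, placing the variable on its own line after the instructions is usually clearer than embedding it mid-sentence:

```
Less clear (text embedded mid-sentence):
Decide whether {review} is a Positive or Negative review and explain why.

Clearer (instructions first, text at the end):
Classify the following review as Positive or Negative.

Review:
{review}
```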