Commit 0dcc3ca

caitlinwheeless authored
docs: DOC-277: Rewrite Prompts API keys pages (#7025)
Co-authored-by: caitlinwheeless <[email protected]>
1 parent 689259b commit 0dcc3ca

6 files changed: +128 -91 lines changed

docs/source/guide/prompts_create.md

+1-82
@@ -14,90 +14,9 @@ date: 2024-06-11 16:53:16

## Prerequisites

-* An API key for your LLM.
+* An [API key](prompts_keys) for your LLM.
* A project that meets the [criteria noted below](#Create-a-Prompt).

-## Model provider API keys
-
-You can specify one OpenAI API key and/or multiple custom and Azure OpenAI keys per organization. Keys only need to be added once.
-
-Click **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:
-
-![Screenshot of the API keys modal](/images/prompts/model_keys.png)
-
-Once added, you will have the option to select from the base models associated with each API key as you configure your prompts:
-
-![Screenshot of the Base Models drop-down](/images/prompts/base_models.png)
-
-To remove the key, click **API Keys** in the upper right of the Prompts page. You'll have the option to remove the key and add a new one.
-
-### Add OpenAI, Azure OpenAI, or a custom model
-
-{% details <b>Use an OpenAI key</b> %}
-
-You can only have one OpenAI key per organization. For a list of the OpenAI models we support, see [Features, requirements, and constraints](prompts_overview#Features-requirements-and-constraints).
-
-If you don't already have one, you can [create an OpenAI account here](https://platform.openai.com/signup).
-
-You can find your OpenAI API key on the [API key page](https://platform.openai.com/api-keys).
-
-Once added, all supported OpenAI models will appear in the base model options when you configure your prompt.
-
-{% enddetails %}
-
-{% details <b>Use an Azure OpenAI key</b> %}
-
-Each Azure OpenAI key is tied to a specific deployment, and each deployment comprises a single OpenAI model. So if you want to use multiple models through Azure, you will need to create a deployment for each model and then add each key to Label Studio.
-
-For a list of the Azure OpenAI models we support, see [Features, requirements, and constraints](prompts_overview#Features-requirements-and-constraints).
-
-To use Azure OpenAI, you must first create the Azure OpenAI resource and then a model deployment:
-
-1. From the Azure portal, [create an Azure OpenAI resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource).
-
-!!! note
-If you are restricting network access to your resource, you will need to add the following IP addresses when configuring network security:
-
-* 3.219.3.197
-* 34.237.73.3
-* 44.216.17.242
-
-
-2. From Azure OpenAI Studio, [create a deployment](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). This is a base model endpoint.
-
-When adding the key to Label Studio, you are asked for the following information:
-
-| Field | Description|
-| --- | --- |
-| **Deployment** | The is the name of the deployment. By default, this is the same as the model name, but you can customize it when you create the deployment. If they are different, you must use the deployment name and not the underlying model name. |
-| **Endpoint** | This is the target URI provided by Azure. |
-| **API key** | This is the key provided by Azure. |
-
-You can find all this information in the **Details** section of the deployment in Azure OpenAI Studio.
-
-![Screenshot of the Azure deployment details](/images/prompts/azure_deployment.png)
-
-{% enddetails %}
-
-{% details <b>Use a custom LLM</b> %}
-
-You can use your own self-hosted and fine-tuned model as long as it meets the following criteria:
-
-* Your server must provide [JSON mode](https://python.useinstructor.com/concepts/patching/#json-mode) for the LLM.
-* The server API must follow [OpenAI format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
-
-Examples of compatible LLMs include [Ollama](https://ollama.com/) and [sglang](https://github.com/sgl-project/sglang?tab=readme-ov-file#openai-compatible-api).
-
-To add a custom model, enter the following:
-
-* A name for the model.
-* The endpoint URL for the model. For example, `https://my.openai.endpoint.com/v1`
-* An API key to access the model. (Optional)
-* An auth token to access the model. (Optional)
-
-{% enddetails %}
-
-
## Create a Prompt

From the Prompts page, click **Create Prompt** in the upper right and then complete the following fields:

docs/source/guide/prompts_draft.md

+1-2
@@ -18,9 +18,8 @@ With your [Prompt created](prompts_create), you can begin drafting your prompt c

1. Select your base model.

-The models that appear depend on the [API keys](prompts_create#Model-provider-API-keys) that you have configured for your organization. If you have added an OpenAI key, then you will see all supported OpenAI models. If you have other API keys, then you will see one model per each deployment that you have added.
+The models that appear depend on the [API keys](prompts_keys) that you have configured for your organization.

-For a description of all OpenAI models, see [OpenAI's models overview](https://platform.openai.com/docs/models/models-overview).
2. In the **Prompt** field, enter your prompt. Keep in mind the following:
* You must include the text variables. These appear directly above the prompt field. (In the demo below, this is the `review` variable.) Click the text variable name to insert it into the prompt.
* Although not strictly required, you should provide definitions for each class to ensure prediction accuracy and to help [add context](#Add-context).

docs/source/guide/prompts_examples.md

+3-3
@@ -95,7 +95,7 @@ This example demonstrates how to set up Prompts to predict image captions.
```
3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

-If you have not yet set up API the keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
+If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add the instruction you’d like to provide the LLM to caption your images. For example:

@@ -174,7 +174,7 @@ This example demonstrates how to set up Prompts to evaluate if the LLM-generated

3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

-If you have not yet set up API the keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
+If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add the instruction you’d like to provide the LLM to best evaluate the text. For example:

@@ -300,7 +300,7 @@ Let’s expand on the Q&A use case above with an example demonstrating how to us

3. Navigate to **Prompts** from the sidebar, and [create a prompt](prompts_create) for the project

-If you have not yet set up API the keys you want to use, do that now: [API keys](prompts_create#Model-provider-API-keys).
+If you have not yet set up the API keys you want to use, do that now: [API keys](prompts_keys).

4. Add instructions to create 3 questions:

docs/source/guide/prompts_keys.md

+106
@@ -0,0 +1,106 @@
+---
+title: Model provider API keys
+short: API keys
+tier: enterprise
+type: guide
+order: 0
+order_enterprise: 229
+meta_title: Model provider API keys
+meta_description: Add API keys to use with Prompts
+section: Prompts
+date: 2024-06-11 16:53:16
+---
+
+There are two approaches to adding a model provider API key.
+
+* In one scenario, you get one provider connection per organization, and this provides access to a set of whitelisted models. Examples include:
+
+  * OpenAI
+  * Vertex AI
+  * Gemini
+
+* In the second scenario, you add a separate API key per model. Examples include:
+
+  * Azure OpenAI
+  * Custom
+
+Once a model is added via the API key, anyone in the organization who has access to the Prompts feature can select the associated models when executing their prompt.
+
+You can see what API keys you have and add new ones by clicking **API Keys** in the top right of the Prompts page to open the **Model Provider API Keys** window:
+
+![Screenshot of the API keys button](/images/prompts/model_keys.png)
+
+## OpenAI API key
+
+You can only have one OpenAI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).
+
+If you don't already have one, you can [create an OpenAI account here](https://platform.openai.com/signup).
+
+You can find your OpenAI API key on the [API key page](https://platform.openai.com/api-keys).
+
+Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).
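As a quick sanity check before adding the key, a minimal sketch (assuming the official `openai` Python package; the environment variable name is illustrative) that lists the models the key can access:

```python
# Minimal sketch: verify an OpenAI API key and list the models it can access
# before adding it to Label Studio. Assumes `pip install openai`.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Supported base models such as gpt-4o should appear in this list
# if the key is valid.
for model in client.models.list():
    print(model.id)
```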
+
+## Gemini API key
+
+You can only have one Gemini key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).
+
+For information on getting a Gemini API key, see [Get a Gemini API key](https://ai.google.dev/gemini-api/docs/api-key).
+
+Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).
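Similarly, a minimal sketch to confirm a Gemini key works, using the public Generative Language REST endpoint (assumes the `requests` package; the environment variable name is illustrative):

```python
# Minimal sketch: verify a Gemini API key by listing the models it can access.
# Assumes `pip install requests`.
import os

import requests

response = requests.get(
    "https://generativelanguage.googleapis.com/v1beta/models",
    params={"key": os.environ["GEMINI_API_KEY"]},
    timeout=30,
)
response.raise_for_status()

# Models such as gemini-1.5-pro should appear here if the key is valid.
for model in response.json().get("models", []):
    print(model["name"])
```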
+
+## Vertex AI JSON credentials
+
+You can only have one Vertex AI key per organization. This grants you access to a set of whitelisted models. For a list of these models, see [Supported base models](prompts_overview#Supported-base-models).
+
+Follow the instructions here to generate a credentials file in JSON format: [Authenticate to Vertex AI Agent Builder - Client libraries or third-party tools](https://cloud.google.com/generative-ai-app-builder/docs/authentication#client-libs)
+
+The JSON credentials are required. You can also optionally provide the project ID and location associated with your Google Cloud Platform environment.
+
+Once added, all supported models will appear in the base model drop-down when you [draft your prompt](prompts_draft).
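If helpful, a minimal sketch to sanity-check the JSON credentials file locally before uploading it (assumes the `google-auth` package; the file name and scope are illustrative):

```python
# Minimal sketch: sanity-check a service account JSON credentials file
# before uploading it. Assumes `pip install google-auth`; the file name
# is illustrative.
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "vertex-ai-credentials.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# These values should match the Google Cloud project you intend to use.
print(credentials.project_id)
print(credentials.service_account_email)
```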
+
+## Azure OpenAI key
+
+Each Azure OpenAI key is tied to a specific deployment, and each deployment comprises a single OpenAI model. So if you want to use multiple models through Azure, you will need to create a deployment for each model and then add each key to Label Studio.
+
+For a list of the Azure OpenAI models we support, see [Supported base models](prompts_overview#Supported-base-models).
+
+To use Azure OpenAI, you must first create the Azure OpenAI resource and then a model deployment:
+
+1. From the Azure portal, [create an Azure OpenAI resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource).
+
+   !!! note
+       If you are restricting network access to your resource, you will need to add the following IP addresses when configuring network security:
+
+       * 3.219.3.197
+       * 34.237.73.3
+       * 44.216.17.242
+
+2. From Azure OpenAI Studio, [create a deployment](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). This is a base model endpoint.
+
+   When adding the key to Label Studio, you are asked for the following information:
+
+   | Field | Description |
+   | --- | --- |
+   | **Deployment** | This is the name of the deployment. By default, this is the same as the model name, but you can customize it when you create the deployment. If they are different, you must use the deployment name and not the underlying model name. |
+   | **Endpoint** | This is the target URI provided by Azure. |
+   | **API key** | This is the key provided by Azure. |
+
+   You can find all this information in the **Details** section of the deployment in Azure OpenAI Studio.
+
+   ![Screenshot of the Azure deployment details](/images/prompts/azure_deployment.png)
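To confirm that the deployment name, endpoint, and key line up before entering them, a minimal sketch assuming the `openai` package's Azure client (the endpoint, deployment name, and API version below are placeholders):

```python
# Minimal sketch: confirm an Azure OpenAI deployment responds before adding
# its details to Label Studio. Assumes `pip install openai`; the endpoint,
# deployment name, and API version below are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # target URI from Azure
    api_key=os.environ["AZURE_OPENAI_API_KEY"],             # key from Azure
    api_version="2024-06-01",                               # example API version
)

# `model` must be the deployment name, not the underlying model name.
response = client.chat.completions.create(
    model="my-gpt-4o-deployment",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```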
+
+## Custom LLM
+
+You can use your own self-hosted and fine-tuned model as long as it meets the following criteria:
+
+* Your server must provide [JSON mode](https://python.useinstructor.com/concepts/patching/#json-mode) for the LLM.
+* The server API must follow [OpenAI format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format).
+
+Examples of compatible LLMs include [Ollama](https://ollama.com/) and [sglang](https://github.com/sgl-project/sglang?tab=readme-ov-file#openai-compatible-api).
+
+To add a custom model, enter the following:
+
+* A name for the model.
+* The endpoint URL for the model. For example, `https://my.openai.endpoint.com/v1`
+* An API key to access the model. An API key is tied to a specific account, but access is shared within the organization once it has been added. (Optional)
+* An auth token to access the model API. An auth token provides API access at the server level. (Optional)
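To illustrate the two compatibility requirements, a minimal sketch that points the `openai` Python client at an OpenAI-compatible server and requests JSON-mode output (the base URL, key, and model name are placeholders; a local Ollama server is shown as the example):

```python
# Minimal sketch: check that a self-hosted model exposes an OpenAI-compatible
# chat API and supports JSON mode. Assumes `pip install openai`; the base URL,
# API key, and model name are placeholders (local Ollama shown as an example).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # OpenAI-compatible endpoint
    api_key="placeholder-key",             # optional for many self-hosted servers
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": 'Reply with a JSON object such as {"ok": true}.'}],
    response_format={"type": "json_object"},  # the server must support JSON mode
)
print(response.choices[0].message.content)
```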

docs/source/guide/prompts_overview.md

+17-4
@@ -33,7 +33,6 @@ With Prompts, you can:
| **Supported object tags** | `Text` <br>`HyperText` <br>`Image` |
| **Supported control tags** | `Choices` (Text and Image)<br>`Labels` (Text)<br>`TextArea` (Text and Image)<br>`Pairwise` (Text and Image)<br>`Number` (Text and Image)<br>`Rating` (Text and Image) |
| **Class selection** | Multi-selection (the LLM can apply multiple labels per task)|
-| **Supported base models** | OpenAI gpt-3.5-turbo-16k* <br>OpenAI gpt-3.5-turbo* <br>OpenAI gpt-4 <br>OpenAI gpt-4-turbo <br>OpenAI gpt-4o <br>OpenAI gpt-4o-mini<br>[Azure OpenAI chat-based models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)<br>[Custom LLM](prompts_create#Add-OpenAI-Azure-OpenAI-or-a-custom-model)<br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
| **Text compatibility** | Task text must be utf-8 compatible |
| **Task size** | Total size of each task can be no more than 1MB (approximately 200-500 pages of text) |
| **Network access** | If you are using a firewall or restricting network access to your OpenAI models, you will need to allow the following IPs: <br>3.219.3.197 <br>34.237.73.3 <br>44.216.17.242 |
@@ -43,6 +42,20 @@ With Prompts, you can:

</div>

+## Supported base models
+
+<div class="noheader rowheader">
+
+| Provider | Supported models |
+| --- | --- |
+| **OpenAI** | gpt-3.5-turbo-16k* <br>gpt-3.5-turbo* <br>gpt-4 <br>gpt-4-turbo <br>gpt-4o <br>gpt-4o-mini <br>o3-mini<br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
+| **Gemini** | gemini-2.0-flash-exp <br>gemini-1.5-flash <br>gemini-1.5-flash-8b <br>gemini-1.5-pro |
+| **Vertex AI** | gemini-2.0-flash-exp <br>gemini-1.5-flash <br>gemini-1.5-pro |
+| **Azure OpenAI** | [Azure OpenAI chat-based models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) <br><br>**Note:** We recommend against using GPT 3.5 models, as these can sometimes be prone to rate limit errors and are not compatible with Image data. |
+| **Custom** | [Custom LLM](prompts_keys) |
+
+</div>
+
## Use cases

### Auto-labeling with Prompts
@@ -67,7 +80,7 @@ By utilizing AI to handle the bulk of the annotation work, you can significantly
3. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
-* [Model provider keys](prompts_create#Model-provider-API-keys)
+* [Model provider keys](prompts_keys)
4. Write a prompt and evaluate it against your ground truth dataset.

* [Draft a prompt](prompts_draft)
@@ -100,7 +113,7 @@ Additionally, this workflow provides a scalable solution for continuously expand
2. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
-* [Model provider keys](prompts_create#Model-provider-API-keys)
+* [Model provider keys](prompts_keys)
3. Write a prompt and run it against your task samples.
* [Draft a prompt](prompts_draft)

@@ -136,7 +149,7 @@ This feedback loop allows you to iteratively fine-tune your prompts, optimizing
3. Go to the Prompts page and create a new Prompt. If you haven't already, you will also need to add an API key to connect to your model.

* [Create a Prompt](prompts_create)
-* [Model provider keys](prompts_create#Model-provider-API-keys)
+* [Model provider keys](prompts_keys)
4. Write a prompt and evaluate it against your ground truth dataset.

* [Draft a prompt](prompts_draft)