Merged
@@ -102,7 +102,13 @@ await prompt.push({ text: "You are a helpful assistant called {name}." });

<Tab title="cURL" language="curl">

<EndpointRequestSnippet endpoint="POST /v1/prompts" />
For **message** prompts:

<EndpointRequestSnippet endpoint="POST /v1/prompts" example="List-Prompt" />

For **text** prompts:

<EndpointRequestSnippet endpoint="POST /v1/prompts" example="Text-Prompt" />

</Tab>
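For readers who want to see what these two snippets expand to, the request bodies can be sketched as plain objects. The field names and values below mirror the Text-Prompt and List-Prompt examples defined in fern/openapi.yaml elsewhere in this PR; the literals are illustrative, not generated from the spec.

```typescript
// Sketch of the two POST /v1/prompts request bodies, mirroring the
// Text-Prompt and List-Prompt examples in fern/openapi.yaml.
// These literals are illustrative placeholders.

// A text prompt: a single template string.
const textPromptBody = {
  text: "Hello, {{name}}!",
  interpolationType: "FSTRING",
  outputType: "TEXT",
};

// A message prompt: an ordered list of role/content pairs.
const listPromptBody = {
  messages: [
    { role: "user", content: "What is the weather like in {{city}}?" },
  ],
  interpolationType: "FSTRING",
  outputType: "TEXT",
};

console.log(JSON.stringify(textPromptBody));
console.log(JSON.stringify(listPromptBody));
```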

@@ -176,7 +182,13 @@ await prompt.update({

<Tab title="cURL" language="curl">

<EndpointRequestSnippet endpoint="PUT /v1/prompts/{alias}/versions/{version}" />
For **message** prompts:

<EndpointRequestSnippet endpoint="PUT /v1/prompts/{alias}/versions/{version}" example="List-Prompt" />

For **text** prompts:

<EndpointRequestSnippet endpoint="PUT /v1/prompts/{alias}/versions/{version}" example="Text-Prompt" />

</Tab>

@@ -262,13 +274,56 @@ prompt.update(

<Tab title="TypeScript" language="typescript">

TODO
```typescript maxLines={0}
import { OutputType, Prompt, PromptMessage } from "deepeval-ts";

const responseSchema = {
name: "ResponseSchema",
fields: {
answer: "string",
confidence: "float",
}
}

const prompt = new Prompt({ alias: "YOUR-PROMPT-ALIAS" });

await prompt.push({
version: "00.00.01",
messagesTemplate: [
new PromptMessage({ role: "system", content: "You are a helpful assistant." }),
],
modelSettings: {
provider: "OPEN_AI",
name: "gpt-4o",
temperature: 0.7,
maxTokens: 1000,
topP: 0.9,
frequencyPenalty: 0.1,
presencePenalty: 0.1,
stopSequence: ["END"],
reasoningEffort: "MINIMAL",
verbosity: "LOW",
},
outputType: OutputType.SCHEMA,
outputSchema: responseSchema,
});

await prompt.update({
version: "latest",
modelSettings: {
provider: "OPEN_AI",
name: "gpt-4o",
temperature: 0.8,
maxTokens: 2000,
}
});
```

</Tab>

<Tab title="cURL" language="curl">

TODO
<EndpointRequestSnippet endpoint="PUT /v1/prompts/{alias}/versions/{version}" example="Model-Settings" />

</Tab>
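The Model-Settings example body that this snippet refers to can be sketched directly from the corresponding example in fern/openapi.yaml in this PR; the values are copied from that example, and the JSON shape is otherwise an assumption.

```typescript
// Sketch of the "Model-Settings" example body for
// PUT /v1/prompts/{alias}/versions/{version}, with values copied from
// the Model-Settings example in fern/openapi.yaml.
const modelSettingsBody = {
  text: "Updated hello, {{name}}!",
  interpolationType: "FSTRING",
  outputType: "TEXT",
  modelSettings: {
    provider: "OPEN_AI",
    name: "gpt-4o",
    temperature: 0.7,
    maxTokens: 1000,
    topP: 0.9,
    frequencyPenalty: 0.1,
    presencePenalty: 0.1,
  },
};

console.log(JSON.stringify(modelSettingsBody, null, 2));
```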

4 changes: 2 additions & 2 deletions fern/docs/pages/llm-evaluation/prompts/version-prompts.mdx
@@ -132,7 +132,7 @@ from deepeval.prompt.api import PromptMessage
prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.push(messages=[PromptMessage(role="...", content="...")])

````
```
</Tab>

<Tab title="Text" language="python">
@@ -141,7 +141,7 @@ from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.push(text="...")
````
```

</Tab>

26 changes: 18 additions & 8 deletions fern/openapi.yaml
@@ -903,20 +903,24 @@ paths:
text: "Updated hello, {{name}}!"
interpolationType: "FSTRING"
outputType: "TEXT"
modelSettings:
provider: "OPEN_AI"
name: "gpt-4o"
temperature: 0.7
List-Prompt:
messages:
- role: user
content: "What is the updated weather like in {{city}}?"
interpolationType: "FSTRING"
outputType: "TEXT"
Model-Settings:
text: "Updated hello, {{name}}!"
interpolationType: "FSTRING"
outputType: "TEXT"
modelSettings:
provider: "OPEN_AI"
name: "gpt-4o"
temperature: 0.7
maxTokens: 1000
topP: 0.9
frequencyPenalty: 0.1
presencePenalty: 0.1
responses:
"200":
description: ""
@@ -1006,20 +1010,24 @@
text: "Updated hello, {{name}}!"
interpolationType: "FSTRING"
outputType: "TEXT"
modelSettings:
provider: "OPEN_AI"
name: "gpt-4o"
temperature: 0.7
List-Prompt:
messages:
- role: user
content: "What is the updated weather like in {{city}}?"
interpolationType: "FSTRING"
outputType: "TEXT"
Model-Settings:
text: "Updated hello, {{name}}!"
interpolationType: "FSTRING"
outputType: "TEXT"
modelSettings:
provider: "OPEN_AI"
name: "gpt-4o"
temperature: 0.7
maxTokens: 1000
topP: 0.9
frequencyPenalty: 0.1
presencePenalty: 0.1
responses:
"200":
description: ""
@@ -3983,6 +3991,7 @@ components:
description: This is the array of fields that define the output schema structure.
required:
- fields
- name

OutputSchemaField:
type: object
@@ -4007,6 +4016,7 @@
required:
- id
- type
- name

UpdatePromptRequest:
type: object