
Commit d43efec

Improve Prompts documentation based on feedback (#1302)
## Motivation and Context

These changes were made based on comments in this PR #1287

## Breaking Changes

---

#### Type of the changes

- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [x] Documentation update
- [ ] Tests improvement
- [ ] Refactoring

#### Checklist

- [ ] The pull request has a description of the proposed change
- [ ] I read the [Contributing Guidelines](https://github.com/JetBrains/koog/blob/main/CONTRIBUTING.md) before opening the pull request
- [ ] The pull request uses **`develop`** as the base branch
- [ ] Tests for the changes have been added
- [ ] All new and existing tests passed

##### Additional steps for pull requests adding a new feature

- [ ] An issue describing the proposed change exists
- [ ] The pull request includes a link to the issue
- [ ] The change was discussed and approved in the issue
- [ ] Docs have been added / updated
1 parent 74a839c commit d43efec


12 files changed: +382 -292 lines changed


docs/docs/basic-agents.md

Lines changed: 2 additions & 2 deletions
@@ -32,15 +32,15 @@ To use the `AIAgent` functionality, include all necessary dependencies in your b

```
dependencies {
-    implementation("ai.koog:koog-agents:$koog_version")
+    implementation("ai.koog:koog-agents:VERSION")
}
```

For all available installation methods, see [Install Koog](getting-started.md#install-koog).

### 2. Create an agent

- To create an agent, create an instance of the `AIAgent` class and provide the `executor` and `llmModel` parameters:
+ To create an agent, create an instance of the `AIAgent` class and provide the `promptExecutor` and `llmModel` parameters:

<!--- INCLUDE
import ai.koog.agents.core.agent.AIAgent
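
A minimal sketch of the renamed `promptExecutor` parameter in context, following the agent example shown later in this commit; the executor helper, import paths, and model choice are illustrative:

```kotlin
import ai.koog.agents.core.agent.AIAgent
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import ai.koog.prompt.executor.llms.all.simpleOpenAIExecutor
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // API key read from the environment, as in the other docs examples
    val apiKey = System.getenv("OPENAI_API_KEY")

    // The parameter is now promptExecutor (previously documented as executor)
    val agent = AIAgent(
        promptExecutor = simpleOpenAIExecutor(apiKey),
        llmModel = OpenAIModels.Chat.GPT4o
    )

    // Run a single request
    val result = agent.run("What is Koog?")
    println(result)
}
```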

docs/docs/llm-parameters.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ val prompt = prompt(
```
<!--- KNIT example-llm-parameters-01.kt -->

- For more information about prompt creation, see [Prompts](prompts/structured-prompts.md).
+ For more information about prompt creation, see [Prompts](prompts/prompt-creation/index.md).

- When creating a subgraph:

docs/docs/llm-providers.md

Lines changed: 1 addition & 1 deletion
@@ -53,7 +53,7 @@ Koog lets you work with LLM providers on two levels:
It can switch between providers
and optionally fall back to a configured provider and LLM using the corresponding client.
You can either create your own executor or use a pre-defined prompt executor for a specific provider.
- For details, see [Prompt executors](prompts/llm-clients.md).
+ For details, see [Prompt executors](prompts/prompt-executors.md).

Using a prompt executor offers a higher‑level layer over one or more LLMClients.
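
A rough sketch of the executor level described here, assuming the executor exposes an `execute(prompt, model)` entry point mirroring the client-level call shown in handling-failures.md; the helper and import paths follow the other examples in this commit:

```kotlin
import ai.koog.prompt.dsl.prompt
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import ai.koog.prompt.executor.llms.all.simpleOpenAIExecutor
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Pre-defined prompt executor for a single provider (OpenAI in this sketch)
    val executor = simpleOpenAIExecutor(System.getenv("OPENAI_API_KEY"))

    val prompt = prompt("provider-demo") {
        system("You are a helpful assistant.")
        user("What is Koog?")
    }

    // Assumption: the executor-level call mirrors LLMClient.execute(prompt, model)
    val response = executor.execute(prompt, OpenAIModels.Chat.GPT4o)
    println(response)
}
```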

docs/docs/prompts/handling-failures.md

Lines changed: 48 additions & 21 deletions
@@ -41,22 +41,15 @@ val response = resilientClient.execute(prompt, OpenAIModels.Chat.GPT4o)

### Configuring retry behavior

- Koog provides several predefined retry configurations:
-
- | Configuration | Max attempts | Initial delay | Max delay | Use case |
- |----------------------------|--------------|---------------|-----------|-------------------------|
- | `RetryConfig.DISABLED` | 1 (no retry) | - | - | Development and testing |
- | `RetryConfig.CONSERVATIVE` | 3 | 2s | 30s | Normal production use |
- | `RetryConfig.AGGRESSIVE` | 5 | 500ms | 20s | Critical operations |
- | `RetryConfig.PRODUCTION` | 3 | 1s | 20s | Recommended default |
-
- You can use them directly or create custom configurations:
+ By default, `RetryingLLMClient` configures an LLM client with a maximum of 3 retry attempts, a 1-second initial delay,
+ and a 30-second maximum delay.
+ You can specify a different retry configuration using a `RetryConfig` passed to `RetryingLLMClient`.
+ For example:

<!--- INCLUDE
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.clients.retry.RetryConfig
import ai.koog.prompt.executor.clients.retry.RetryingLLMClient
- import kotlin.time.Duration.Companion.seconds

val apiKey = System.getenv("OPENAI_API_KEY")
val client = OpenAILLMClient(apiKey)
@@ -67,7 +60,30 @@ val conservativeClient = RetryingLLMClient(
    delegate = client,
    config = RetryConfig.CONSERVATIVE
)
+ ```
+ <!--- KNIT example-handling-failures-02.kt -->
+
+ Koog provides several predefined retry configurations:
+
+ | Configuration | Max attempts | Initial delay | Max delay | Use case |
+ |----------------------------|--------------|---------------|-----------|----------------------------------------------------------------------------------------------------------|
+ | `RetryConfig.DISABLED` | 1 (no retry) | - | - | Development, testing, and debugging. |
+ | `RetryConfig.CONSERVATIVE` | 3 | 2s | 30s | Background or scheduled tasks where reliability is more important than speed. |
+ | `RetryConfig.AGGRESSIVE` | 5 | 500ms | 20s | Critical operations where fast recovery from transient errors is more important than reducing API calls. |
+ | `RetryConfig.PRODUCTION` | 3 | 1s | 20s | General production use. |

+ You can use them directly or create custom configurations:
+
+ <!--- INCLUDE
+ import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
+ import ai.koog.prompt.executor.clients.retry.RetryConfig
+ import ai.koog.prompt.executor.clients.retry.RetryingLLMClient
+ import kotlin.time.Duration.Companion.seconds
+
+ val apiKey = System.getenv("OPENAI_API_KEY")
+ val client = OpenAILLMClient(apiKey)
+ -->
+ ```kotlin
// Or create a custom configuration
val customClient = RetryingLLMClient(
    delegate = client,
@@ -80,7 +96,7 @@ val customClient = RetryingLLMClient(
    )
)
```
- <!--- KNIT example-handling-failures-02.kt -->
+ <!--- KNIT example-handling-failures-03.kt -->

### Retry error patterns

@@ -102,7 +118,7 @@ You can use the following pattern types and combine any number of them:
* `RetryablePattern.Regex`: Matches a regular expression in the error message.
* `RetryablePattern.Custom`: Matches a custom logic using a lambda function.

- If any pattern returns `true`, the error is considered retryable, and the LLM client can retry the request.
+ If any pattern returns `true`, the error is considered retryable, and the LLM client retries the request.

#### Default patterns

@@ -150,7 +166,7 @@ val config = RetryConfig(
    )
)
```
- <!--- KNIT example-handling-failures-03.kt -->
+ <!--- KNIT example-handling-failures-04.kt -->

You can also append custom patterns to the default `RetryConfig.DEFAULT_PATTERNS`:

@@ -165,7 +181,7 @@ val config = RetryConfig(
    )
)
```
- <!--- KNIT example-handling-failures-04.kt -->
+ <!--- KNIT example-handling-failures-05.kt -->


### Streaming with retry
@@ -199,11 +215,12 @@ val config = RetryConfig(
val client = RetryingLLMClient(baseClient, config)
val stream = client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)
```
- <!--- KNIT example-handling-failures-05.kt -->
+ <!--- KNIT example-handling-failures-06.kt -->

!!!note
    Streaming retries only apply to connection failures that occur before the first token is received.
-     After streaming has started, any errors will be passed through.
+     Once streaming has started, the retry logic is disabled.
+     If an error occurs during streaming, the operation is terminated.
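
A rough sketch of consuming the stream from `executeStreaming`, assuming it returns a kotlinx.coroutines `Flow` of text chunks; connection failures before the first token are retried by the wrapper, while errors after streaming has started surface to the collector as described in the note:

```kotlin
import ai.koog.prompt.dsl.prompt
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.clients.openai.OpenAIModels
import ai.koog.prompt.executor.clients.retry.RetryingLLMClient
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val baseClient = OpenAILLMClient(System.getenv("OPENAI_API_KEY"))
    val client = RetryingLLMClient(baseClient)

    val prompt = prompt("streaming-demo") {
        user("Stream a short greeting.")
    }

    // Errors after the first token terminate the flow and land in catch.
    client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)
        .catch { error -> println("Streaming failed mid-response: ${error.message}") }
        .collect { chunk -> print(chunk) }
}
```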

### Retry with prompt executors

@@ -250,13 +267,23 @@ val multiExecutor = MultiLLMPromptExecutor(
    ),
)
```
- <!--- KNIT example-handling-failures-06.kt -->
+ <!--- KNIT example-handling-failures-07.kt -->

## Timeout configuration

All LLM clients support timeout configuration to prevent hanging requests.
You can specify timeout values for network connections when creating the client using
- the [`ConnectionTimeoutConfig`](https://api.koog.ai/prompt/prompt-executor/prompt-executor-clients/ai.koog.prompt.executor.clients/-connection-timeout-config/index.html) class:
+ the [`ConnectionTimeoutConfig`](https://api.koog.ai/prompt/prompt-executor/prompt-executor-clients/ai.koog.prompt.executor.clients/-connection-timeout-config/index.html) class.
+
+ `ConnectionTimeoutConfig` has the following properties:
+
+ | Property | Default Value | Description |
+ |------------------------|----------------------|---------------------------------------------------------------|
+ | `connectTimeoutMillis` | 60 seconds (60,000) | Maximum time to establish a connection to the server. |
+ | `requestTimeoutMillis` | 15 minutes (900,000) | Maximum time for the entire request to complete. |
+ | `socketTimeoutMillis` | 15 minutes (900,000) | Maximum time to wait for data over an established connection. |
+
+ You can customize these values for your specific needs. For example:

<!--- INCLUDE
import ai.koog.prompt.executor.clients.ConnectionTimeoutConfig
@@ -277,7 +304,7 @@ val client = OpenAILLMClient(
    )
)
```
- <!--- KNIT example-handling-failures-07.kt -->
+ <!--- KNIT example-handling-failures-08.kt -->

!!! tip
    For long-running or streaming calls, set higher values for `requestTimeoutMillis` and `socketTimeoutMillis`.
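
A small sketch of the tip, using only the properties listed in the table above and assuming they are constructor parameters of `ConnectionTimeoutConfig`; the resulting config is passed to the client as in the preceding example:

```kotlin
import ai.koog.prompt.executor.clients.ConnectionTimeoutConfig

// Roomier limits for long-running or streaming calls:
// keep the default 60 s connect timeout, allow 30 minutes per request and per socket read.
val longCallTimeouts = ConnectionTimeoutConfig(
    connectTimeoutMillis = 60_000,
    requestTimeoutMillis = 1_800_000,
    socketTimeoutMillis = 1_800_000
)
```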
@@ -342,4 +369,4 @@ fun main() {
    }
}
```
- <!--- KNIT example-handling-failures-08.kt -->
+ <!--- KNIT example-handling-failures-09.kt -->

docs/docs/prompts/index.md

Lines changed: 97 additions & 58 deletions
@@ -6,42 +6,38 @@ This section describes how to create and run prompts with Koog.

## Creating prompts

- In Koog, all prompts are represented as [**Prompt**](https://api.koog.ai/prompt/prompt-model/ai.koog.prompt.dsl/-prompt/index.html)
- objects. A Prompt object contains:
+ In Koog, prompts are instances of the [**Prompt**](https://api.koog.ai/prompt/prompt-model/ai.koog.prompt.dsl/-prompt/index.html)
+ data class with the following properties:

- - **ID**: A unique identifier for the prompt.
- - **Messages**: A list of messages that represent the conversation with the LLM.
- - **Parameters**: Optional [LLM configuration parameters](https://api.koog.ai/prompt/prompt-model/ai.koog.prompt.params/-l-l-m-params/index.html)
-   (such as temperature, tool choice, and others).
+ - `id`: A unique identifier for the prompt.
+ - `messages`: A list of messages that represent the conversation with the LLM.
+ - `params`: Optional [LLM configuration parameters](prompt-creation/index.md#prompt-parameters) (such as temperature, tool choice, and others).

- All Prompt objects are structured prompts defined using the Kotlin DSL, which lets you specify the structure of the conversation.
+ Although you can instantiate the `Prompt` class directly,
+ the recommended way to create prompts is by using the [Kotlin DSL](prompt-creation/index.md),
+ which provides a structured way to define the conversation.
+
+ <!--- INCLUDE
+ import ai.koog.prompt.dsl.prompt
+ -->
+ ```kotlin
+ val myPrompt = prompt("hello-koog") {
+     system("You are a helpful assistant.")
+     user("What is Koog?")
+ }
+ ```
+ <!--- KNIT example-prompts-01.kt -->

!!! note
-     AI agents let you provide a simple text prompt instead of creating a Prompt object.
+     AI agents can take a simple text prompt as input.
    They automatically convert the text prompt to the Prompt object and send it to the LLM for execution.
-     This is useful for a [basic agent](basic-agents.md) that only needs to run a single request.
-
- <div class="grid cards" markdown>
-
- - :material-code-braces:{ .lg .middle } [**Structured prompts**](structured-prompts.md)
-
-     ---
-
-     Create type-safe structured prompts for complex multi-turn conversations.
-
- - :material-multimedia:{ .lg .middle } [**Multimodal inputs**](multimodal-inputs.md)
-
-     ---
-
-     Send images, audio, video, and documents along with text in your structured prompts.
-
- </div>
+     This is useful for a [basic agent](../basic-agents.md)
+     that only needs to run a single request and does not require complex conversation logic.

## Running prompts

Koog provides two levels of abstraction for running prompts against LLMs: LLM clients and prompt executors.
- They only accept Prompt objects and can be used for direct prompt execution, without an AI agent.
+ Both accept Prompt objects and can be used for direct prompt execution, without an AI agent.
The execution flow is the same for both clients and executors:

```mermaid
@@ -76,28 +72,45 @@ flowchart TB

</div>

- If you want to run a simple text prompt, wrap it in a Prompt object using the Kotlin DSL,
- or use an AI agent, which automatically does this for you.
- Here is the execution flow for the agent:
+ ## Optimizing performance and handling failures

- ```mermaid
- flowchart TB
-     A([Your application])
-     B{{Configured AI agent}}
-     C["Text prompt"]
-     D["Prompt object"]
-     E{{Prompt executor}}
-     F[LLM provider]
+ Koog allows you to optimize performance and handle failures when running prompts.

-     A -->|"run() with text"| B
-     B -->|"takes"| C
-     C -->|"converted to"| D
-     D -->|"sent via"| E
-     E -->|"calls"| F
-     F -->|"responds to"| E
-     E -->|"result to"| B
-     B -->|"result to"| A
- ```
+ <div class="grid cards" markdown>
+
+ - :material-cached:{ .lg .middle } [**LLM response caching**](llm-response-caching.md)
+
+     ---
+
+     Cache LLM responses to optimize performance and reduce costs for repeated requests.
+
+ - :material-shield-check:{ .lg .middle } [**Handling failures**](handling-failures.md)
+
+     ---
+
+     Use built-in retries, timeouts, and other error handling mechanisms in your application.
+
+ </div>
+
+ ## Prompts in AI agents
+
+ In Koog, AI agents maintain and manage prompts during their lifecycle.
+ While LLM clients or executors are used to run prompts, agents handle the flow of prompt updates, ensuring the
+ conversation history remains relevant and consistent.
+
+ The prompt lifecycle in an agent usually includes several stages:
+
+ 1. Initial prompt setup.
+ 2. Automatic prompt updates.
+ 3. Context window management.
+ 4. Manual prompt management.
+
+ ### Initial prompt setup
+
+ When you [initialize an agent](../getting-started/#create-and-run-an-agent), you define
+ a [system message](prompt-creation/index.md#system-message) that sets the agent's behavior.
+ Then, when you call the agent's `run()` method, you typically provide an initial [user message](prompt-creation/index.md#user-messages)
+ as input. Together, these messages form the agent's initial prompt. For example:

<!--- INCLUDE
import ai.koog.agents.core.agent.AIAgent
@@ -116,30 +129,56 @@ fun main() = runBlocking {
    // Create an agent
    val agent = AIAgent(
        promptExecutor = simpleOpenAIExecutor(apiKey),
+         systemPrompt = "You are a helpful assistant.",
        llmModel = OpenAIModels.Chat.GPT4o
    )

    // Run the agent
    val result = agent.run("What is Koog?")
```
- <!--- KNIT example-prompts-01.kt -->
+ <!--- KNIT example-prompts-02.kt -->

- ## Optimizing performance and handling failures
+ In the example, the agent automatically converts the text prompt to the Prompt object and sends it to the prompt executor:

- Koog allows you to optimize performance and handle failures when running prompts.
+ ```mermaid
+ flowchart TB
+     A([Your application])
+     B{{Configured AI agent}}
+     C["Text prompt"]
+     D["Prompt object"]
+     E{{Prompt executor}}
+     F[LLM provider]
- <div class="grid cards" markdown>
+     A -->|"run() with text"| B
+     B -->|"takes"| C
+     C -->|"converted to"| D
+     D -->|"sent via"| E
+     E -->|"calls"| F
+     F -->|"responds to"| E
+     E -->|"result to"| B
+     B -->|"result to"| A
+ ```

- - :material-cached:{ .lg .middle } [**LLM response caching**](llm-response-caching.md)
+ For more [advanced configurations](../complex-workflow-agents.md#4-configure-the-agent), you can also use
+ [AIAgentConfig](https://api.koog.ai/agents/agents-core/ai.koog.agents.core.agent.config/-a-i-agent-config/index.html)
+ to define the agent's initial prompt.

-     ---
+ ### Automatic prompt updates

-     Cache LLM responses to optimize performance and reduce costs for repeated requests.
+ As the agent runs its strategy, [predefined nodes](../nodes-and-components.md) automatically update the prompt.
+ For example:

- - :material-shield-check:{ .lg .middle } [**Handling failures**](handling-failures.md)
+ - [`nodeLLMRequest`](../nodes-and-components/#nodellmrequest): Appends a user message to the prompt and captures the LLM response.
+ - [`nodeLLMSendToolResult`](../nodes-and-components/#nodellmsendtoolresult): Appends tool execution results to the conversation.
+ - [`nodeAppendPrompt`](../nodes-and-components/#nodeappendprompt): Inserts specific messages into the prompt at any point in the workflow.

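The three nodes above are only named in this hunk; as a rough sketch modeled on the public Koog strategy examples (package paths, edge helpers, and the exact builder signature are assumptions and may differ between versions), they are typically wired together like this:

```kotlin
import ai.koog.agents.core.dsl.builder.forwardTo
import ai.koog.agents.core.dsl.builder.strategy
import ai.koog.agents.core.dsl.extension.nodeExecuteTool
import ai.koog.agents.core.dsl.extension.nodeLLMRequest
import ai.koog.agents.core.dsl.extension.nodeLLMSendToolResult
import ai.koog.agents.core.dsl.extension.onAssistantMessage
import ai.koog.agents.core.dsl.extension.onToolCall

// Sketch only: nodeLLMRequest appends the user input and calls the LLM,
// nodeLLMSendToolResult appends the tool output and asks the LLM to continue.
val toolCallingStrategy = strategy("tool-calling-sketch") {
    val callLLM by nodeLLMRequest()
    val runTool by nodeExecuteTool()
    val sendToolResult by nodeLLMSendToolResult()

    edge(nodeStart forwardTo callLLM)
    edge(callLLM forwardTo nodeFinish onAssistantMessage { true })
    edge(callLLM forwardTo runTool onToolCall { true })
    edge(runTool forwardTo sendToolResult)
    edge(sendToolResult forwardTo nodeFinish onAssistantMessage { true })
}
```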

-     ---
+ ### Context window management

-     Use built-in retries, timeouts, and other error handling mechanisms in your application.
+ To avoid exceeding the LLM context window in long-running interactions, agents can use the
+ [history compression](../history-compression.md) feature.

- </div>
+ ### Manual prompt management
+
+ For complex workflows, you can manage the prompt manually using [LLM sessions](../sessions.md).
+ In an agent strategy or custom node, you can use `llm.writeSession` to access and change the `Prompt` object.
+ This lets you add, remove, or reorder messages as needed.
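
A rough sketch of such a session block inside a custom strategy node; `updatePrompt`, the message builders, and `requestLLM` are assumptions based on the sessions documentation referenced above, not part of this diff:

```kotlin
// Inside a strategy builder (sketch only; API names assumed):
val enrichAndAsk by node<String, String>("enrich-and-ask") { input ->
    llm.writeSession {
        // Append an extra user message to the agent-managed prompt
        updatePrompt {
            user("Additional context: $input")
        }
        // Run the updated prompt and return the assistant's reply
        requestLLM().content
    }
}
```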
