| Configuration | Max attempts | Initial delay | Max delay | Recommended use |
|---|---|---|---|---|
| `RetryConfig.DISABLED` | 1 (no retry) | - | - | Development, testing, and debugging. |
| `RetryConfig.CONSERVATIVE` | 3 | 2s | 30s | Background or scheduled tasks where reliability is more important than speed. |
| `RetryConfig.AGGRESSIVE` | 5 | 500ms | 20s | Critical operations where fast recovery from transient errors is more important than reducing API calls. |
| `RetryConfig.PRODUCTION` | 3 | 1s | 20s | General production use. |
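To see how the initial delay and maximum delay columns interact, here is a standalone sketch of a backoff schedule. It assumes exponential backoff with a doubling factor, which is a common choice but is not stated by the table, so treat the factor as an illustration rather than Koog's documented policy:

```kotlin
import kotlin.math.min

// Hypothetical backoff schedule: the delay doubles per retry, capped at maxDelayMs.
// The doubling factor is an assumption for illustration only.
fun backoffDelays(maxAttempts: Int, initialDelayMs: Long, maxDelayMs: Long): List<Long> =
    (1 until maxAttempts).map { retry ->
        min(initialDelayMs shl (retry - 1), maxDelayMs)
    }

fun main() {
    // CONSERVATIVE-like settings: 3 attempts, 2s initial delay, 30s cap
    println(backoffDelays(3, 2_000, 30_000))   // [2000, 4000]
    // AGGRESSIVE-like settings: 5 attempts, 500ms initial delay, 20s cap
    println(backoffDelays(5, 500, 20_000))     // [500, 1000, 2000, 4000]
}
```

With these settings, the cap only matters for long retry sequences; the presets above never reach it.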
You can use them directly or create custom configurations:
```kotlin
val customClient = RetryingLLMClient(
    baseClient,
    RetryConfig(
        // ...
    )
)
```
<!--- KNIT example-handling-failures-03.kt -->
### Retry error patterns
You can use the following pattern types and combine any number of them:
* `RetryablePattern.Regex`: Matches a regular expression in the error message.
* `RetryablePattern.Custom`: Matches errors using custom logic defined in a lambda function.

If any pattern returns `true`, the error is considered retryable, and the LLM client retries the request.
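As an illustration, the pattern types above might be combined as follows. This is a sketch: the `retryablePatterns` parameter name and the exact constructor signatures are assumptions, so check the `RetryConfig` API reference for the real shapes:

```kotlin
// Sketch only: parameter and constructor names below are assumptions.
val config = RetryConfig(
    retryablePatterns = listOf(
        // Retry when the error message mentions an HTTP 429 or 503 status
        RetryablePattern.Regex(Regex("(429|503)")),
        // Retry when custom logic decides the error is transient
        RetryablePattern.Custom { message -> "temporarily unavailable" in message }
    )
)
```

If any of the listed patterns matches, the request is retried.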
#### Default patterns
```kotlin
val config = RetryConfig(
    // ...
)
```
<!--- KNIT example-handling-failures-04.kt -->
You can also append custom patterns to the default `RetryConfig.DEFAULT_PATTERNS`:
```kotlin
val config = RetryConfig(
    // ... RetryConfig.DEFAULT_PATTERNS plus custom patterns ...
)
```
<!--- KNIT example-handling-failures-05.kt -->
### Streaming with retry
```kotlin
val config = RetryConfig(
    // ...
)
val client = RetryingLLMClient(baseClient, config)
val stream = client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)
```
<!--- KNIT example-handling-failures-06.kt -->
!!! note
    Streaming retries only apply to connection failures that occur before the first token is received.
    Once streaming has started, the retry logic is disabled.
    If an error occurs during streaming, the operation is terminated.
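If you need to react to mid-stream failures yourself, you can attach error handling to the stream. The sketch below assumes `executeStreaming` returns a kotlinx.coroutines `Flow<String>`; verify the actual return type in the API reference:

```kotlin
import kotlinx.coroutines.flow.catch

// Sketch, assuming executeStreaming returns Flow<String>.
suspend fun printStreamSafely() {
    client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)
        .catch { e -> println("Stream failed after it started: ${e.message}") }
        .collect { chunk -> print(chunk) }
}
```

The `catch` operator only sees errors raised after streaming has begun; connection failures before the first token are handled by the retry logic described above.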
### Retry with prompt executors
```kotlin
val multiExecutor = MultiLLMPromptExecutor(
    // ...
)
```
<!--- KNIT example-handling-failures-07.kt -->
## Timeout configuration
All LLM clients support timeout configuration to prevent hanging requests.
You can specify timeout values for network connections when creating the client using
the [`ConnectionTimeoutConfig`](https://api.koog.ai/prompt/prompt-executor/prompt-executor-clients/ai.koog.prompt.executor.clients/-connection-timeout-config/index.html) class.

`ConnectionTimeoutConfig` has the following properties:
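As an illustrative sketch only, a timeout configuration might look like the following. The property names `connectTimeoutMillis` and `requestTimeoutMillis` are assumptions here, not confirmed by this page; consult the linked API reference for the actual property names and how the configuration is passed to a client:

```kotlin
// Hypothetical property names, for illustration only: check the
// ConnectionTimeoutConfig API reference for the real signature.
val timeouts = ConnectionTimeoutConfig(
    connectTimeoutMillis = 5_000,   // give up if a connection cannot be established in 5s
    requestTimeoutMillis = 60_000   // fail requests that take longer than 60s overall
)
```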
Use built-in retries, timeouts, and other error handling mechanisms in your application.

</div>
## Prompts in AI agents
In Koog, AI agents maintain and manage prompts during their lifecycle.
While LLM clients or executors are used for direct prompt execution, agents handle the flow of prompt updates to ensure
the conversation history is relevant and consistent.
The prompt lifecycle in an agent usually includes several stages:

1. Initial prompt setup.
2. Automatic prompt updates.
3. Context window management.
4. Manual prompt management.
### Initial prompt setup
When you [initialize an agent](../getting-started/#create-and-run-an-agent), you define a [system message](prompt-creation/index.md#system-message) that sets the agent's behavior.
An initial [user message](prompt-creation/index.md#user-messages) is usually provided as input when you call the agent's `run()` method.
For example:
<!--- INCLUDE
import ai.koog.agents.core.agent.AIAgent
-->
```kotlin
fun main() = runBlocking {
    // Create an agent
    val agent = AIAgent(
        promptExecutor = simpleOpenAIExecutor(apiKey),
        systemPrompt = "You are a helpful assistant.",
        llmModel = OpenAIModels.Chat.GPT4o
    )

    // Run the agent
    val result = agent.run("What is Koog?")
}
```
<!--- KNIT example-prompts-02.kt -->
The agent automatically converts the text prompt to the Prompt object and sends it to the prompt executor.
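Conceptually, the single input string ends up in a prompt that also carries the configured system message. A rough sketch of the equivalent prompt built with the prompt DSL (the prompt id is arbitrary, and this conversion detail is an illustration rather than the agent's literal implementation):

```kotlin
// Roughly what the agent sends to the executor (sketch; id is arbitrary).
val equivalentPrompt = prompt("agent-run") {
    system("You are a helpful assistant.")
    user("What is Koog?")
}
```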
### Automatic prompt updates

As the agent runs its strategy, [predefined nodes](../nodes-and-components.md) automatically update the prompt.
For example:

- [`nodeLLMRequest`](../nodes-and-components/#nodellmrequest): Appends the user message and captures the LLM response.
- [`nodeExecuteTool`](../nodes-and-components/#nodeexecutetool): Adds tool execution results to the conversation history.
- [`nodeAppendPrompt`](../nodes-and-components/#nodeappendprompt): Inserts specific messages or instructions into the prompt at any point in the workflow.