Improve Prompts documentation based on feedback (#1302)
## Motivation and Context
These changes were made based on comments on PR #1287.
## Breaking Changes
---
#### Type of the changes
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] Breaking change (fix or feature that would cause existing
functionality to change)
- [x] Documentation update
- [ ] Tests improvement
- [ ] Refactoring
#### Checklist
- [ ] The pull request has a description of the proposed change
- [ ] I read the [Contributing
Guidelines](https://github.com/JetBrains/koog/blob/main/CONTRIBUTING.md)
before opening the pull request
- [ ] The pull request uses **`develop`** as the base branch
- [ ] Tests for the changes have been added
- [ ] All new and existing tests passed
##### Additional steps for pull requests adding a new feature
- [ ] An issue describing the proposed change exists
- [ ] The pull request includes a link to the issue
- [ ] The change was discussed and approved in the issue
- [ ] Docs have been added / updated
+|`RetryConfig.DISABLED`| 1 (no retry) | - | - | Development, testing, and debugging. |
+|`RetryConfig.CONSERVATIVE`| 3 | 2s | 30s | Background or scheduled tasks where reliability is more important than speed. |
+|`RetryConfig.AGGRESSIVE`| 5 | 500ms | 20s | Critical operations where fast recovery from transient errors is more important than reducing API calls. |
+|`RetryConfig.PRODUCTION`| 3 | 1s | 20s | General production use. |
+
You can use them directly or create custom configurations:
@@ -80,7 +96,7 @@ val customClient = RetryingLLMClient(
)
)
```
-<!--- KNIT example-handling-failures-02.kt -->
+<!--- KNIT example-handling-failures-03.kt -->

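Since the diff only shows the tail of that example, here is a minimal sketch of using a preset directly. `RetryingLLMClient`, `RetryConfig`, and the preset names come from the documentation above; the `baseClient` value and any import paths are assumptions that would need to match your setup.

```kotlin
// Minimal sketch, not the exact example from the docs.
// Assumes `baseClient` is an existing LLM client (for example an OpenAI client);
// import paths are omitted and may differ in your Koog version.

// Use a ready-made preset directly:
val productionClient = RetryingLLMClient(baseClient, RetryConfig.PRODUCTION)

// Or pick a preset that matches the workload, e.g. background jobs
// where reliability matters more than latency:
val backgroundClient = RetryingLLMClient(baseClient, RetryConfig.CONSERVATIVE)
```
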
### Retry error patterns
@@ -102,7 +118,7 @@ You can use the following pattern types and combine any number of them:
* `RetryablePattern.Regex`: Matches a regular expression in the error message.
* `RetryablePattern.Custom`: Matches errors using custom logic defined in a lambda function.

-If any pattern returns `true`, the error is considered retryable, and the LLM client can retry the request.
+If any pattern returns `true`, the error is considered retryable, and the LLM client retries the request.

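As an illustration of how these pattern types might be combined, here is a sketch only: the `retryablePatterns` parameter name and the exact pattern constructors are assumptions, since the full `RetryConfig` constructor is not visible in this diff.

```kotlin
// Sketch only: `retryablePatterns` and the pattern constructor shapes are assumed.
val config = RetryConfig(
    retryablePatterns = listOf(
        // Retry when the error message matches a regular expression.
        RetryablePattern.Regex(Regex("rate limit|429")),
        // Retry based on custom logic over the error message.
        RetryablePattern.Custom { message -> "temporarily unavailable" in message },
    )
)
```
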
#### Default patterns
@@ -150,7 +166,7 @@ val config = RetryConfig(
)
)
```
-<!--- KNIT example-handling-failures-03.kt -->
+<!--- KNIT example-handling-failures-04.kt -->

You can also append custom patterns to the default `RetryConfig.DEFAULT_PATTERNS`:
@@ -165,7 +181,7 @@ val config = RetryConfig(
)
)
```
-<!--- KNIT example-handling-failures-04.kt -->
+<!--- KNIT example-handling-failures-05.kt -->

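Because the appended-pattern example is also truncated above, here is a rough sketch of the idea; as before, the parameter name and pattern constructor are assumptions, not taken from the diff.

```kotlin
// Sketch only: parameter name and pattern constructor are assumptions.
val config = RetryConfig(
    retryablePatterns = RetryConfig.DEFAULT_PATTERNS + RetryablePattern.Custom { message ->
        // Treat provider-specific proxy errors as retryable, in addition to the defaults.
        "proxy error" in message
    }
)
```
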
### Streaming with retry
@@ -199,11 +215,12 @@ val config = RetryConfig(
val client = RetryingLLMClient(baseClient, config)
val stream = client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)
```
-<!--- KNIT example-handling-failures-05.kt -->
+<!--- KNIT example-handling-failures-06.kt -->

!!!note
    Streaming retries only apply to connection failures that occur before the first token is received.
-    After streaming has started, any errors will be passed through.
+    Once streaming has started, the retry logic is disabled.
+    If an error occurs during streaming, the operation is terminated.

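For context, a sketch of consuming the stream from the example above inside a coroutine; it assumes that `executeStreaming` returns a Kotlin `Flow` of text chunks, which is not stated explicitly in this diff.

```kotlin
// Sketch only: assumes executeStreaming returns a Flow of text chunks
// and that this code runs inside a suspend function / coroutine scope.
val client = RetryingLLMClient(baseClient, config)
val stream = client.executeStreaming(prompt, OpenAIModels.Chat.GPT4o)

// Retries cover only connection failures before the first token arrives;
// an error mid-stream terminates the operation instead of being retried.
stream.collect { chunk -> print(chunk) }
```
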
### Retry with prompt executors
@@ -250,13 +267,23 @@ val multiExecutor = MultiLLMPromptExecutor(
),
)
```
-<!--- KNIT example-handling-failures-06.kt -->
+<!--- KNIT example-handling-failures-07.kt -->

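The idea of the section above can be sketched as wrapping each provider client in a `RetryingLLMClient` before handing it to `MultiLLMPromptExecutor`. The provider keys and base clients below are assumptions; only the class names and presets come from the documentation in this diff.

```kotlin
// Sketch only: provider keys and base clients (`openAIClient`, `anthropicClient`) are assumptions.
val multiExecutor = MultiLLMPromptExecutor(
    LLMProvider.OpenAI to RetryingLLMClient(openAIClient, RetryConfig.PRODUCTION),
    LLMProvider.Anthropic to RetryingLLMClient(anthropicClient, RetryConfig.CONSERVATIVE),
)
```
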
## Timeout configuration

All LLM clients support timeout configuration to prevent hanging requests.
You can specify timeout values for network connections when creating the client using
-the [`ConnectionTimeoutConfig`](https://api.koog.ai/prompt/prompt-executor/prompt-executor-clients/ai.koog.prompt.executor.clients/-connection-timeout-config/index.html) class:
+the [`ConnectionTimeoutConfig`](https://api.koog.ai/prompt/prompt-executor/prompt-executor-clients/ai.koog.prompt.executor.clients/-connection-timeout-config/index.html) class.
+
+`ConnectionTimeoutConfig` has the following properties:
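The property list itself is cut off in this excerpt. As a hedged sketch of what a timeout configuration might look like (the property names are assumptions, not taken from the diff; check the `ConnectionTimeoutConfig` API reference linked above):

```kotlin
// Sketch only: property names are assumptions; see the ConnectionTimeoutConfig API docs.
val timeoutConfig = ConnectionTimeoutConfig(
    connectTimeoutMillis = 5_000,    // assumed: max time to establish a connection
    requestTimeoutMillis = 60_000,   // assumed: max time for the whole request
)
```
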