
fix: properly pass BedrockLLMClient timeout setting to `BedrockRunt… #890

Triggered via push on December 16, 2025 10:31
Status: Failure
Total duration: 10m 28s
Artifacts: 9

heavy-tests.yml

on: push
Matrix: integration-tests

Annotations

23 errors
ModelCapabilitiesIntegrationTest.[34] LLModel(provider=OpenAI, id=gpt-5.1-codex, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Responses], contextLength=400000, maxOutputTokens=128000), MultipleChoices[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/capabilities/ModelCapabilitiesIntegrationTest.kt#L134
ai.koog.prompt.executor.clients.LLMClientException: Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4***4 Error body: { "error": { "message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?", "type": "invalid_request_error", "param": "model", "code": null } }
ModelCapabilitiesIntegrationTest.[33] LLModel(provider=OpenAI, id=gpt-5.1-codex, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Responses], contextLength=400000, maxOutputTokens=128000), Document[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/capabilities/ModelCapabilitiesIntegrationTest.kt#L1
ai.koog.http.client.KoogHttpClientException: Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Missing required parameter: 'input[1].content[***]'.", "type": "invalid_request_error", "param": "input[1].content[***]", "code": "missing_required_parameter" } }
capabilities-tests
Process completed with exit code 1.
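
Note: both capabilities-tests failures above come from how the request for gpt-5.1-codex is built. The model advertises the Responses capability, yet the first error shows the call still went to /v1/chat/completions, which codex-family models reject. Below is a minimal sketch of the kind of routing check a client could apply; the types are hypothetical stand-ins that only mirror the capability names visible in the log, not Koog's actual LLModel/LLMCapability API.

    // Hypothetical stand-ins mirroring the capability names in the log; not Koog's real API.
    enum class Capability { Completion, Tools, Image, Document, MultipleChoices, Responses }

    data class Model(val id: String, val capabilities: Set<Capability>)

    // Codex-family models such as gpt-5.1-codex reject /v1/chat/completions,
    // so prefer /v1/responses whenever the model advertises the Responses capability.
    fun endpointFor(model: Model): String =
        if (Capability.Responses in model.capabilities) "/v1/responses"
        else "/v1/chat/completions"

    fun main() {
        val codex = Model("gpt-5.1-codex", setOf(Capability.Completion, Capability.Responses))
        println(endpointFor(codex)) // prints /v1/responses
    }

The second failure (the missing input[1].content[***] parameter) would then be about how the Responses payload itself is assembled, which this sketch does not cover.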
AIAgentIntegrationTest.[10] LLModel(provider=OpenRouter, id=qwen/qwen3-vl-8b-instruct, capabilities=[Temperature, Speculation, Tools, Completion, Image, Standard, ToolChoice], contextLength=131072, maxOutputTokens=33000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/agent/AIAgentIntegrationTest.kt#L445
kotlinx.serialization.json.internal.JsonDecodingException: Unexpected JSON token at offset 44: Expected EOF after parsing, but had { instead at path: $ JSON input: {"operation": "MULTIPLY", "a": 7, "b": 2} {"milliseconds": 5***}
AIAgentIntegrationTest.[10] LLModel(provider=OpenRouter, id=qwen/qwen3-vl-8b-instruct, capabilities=[Temperature, Speculation, Tools, Completion, Image, Standard, ToolChoice], contextLength=131072, maxOutputTokens=33000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/agent/AIAgentIntegrationTest.kt#L972
org.opentest4j.AssertionFailedError: calculator tool should be called for model LLModel(provider=OpenRouter, id=qwen/qwen3-vl-8b-instruct, capabilities=[Temperature, Speculation, Tools, Completion, Image, Standard, ToolChoice], contextLength=131***72, maxOutputTokens=33***) Unexpected elements from index 2 expected:<["calculator"]> but was:<["calculator", "calculator", "calculator"]>
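
Note: the assertion above fails only because qwen/qwen3-vl-8b-instruct repeats the calculator call three times, not because a wrong tool is used. A sketch of a looser check follows; this is a test-design suggestion, not the project's current assertion, and the captured list is hard-coded here to reproduce the log.

    import kotlin.test.Test
    import kotlin.test.assertTrue

    class CalculatorToolCallCheck {
        @Test
        fun calculatorToolIsCalled() {
            // Hard-coded to match the log; in the real test this list is captured from the agent run.
            val calledTools = listOf("calculator", "calculator", "calculator")

            // Tolerate repeated calls: require at least one call, and no other tool.
            assertTrue(calledTools.isNotEmpty(), "calculator tool should be called")
            assertTrue(calledTools.all { it == "calculator" }, "only the calculator tool should be called")
        }
    }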
AIAgentIntegrationTest.[10] LLModel(provider=OpenRouter, id=qwen/qwen3-vl-8b-instruct, capabilities=[Temperature, Speculation, Tools, Completion, Image, Standard, ToolChoice], contextLength=131072, maxOutputTokens=33000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/agent/AIAgentIntegrationTest.kt#L451
kotlinx.serialization.json.internal.JsonDecodingException: Unexpected JSON token at offset 44: Expected EOF after parsing, but had { instead at path: $ JSON input: {"operation": "MULTIPLY", "a": 7, "b": 2} {"milliseconds": 5***}
AIAgentIntegrationTest.[10] LLModel(provider=OpenRouter, id=qwen/qwen3-vl-8b-instruct, capabilities=[Temperature, Speculation, Tools, Completion, Image, Standard, ToolChoice], contextLength=131072, maxOutputTokens=33000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/agent/AIAgentIntegrationTest.kt#L464
kotlinx.serialization.json.internal.JsonDecodingException: Unexpected JSON token at offset 44: Expected EOF after parsing, but had { instead at path: $ JSON input: {"operation": "MULTIPLY", "a": 7, "b": 2} {"milliseconds": 5***}
agent-tests
Process completed with exit code 1.
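
Note: the JsonDecodingException failures above all have the same shape: the model returns two tool-argument objects back to back ({"operation": ...} followed by {"milliseconds": ...}), and a plain decodeFromString stops at the first closing brace and then complains about the extra {. One way to tolerate that is kotlinx.serialization's whitespace-separated stream decoding, sketched below; the value 500 stands in for the masked number in the log, and how Koog actually parses tool arguments may differ.

    @file:OptIn(kotlinx.serialization.ExperimentalSerializationApi::class)

    import kotlinx.serialization.json.DecodeSequenceMode
    import kotlinx.serialization.json.Json
    import kotlinx.serialization.json.JsonObject
    import kotlinx.serialization.json.decodeToSequence

    fun main() {
        // Two concatenated argument objects, as in the failing runs; 500 stands in for the masked value.
        val raw = """{"operation": "MULTIPLY", "a": 7, "b": 2} {"milliseconds": 500}"""

        // WHITESPACE_SEPARATED mode reads one JSON value after another instead of
        // requiring EOF after the first object, so each argument object comes out separately.
        val arguments = Json.decodeToSequence<JsonObject>(
            raw.byteInputStream(),
            DecodeSequenceMode.WHITESPACE_SEPARATED
        ).toList()

        println(arguments.first()) // {"operation":"MULTIPLY","a":7,"b":2}
        println(arguments.size)    // 2
    }

An alternative that avoids the experimental stream API is to cut the raw string at the end of the first top-level object before decoding.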
MultipleLLMPromptExecutorIntegrationTest.[15] LARGE_IMAGE, LLModel(provider=OpenAI, id=gpt-5.2, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Completions, Responses], contextLength=400000, maxOutputTokens=128000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/executor/MultipleLLMPromptExecutorIntegrationTest.kt#L89
org.opentest4j.AssertionFailedError: "Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } } " should include substring "image exceeds"
Match[***]: part of slice with indexes [***..4] matched actual[152..156]
Line[***] ="Error from client: OpenAILLMClient"
Line[1] ="Error from client: OpenAILLMClient"
Line[2] ="Status code: 4***"
Line[3] ="Error body:"
Line[4] ="{"
Line[5] =" "error": {"
Line[6] =" "message": "Invalid content type. image_url is only supported by certain models.","
Match[***]= --------------------------------------+++++-------------------------------------------
Line[7] =" "type": "invalid_request_error","
Line[8] =" "param": "messages.[1].content.[1].type","
Line[9] =" "code": null"
Line[1***] =" }"
Line[11] ="}"
expected:<image exceeds> but was:<Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } } >
MultipleLLMPromptExecutorIntegrationTest.[6] BASIC_JPG, LLModel(provider=OpenAI, id=gpt-5.2, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Completions, Responses], contextLength=400000, maxOutputTokens=128000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/executor/MultipleLLMPromptExecutorIntegrationTest.kt#L89
ai.koog.prompt.executor.clients.LLMClientException: Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } }
multiple-llm-executor-tests
Process completed with exit code 1.
SingleLLMPromptExecutorIntegrationTest.[15] LARGE_IMAGE, LLModel(provider=OpenAI, id=gpt-5.2, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Completions, Responses], contextLength=400000, maxOutputTokens=128000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/executor/SingleLLMPromptExecutorIntegrationTest.kt#L209
org.opentest4j.AssertionFailedError: "Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } } " should include substring "image exceeds"
Match[***]: part of slice with indexes [***..4] matched actual[152..156]
Line[***] ="Error from client: OpenAILLMClient"
Line[1] ="Error from client: OpenAILLMClient"
Line[2] ="Status code: 4***"
Line[3] ="Error body:"
Line[4] ="{"
Line[5] =" "error": {"
Line[6] =" "message": "Invalid content type. image_url is only supported by certain models.","
Match[***]= --------------------------------------+++++-------------------------------------------
Line[7] =" "type": "invalid_request_error","
Line[8] =" "param": "messages.[1].content.[1].type","
Line[9] =" "code": null"
Line[1***] =" }"
Line[11] ="}"
expected:<image exceeds> but was:<Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } } >
SingleLLMPromptExecutorIntegrationTest.[6] BASIC_JPG, LLModel(provider=OpenAI, id=gpt-5.2, capabilities=[Completion, Temperature, Basic, Standard, Speculation, Tools, ToolChoice, Image, Document, MultipleChoices, Completions, Responses], contextLength=400000, maxOutputTokens=128000)[jvm]: integration-tests/src/jvmTest/kotlin/ai/koog/integration/tests/executor/SingleLLMPromptExecutorIntegrationTest.kt#L209
ai.koog.prompt.executor.clients.LLMClientException: Error from client: OpenAILLMClient Error from client: OpenAILLMClient Status code: 4*** Error body: { "error": { "message": "Invalid content type. image_url is only supported by certain models.", "type": "invalid_request_error", "param": "messages.[1].content.[1].type", "code": null } }
single-llm-executor-tests
Process completed with exit code 1.
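
Note: the LARGE_IMAGE cases above expect a local "image exceeds ..." validation error, but the request reaches OpenAI and comes back as the provider's own 4xx instead. A sketch of the kind of pre-flight size check that would produce the message the tests look for; the 20 MB limit is an assumption, since the real threshold is not visible in this log.

    import java.io.File

    // Assumed limit for illustration only; the actual threshold the tests expect is not shown in the log.
    private const val MAX_IMAGE_BYTES: Long = 20L * 1024 * 1024

    // Reject oversized attachments before the request is sent, so the caller sees an
    // "image exceeds ..." message instead of a provider-side invalid_request_error.
    fun validateImageSize(image: File) {
        val size = image.length()
        require(size <= MAX_IMAGE_BYTES) {
            "image exceeds the maximum allowed size: $size bytes (limit $MAX_IMAGE_BYTES bytes)"
        }
    }

The BASIC_JPG failures for gpt-5.2 are a separate issue (the provider rejecting image_url content for that model) that a size check alone would not address.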

Artifacts

Produced during runtime
Name                                               Size    Digest
reports-ubuntu-latest-agent-tests                  335 KB  sha256:543f314627a730b5505ee5c5063185449905fc4c6da5f59ec312720a1622a956
reports-ubuntu-latest-anthropic-schema-test        287 KB  sha256:c28a1a32655568231d2b0f2fce59ab4a957b524b556f2f710eff8d16fbedfc5f
reports-ubuntu-latest-bedrock-credentials-test     284 KB  sha256:ae6b0580afe2b86815ebb2ab4a9867198d82abf2ddc8c81c6c46142560f0ce55
reports-ubuntu-latest-bedrock-tests                284 KB  sha256:0baf721df12e454b51861a4213f885949e65e429af3d0d0f85fb2d3a3668498e
reports-ubuntu-latest-capabilities-tests           295 KB  sha256:85159c954d25dded4d04d506a724496fb21cd2eab9550c647005f7bdc321d7a2
reports-ubuntu-latest-embeddings-test              284 KB  sha256:dc4fea081c8b8ca9a954d26147b5a057ee54ca626c35233aa6b8fa769c6ed802
reports-ubuntu-latest-multiple-llm-executor-tests  309 KB  sha256:282951651a97b748bd166e7b01a8d03f714d836abc40ab59422e2afca530c240
reports-ubuntu-latest-other-executor-tests         284 KB  sha256:998a12543ff15e30f4c0efd3979ab8b3e34d1bd3bd1eb0a51d3ffd257ffa0d5a
reports-ubuntu-latest-single-llm-executor-tests    309 KB  sha256:ff065595f725f7dac375bd2a5fc4e0eb4704634e3c90ac173e49b5b9926befab