Commit 942591c

test(agent-sdk): use GAIA_TEST_MODEL env var; set Llama in CI
The Agent SDK integration test hardcoded DEFAULT_MODEL_NAME (now Gemma-4-E4B-it-GGUF), but the GitHub-hosted Windows CI runner only pulls Llama-3.2-3B-Instruct-Hybrid, causing HTTP 422 from the server.

- tests/test_agent_sdk.py: read the model from the GAIA_TEST_MODEL env var, falling back to DEFAULT_MODEL_NAME so local runs still work
- test_agent_sdk.yml: set GAIA_TEST_MODEL=Llama-3.2-3B-Instruct-Hybrid before running the test suite to match the pulled model
1 parent 67e4b6a commit 942591c
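The env-var-with-fallback pattern described in the commit message amounts to a single `os.environ.get` call. A minimal sketch (the `resolve_test_model` helper name and the inlined DEFAULT_MODEL_NAME value are illustrative, not copied from the actual tests/test_agent_sdk.py):

```python
import os

# Stand-in for the constant the real test suite imports; the exact
# value here is an assumption for illustration.
DEFAULT_MODEL_NAME = "Gemma-4-E4B-it-GGUF"


def resolve_test_model() -> str:
    # CI sets GAIA_TEST_MODEL to the model it actually pulled;
    # local runs without the variable fall back to the default.
    return os.environ.get("GAIA_TEST_MODEL", DEFAULT_MODEL_NAME)
```

Because the fallback is the second argument to `os.environ.get`, unsetting the variable restores the old behavior exactly, which is why local runs are unaffected.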

2 files changed

Lines changed: 4 additions & 1 deletion

.github/workflows/test_agent_sdk.yml

Lines changed: 2 additions & 0 deletions
@@ -145,6 +145,8 @@ jobs:
 
 REM Run the comprehensive integration test suite
 set PYTHONIOENCODING=utf-8
+REM Use the model that was pulled above (overrides DEFAULT_MODEL_NAME=Gemma-4-E4B)
+set GAIA_TEST_MODEL=Llama-3.2-3B-Instruct-Hybrid
 python tests\test_agent_sdk.py
 set integration_exit=%ERRORLEVEL%
 

tests/test_agent_sdk.py

Lines changed: 2 additions & 1 deletion
@@ -9,6 +9,7 @@
 These tests require a running Lemonade server and test actual LLM interactions.
 """
 
+import os
 import sys
 import time
 import unittest
@@ -39,7 +40,7 @@ def setUpClass(cls):
         print(f"{'='*60}")
 
         cls.server_url = "http://localhost:13305"
-        cls.model = DEFAULT_MODEL_NAME
+        cls.model = os.environ.get("GAIA_TEST_MODEL", DEFAULT_MODEL_NAME)
         cls.timeout = 30  # seconds
 
         # Verify server is running
