ci(model): validate GLM provider via real API calls and add mock unit… #1472
Jinhaooo wants to merge 1 commit into agentscope-ai:v2_dev from
Conversation
@Jinhaooo Thank you for your contribution to 2.0! However, please note that our active development branch for 2.0 is `v2_dev`. Additionally, regarding the test implementation: would you consider placing it under a new `scripts/smoke_test/` directory? The reason is that tests placed in `tests/` are run in routine CI, where tests that require real API keys and consume tokens are not desirable.
… tests Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
force-pushed from 10fa5d6 to af14646
@DavdGao Thanks for the review! I've rebased onto v2_dev and moved the integration tests and fixtures to scripts/smoke_test/. The mock unit tests remain in tests/ since they don't require API keys or consume any tokens — they're built from captured fixtures and are safe for routine CI runs. Let me know if you'd prefer those moved as well.
AgentScope Version
2.0.0
Description
Background
Partial contribution to #1447. The chat model interfaces have been adjusted in 2.0.0 and need to be validated via real API calls before building mock-based unit tests. This PR covers the GLM (Zhipu AI) provider, which is accessed through `OpenAIChatModel` as an OpenAI-compatible endpoint.

Validation Results (Real API Calls)
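Since GLM is driven through the OpenAI-compatible protocol, the two behaviors under validation can be sketched in plain Python. This is an illustrative sketch, not code from the PR: the helper names, the `extra_body` wire format for `enable_thinking`, and the block-dict shapes are assumptions (AgentScope's real `ThinkingBlock` type is not reproduced here).

```python
def build_chat_kwargs(model, messages, enable_thinking=False):
    # These kwargs would be passed to the OpenAI SDK as
    # client.chat.completions.create(**kwargs). Provider-specific flags
    # such as `enable_thinking` are not part of the OpenAI schema, so
    # they ride along in `extra_body` and are forwarded to the endpoint
    # verbatim (wire format assumed here).
    kwargs = {"model": model, "messages": messages}
    if enable_thinking:
        kwargs["extra_body"] = {"enable_thinking": True}
    return kwargs


def parse_glm_message(message):
    # GLM returns chain-of-thought in a non-standard `reasoning_content`
    # field alongside the usual `content`; the PR maps the former to a
    # ThinkingBlock and the latter to a plain text block. Dict shapes
    # below are placeholders for those block types.
    blocks = []
    if message.get("reasoning_content"):
        blocks.append({"type": "thinking",
                       "thinking": message["reasoning_content"]})
    if message.get("content"):
        blocks.append({"type": "text", "text": message["content"]})
    return blocks
```

For example, `build_chat_kwargs("glm-5", msgs, enable_thinking=True)` produces a request dict whose `extra_body` carries the passthrough flag, and `parse_glm_message` splits a returned message into a thinking block followed by a text block.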
- `enable_thinking` passthrough works; `reasoning_content` parsed as `ThinkingBlock`

Conclusion:
`OpenAIChatModel` works out of the box with GLM. No framework-level issues found. Other GLM models (glm-5, glm-5.1, etc.) share the same OpenAI-compatible protocol and are expected to behave identically.

Changes
- `scripts/smoke_test/test_glm_chat_model.py` — real API validation script (requires `ZAI_API_KEY`)
- `scripts/smoke_test/fixtures/glm_*.json` — captured real API responses as mock data source
- `tests/unit/test_glm_chat_model_mock.py` — mock unit tests for CI (no API key needed)

How to Test
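The fixture-driven mock path from the list above can be sketched as follows. The fixture shape is an assumption modeled on the OpenAI chat-completions schema, not the actual content of the captured `glm_*.json` files, and it is inlined here so the sketch is self-contained.

```python
import json

# Hypothetical stand-in for a captured scripts/smoke_test/fixtures/glm_*.json
# file; the real fixtures are recorded from live calls.
FIXTURE = json.loads("""
{
  "choices": [{"message": {"role": "assistant",
                           "content": "4",
                           "reasoning_content": "2 + 2 = 4"}}],
  "usage": {"prompt_tokens": 12, "completion_tokens": 3}
}
""")


def test_glm_mock_response():
    # No ZAI_API_KEY needed: the test replays the capture instead of
    # hitting the live endpoint, so it is safe for routine CI.
    msg = FIXTURE["choices"][0]["message"]
    assert msg["content"] == "4"
    assert msg["reasoning_content"]  # thinking was present in the capture
    assert FIXTURE["usage"]["prompt_tokens"] > 0


test_glm_mock_response()
```

The real mock tests presumably go further (exercising the provider's parsing code against the fixture), but the pattern is the same: captured response in, assertions on the parsed result out, no network involved.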
Checklist
- `pre-commit run --all-files` command