Summary

The api-proxy should write a `models.json` file at the end of a workflow run (or on-demand) that describes which models were available from each configured provider. This artifact would be uploaded alongside the existing workflow artifacts (`agent`, `firewall-audit-logs`) for observability and debugging.
Motivation
- Debugging model availability issues: When a workflow fails because a requested model is unavailable (e.g., sweagentd#11264), a record of which models were actually available at runtime makes diagnosis trivial.
- Model selection strategy: Upcoming model-selection policy support (#2334) needs a ground-truth list of available models to implement fallback logic.
- Audit trail: Teams can track which models were offered over time, detect regressions in model availability, and validate that expected models are present.
Proposed Behavior
- At startup (after `fetchStartupModels()` completes), write `/var/log/api-proxy/models.json` with the discovered models from each provider.
- Update the file on refresh if models are re-fetched during the run.
- The existing `api-proxy-logs` volume mount already maps `/var/log/api-proxy/` to the host, so the file is automatically available for artifact upload.
Proposed Schema
```json
{
  "timestamp": "2026-05-01T03:00:00Z",
  "providers": {
    "openai": {
      "configured": true,
      "models": ["gpt-4.1", "gpt-4.1-mini", "o3", "o4-mini", ...],
      "target": "api.openai.com"
    },
    "anthropic": { "configured": false, "models": null, "target": null },
    "copilot": {
      "configured": true,
      "models": ["gpt-4.1", "claude-sonnet-4", ...],
      "target": "api.githubcopilot.com"
    },
    "gemini": { "configured": false, "models": null, "target": null },
    "opencode": { "configured": false, "models": null, "target": null }
  },
  "model_aliases": {
    "fast": ["gpt-4.1-mini"],
    "smart": ["o3"]
  }
}
```

Implementation Notes
- Reuse the existing `cachedModels` object and `reflectEndpoints()` function in `server.js`
- Write the file once `fetchStartupModels()` resolves (line ~1553 in `server.js`)
- Write to the `/var/log/api-proxy/` directory, which is volume-mounted to `${workDir}/api-proxy-logs`
- Include `model_aliases` from `AWF_MODEL_ALIASES` if configured
- The existing `firewall-audit-logs` artifact could include this file, or a separate `models` artifact could be defined

Files to Change
`containers/api-proxy/server.js` — Write `models.json` after the model fetch completes