feat(api-proxy): add /reflect endpoint for dynamic provider and model discovery#2253

Merged

lpcox merged 4 commits into main from copilot/add-reflection-endpoint on Apr 28, 2026

Conversation

Contributor

Copilot AI commented Apr 28, 2026

Agent harnesses have no way to programmatically discover which LLM providers are configured in the api-proxy sidecar or which models each exposes, forcing them to hardcode assumptions about provider availability.

Changes

New GET /reflect management endpoint (port 10000)
Returns all five proxy endpoints with their configured status, base URL, port, models_url, and a cached model list populated at startup.

{
  "endpoints": [
    { "provider": "openai",    "port": 10000, "base_url": "http://api-proxy:10000", "configured": true,  "models": ["gpt-4o", "o1", "o3-mini"], "models_url": "http://api-proxy:10000/v1/models" },
    { "provider": "anthropic", "port": 10001, "base_url": "http://api-proxy:10001", "configured": false, "models": null,                        "models_url": "http://api-proxy:10001/v1/models" },
    { "provider": "copilot",   "port": 10002, "base_url": "http://api-proxy:10002", "configured": true,  "models": ["claude-3.5-sonnet","gpt-4o"],"models_url": "http://api-proxy:10002/models"   },
    { "provider": "gemini",    "port": 10003, "base_url": "http://api-proxy:10003", "configured": false, "models": null,                        "models_url": "http://api-proxy:10003/v1beta/models" },
    { "provider": "opencode",  "port": 10004, "base_url": "http://api-proxy:10004", "configured": true,  "models": null,                        "models_url": null }
  ],
  "models_fetch_complete": true
}
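For illustration, a harness could consume this payload as follows. This is a minimal sketch based only on the sample response above; configuredProviders is a hypothetical helper, not part of the proxy.

```javascript
// Sketch of consuming the /reflect payload shown above. Field names
// (endpoints, configured, base_url, models) come from the sample
// response; configuredProviders itself is a hypothetical helper.
function configuredProviders(reflectPayload) {
  return reflectPayload.endpoints
    .filter((endpoint) => endpoint.configured)
    .map((endpoint) => ({
      provider: endpoint.provider,
      baseUrl: endpoint.base_url,
      models: endpoint.models ?? [], // null until the startup fetch lands
    }));
}
```

A harness would typically also check models_fetch_complete before treating an empty model list as authoritative.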

Startup model fetching (fetchStartupModels)

  • Runs concurrently with validateApiKeys once all listeners are ready
  • Fetches /v1/models (OpenAI, Anthropic), /models (Copilot), /v1beta/models (Gemini) through Squid
  • Normalises both OpenAI-style {data:[{id}]} and Gemini-style {models:[{name:"models/..."}]} responses via extractModelIds
  • Results cached in cachedModels; models_fetch_complete flag signals readiness
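The normalisation step can be sketched roughly like this. It is an assumed reconstruction from the description above, not the actual server.js code:

```javascript
// Hedged sketch of the extractModelIds normalisation described above:
// accept both the OpenAI-style {data:[{id}]} shape and the Gemini-style
// {models:[{name:"models/..."}]} shape, returning a sorted string array.
const GEMINI_MODEL_NAME_PREFIX = 'models/';

function extractModelIds(json) {
  if (!json || typeof json !== 'object') return null;
  if (Array.isArray(json.data)) {
    // OpenAI / Anthropic / Copilot shape: [{ id: "gpt-4o" }, ...]
    return json.data.map((m) => m && m.id).filter(Boolean).sort();
  }
  if (Array.isArray(json.models)) {
    // Gemini shape: [{ name: "models/gemini-..." }, ...] — strip the prefix
    return json.models
      .map((m) => m && m.name)
      .filter(Boolean)
      .map((name) => name.startsWith(GEMINI_MODEL_NAME_PREFIX)
        ? name.slice(GEMINI_MODEL_NAME_PREFIX.length)
        : name)
      .sort();
  }
  return null;
}
```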

fetchJson helper

  • Mirrors httpProbe but returns parsed JSON instead of status code
  • Routes HTTPS through proxyAgent (Squid) consistent with all other upstream calls
  • Emits debug-level log events on network errors and timeouts for operator diagnostics
  • Returns null on any failure — model fetch errors never block proxy startup

Copilot AI linked an issue Apr 28, 2026 that may be closed by this pull request
Copilot AI added 2 commits April 28, 2026 02:09
Add GET /reflect on port 10000 (management port) that returns the list
of configured API proxy endpoints along with the models supported by
each endpoint.

- fetchJson: helper to fetch and parse JSON responses from provider APIs
- extractModelIds: normalise OpenAI/Anthropic/Copilot {data:[{id}]} and
  Gemini {models:[{name}]} formats into sorted string arrays
- cachedModels / resetModelCacheState: in-memory model cache populated
  at startup, with reset helper for test isolation
- fetchStartupModels: fetches model lists from all configured providers
  concurrently at startup (alongside validateApiKeys)
- reflectEndpoints: builds the reflection payload with per-endpoint
  configured status, base_url, port, models_url and cached models
- handleManagementEndpoint: extended to serve GET /reflect
- onListenerReady: triggers fetchStartupModels in addition to validateApiKeys
- All new functions exported and covered by unit tests (306 passing)

Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/36f4f63e-88fd-493f-a600-7fab58452dd6
- fetchJson: add debug-level logging for network errors and timeouts to
  aid operator diagnostics during model discovery
- extractModelIds: extract GEMINI_MODEL_NAME_PREFIX constant and use
  startsWith/slice for prefix stripping (clearer than regex)
- Add test for Gemini model names without the models/ prefix

Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/36f4f63e-88fd-493f-a600-7fab58452dd6
Copilot AI changed the title [WIP] Add reflection endpoint for querying available endpoints and models feat(api-proxy): add /reflect endpoint for dynamic provider and model discovery Apr 28, 2026
Copilot AI requested a review from lpcox April 28, 2026 02:13
@lpcox lpcox marked this pull request as ready for review April 28, 2026 02:52
@lpcox lpcox requested a review from Mossaka as a code owner April 28, 2026 02:52
Copilot AI review requested due to automatic review settings April 28, 2026 02:52
@github-actions
Contributor

github-actions Bot commented Apr 28, 2026

✅ Coverage Check Passed

Overall Coverage

Metric      Base     PR       Delta
Lines       86.02%   86.10%   📈 +0.08%
Statements  86.02%   86.10%   📈 +0.08%
Functions   88.04%   88.04%   ➡️ +0.00%
Branches    80.20%   80.24%   📈 +0.04%

📁 Per-file Coverage Changes (1 file)

File                    Lines (Before → After)    Statements (Before → After)
src/docker-manager.ts   87.2% → 87.5% (+0.29%)    86.8% → 87.1% (+0.28%)

Coverage comparison generated by scripts/ci/compare-coverage.ts


Contributor

Copilot AI left a comment


Pull request overview

Adds an api-proxy management reflection endpoint and startup model discovery so agent harnesses can programmatically determine which providers are configured and which models are available.

Changes:

  • Introduces GET /reflect on the management port (10000) returning provider endpoint metadata plus cached model lists and a models_fetch_complete readiness flag.
  • Adds startup model fetching (fetchStartupModels) and response normalization (extractModelIds), using a new fetchJson helper to retrieve and parse model lists.
  • Extends Jest coverage for the new helper/model-fetch/reflect logic.
Summary per file:

  • containers/api-proxy/server.js — implements fetchJson, startup model caching, and reflectEndpoints(), and wires /reflect into the management endpoints and startup flow.
  • containers/api-proxy/server.test.js — adds unit tests for fetchJson, extractModelIds, fetchStartupModels, and reflectEndpoints.

Copilot's findings


  • Files reviewed: 2/2 changed files
  • Comments generated: 2

Comment on lines +1185 to +1204
const req = mod.request(reqOpts, (res) => {
  if (res.statusCode < 200 || res.statusCode >= 300) {
    res.resume();
    resolveOnce(null);
    return;
  }
  const chunks = [];
  res.on('data', (chunk) => chunks.push(chunk));
  res.on('end', () => {
    try {
      resolveOnce(JSON.parse(Buffer.concat(chunks).toString()));
    } catch {
      resolveOnce(null);
    }
  });
  res.on('error', (err) => {
    logRequest('debug', 'fetch_json_error', { url: sanitizeForLog(url), error: String(err && err.message ? err.message : err) });
    resolveOnce(null);
  });
});
Copilot AI Apr 28, 2026


fetchJson() can hang indefinitely if the upstream response stream closes/aborts before emitting end or error (e.g., connection drop mid-body). Unlike httpProbe(), it doesn't handle res.close/res.aborted, so the Promise may never resolve and modelFetchComplete may never flip to true. Consider adding handlers for res.on('aborted'...) and/or res.on('close'...) that resolveOnce(null) (and optionally req.destroy()) to guarantee settlement.

Comment thread: containers/api-proxy/server.js (Outdated)
Comment on lines +1323 to +1337
// Use COPILOT_GITHUB_TOKEN for /models if available (COPILOT_API_KEY is not accepted)
const copilotToken = copilotGithubToken || copilotAuthToken;
if (copilotToken) {
  fetches.push(
    fetchJson(`https://${copilotTarget}/models`, {
      method: 'GET',
      headers: {
        'Authorization': `Bearer ${copilotToken}`,
        'Copilot-Integration-Id': copilotIntegrationId,
      },
    }, TIMEOUT_MS).then((json) => {
      cachedModels.copilot = extractModelIds(json);
    })
  );
}

Copilot AI Apr 28, 2026


fetchStartupModels() attempts to call the Copilot /models endpoint whenever copilotAuthToken is present, but this token can be derived from COPILOT_API_KEY (BYOK). Elsewhere in this file the /models endpoint is explicitly documented as requiring COPILOT_GITHUB_TOKEN and key validation skips BYOK-only mode; for consistency and to avoid a guaranteed-unhelpful request, gate the models fetch on copilotGithubToken being set (or otherwise explicitly detect when the auth token is a GitHub token).
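The suggested gate can be sketched as a small predicate (shouldFetchCopilotModels is a hypothetical name; the real change would live inside fetchStartupModels):

```javascript
// Decide whether to attempt the Copilot /models fetch. Per the review
// comment above, the endpoint only accepts a GitHub OAuth token, so a
// BYOK key (COPILOT_API_KEY) alone would send a guaranteed-unhelpful
// request. Hypothetical helper name, assumed parameter names.
function shouldFetchCopilotModels({ copilotGithubToken, copilotApiKey }) {
  void copilotApiKey; // BYOK key deliberately ignored: no probe endpoint
  return Boolean(copilotGithubToken);
}
```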

@lpcox
Collaborator

lpcox commented Apr 28, 2026

@copilot address the review feedback

- fetchJson: add res.on('close') handler so the Promise always settles
  when the upstream connection drops mid-body without emitting 'end'
  or 'error', preventing modelFetchComplete from hanging indefinitely
- fetchStartupModels: gate Copilot /models fetch exclusively on
  copilotGithubToken (GitHub OAuth); skip when only COPILOT_API_KEY
  (BYOK) is present — consistent with validateApiKeys behaviour where
  BYOK-only mode is documented as having no probe endpoint
- Tests: add cases for res.close mid-body drop and BYOK-only skip

Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/cd6fa904-5c17-4a7e-bcd6-0f0f44061d0e

Co-authored-by: lpcox <15877973+lpcox@users.noreply.github.com>
Contributor Author

Copilot AI commented Apr 28, 2026

@copilot address the review feedback

Done in e9f207b. Two fixes applied:

  1. fetchJson hang on mid-body connection drop — added res.on('close', () => resolveOnce(null)) so the Promise always settles even if the upstream connection drops before emitting end or error.

  2. Copilot BYOK model fetch — fetchStartupModels now gates the /models request exclusively on copilotGithubToken (GitHub OAuth token). When only COPILOT_API_KEY (BYOK) is present, the fetch is skipped, consistent with validateApiKeys where BYOK-only mode is documented as having no probe endpoint.

Both changes are covered by new tests.

const anthropicKey = ov('anthropicKey', ANTHROPIC_API_KEY);
const anthropicTarget = ov('anthropicTarget', ANTHROPIC_API_TARGET);
const copilotGithubToken = ov('copilotGithubToken', COPILOT_GITHUB_TOKEN);
const copilotAuthToken = ov('copilotAuthToken', COPILOT_AUTH_TOKEN);
@github-actions
Contributor

🔥 Smoke Test: Copilot BYOK — PASS

Test Result
GitHub MCP (list PRs)
GitHub.com connectivity
File write/read ✅ (smoke-test-copilot-byok-25032156276.txt verified)
BYOK inference

Running in BYOK offline mode (COPILOT_OFFLINE=true) via api-proxy → api.githubcopilot.com

PR: "feat(api-proxy): add /reflect endpoint for dynamic provider and model discovery" · Author: @Copilot · Assignees: @lpcox @Copilot

Overall: PASS

🔑 BYOK report filed by Smoke Copilot BYOK

@github-actions
Contributor

Smoke Test Results:
✅ GitHub MCP (2 merged PRs listed)
✅ Playwright (github.com title verified)
✅ File Writing (/tmp/gh-aw/agent/ test file created)
✅ Bash (file verification successful)

Status: PASS

💥 [THE END] — Illustrated by Smoke Claude

@github-actions
Contributor

GitHub MCP: ❌
safeinputs-gh PR query: ❌
Playwright GitHub title: ✅
Tavily search: ❌
File write/read: ✅
Bash cat verify: ✅
Discussion comment: ✅
Build AWF: ✅
Overall: FAIL

Warning

Firewall blocked 1 domain

The following domain was blocked by the firewall during workflow execution:

  • registry.npmjs.org

To allow these domains, add them to the network.allowed list in your workflow frontmatter:

network:
  allowed:
    - defaults
    - "registry.npmjs.org"

See Network Configuration for more information.

🔮 The oracle has spoken through Smoke Codex

@github-actions
Contributor

Chroot Version Comparison Results

Runtime   Host Version     Chroot Version   Match?
Python    Python 3.12.13   Python 3.12.3    ❌ NO
Node.js   v24.14.1         v20.20.2         ❌ NO
Go        go1.22.12        go1.22.12        ✅ YES

Overall: ❌ Not all tests passed — Python and Node.js versions differ between host and chroot.

Tested by Smoke Chroot

@github-actions
Contributor

🏗️ Build Test Suite Results

Ecosystem Project Build/Install Tests Status
Bun elysia 1/1 passed ✅ PASS
Bun hono 1/1 passed ✅ PASS
C++ fmt N/A ✅ PASS
C++ json N/A ✅ PASS
Deno oak N/A 1/1 passed ✅ PASS
Deno std N/A 1/1 passed ✅ PASS
.NET hello-world N/A ✅ PASS
.NET json-parse N/A ✅ PASS
Go color 1/1 passed ✅ PASS
Go env 1/1 passed ✅ PASS
Go uuid 1/1 passed ✅ PASS
Java gson 1/1 passed ✅ PASS
Java caffeine 1/1 passed ✅ PASS
Node.js clsx All passed ✅ PASS
Node.js execa All passed ✅ PASS
Node.js p-limit All passed ✅ PASS
Rust fd 1/1 passed ✅ PASS
Rust zoxide 1/1 passed ✅ PASS

Overall: 8/8 ecosystems passed — ✅ PASS

Note: Java required mvn -Dmaven.repo.local=/tmp/... workaround due to permission restrictions on ~/.m2/repository in the sandbox environment.

Generated by Build Test Suite for issue #2253

@github-actions
Contributor

Smoke Test Results

  • Redis PING: ❌ (timeout — no response from host.docker.internal:6379)
  • PostgreSQL pg_isready: ❌ (no response from host.docker.internal:5432)
  • PostgreSQL SELECT 1: ❌ (skipped — host unreachable)

Overall: FAIL — Service containers are not reachable from this environment.

🔌 Service connectivity validated by Smoke Services

@github-actions
Contributor

🔬 Smoke Test Results

Test                           Status
GitHub MCP connectivity        ✅ PASS
GitHub.com HTTP connectivity   ❌ FAIL (template vars unresolved)
File write/read                ❌ FAIL (template vars unresolved)

Overall: FAIL — pre-step outputs (SMOKE_HTTP_CODE, SMOKE_FILE_PATH, SMOKE_FILE_CONTENT, SMOKE_PR_DATA) were not substituted before agent invocation.

PR: feat(api-proxy): add /reflect endpoint for dynamic provider and model discovery
Author: @Copilot | Assignees: @lpcox, @Copilot

📰 BREAKING: Report filed by Smoke Copilot

@lpcox lpcox merged commit 9b2e0b8 into main Apr 28, 2026
65 of 72 checks passed
@lpcox lpcox deleted the copilot/add-reflection-endpoint branch April 28, 2026 04:29

Development

Successfully merging this pull request may close these issues.

Reflection endpoint

4 participants