Releases: link-assistant/agent

[js] 0.16.11

25 Feb 10:36

feat: log HTTP response body in verbose mode for debugging provider failures (#204)

When --verbose is enabled, the raw HTTP response body from LLM providers is now
also logged. For streaming (SSE) responses, the stream is tee'd so the AI SDK receives
the full response while a preview (up to 4000 chars) is logged asynchronously. For
non-streaming responses, the body is buffered, logged, and the Response is reconstructed
transparently.
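The tee approach can be sketched roughly as follows (the function name and log format are illustrative, not the actual implementation; it relies on the standard `ReadableStream.tee()` from the Web Streams API):

```typescript
// Sketch: tee a streaming Response so the SDK receives the full body
// while a bounded preview is logged asynchronously.
async function withBodyLogging(response: Response, maxChars = 4000): Promise<Response> {
  if (!response.body) return response;
  const [forSdk, forLog] = response.body.tee();
  // Consume the logging branch in the background so the SDK stream is not delayed.
  void (async () => {
    const reader = forLog.getReader();
    const decoder = new TextDecoder();
    let preview = '';
    while (preview.length < maxChars) {
      const { done, value } = await reader.read();
      if (done) break;
      preview += decoder.decode(value, { stream: true });
    }
    await reader.cancel();
    console.error(`[verbose] response body preview: ${preview.slice(0, maxChars)}`);
  })();
  // Reconstruct a Response around the untouched branch of the tee.
  return new Response(forSdk, {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers,
  });
}
```

Because `tee()` buffers internally, the SDK branch sees exactly the bytes the provider sent, even if the logging branch cancels early.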

This provides the missing visibility needed to diagnose issues like empty responses,
malformed SSE events, or error messages from providers like opencode/kimi-k2.5-free.

Related Pull Request: #205

[js] 0.16.9

21 Feb 17:53

fix: improve error serialization and verbose debug output for model resolution

  • Added cyclic-reference-safe JSON serialization for all error output
  • Improved global error handlers with guaranteed JSON output and last-resort fallback
  • Added model resolution verbose logging for debugging
  • Restored opencode/kimi-k2.5-free as default model (confirmed available on models.dev)
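Cyclic-reference-safe serialization is typically done with a `JSON.stringify` replacer that tracks visited objects; a minimal sketch (the helper name is illustrative, not the actual API):

```typescript
// Sketch: JSON.stringify that substitutes a placeholder for repeated
// object references (including cycles) instead of throwing.
function safeStringify(value: unknown, indent = 2): string {
  const seen = new WeakSet<object>();
  return JSON.stringify(
    value,
    (_key, val) => {
      if (typeof val === 'object' && val !== null) {
        if (seen.has(val)) return '[Circular]';
        seen.add(val);
      }
      return val;
    },
    indent,
  );
}
```

This matters for error output because Error objects can carry self-referencing `cause` chains or request/response objects that plain `JSON.stringify` rejects with "Converting circular structure to JSON".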

Related Pull Request: #202

[js] 0.16.10

21 Feb 23:57

Fix model resolution failures and ensure JSON-only output (#200)

  • Try unlisted models instead of throwing ProviderModelNotFoundError
  • Auto-refresh models.dev cache when model not found in catalog
  • Intercept stderr to wrap Bun's plain-text errors in JSON envelope
  • Add unit tests for model fallback and JSON error wrapping
  • Add case study documentation with root cause analysis
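The stderr-wrapping idea can be sketched with a small pure helper (the envelope shape here is an assumption, not the exact one the CLI emits):

```typescript
// Sketch: wrap a plain-text error line in a JSON envelope unless it
// already parses as JSON, so consumers always receive parseable output.
function toJsonEnvelope(line: string): string {
  try {
    JSON.parse(line);
    return line; // already JSON: pass through untouched
  } catch {
    return JSON.stringify({ type: 'error', message: line.trim() });
  }
}
```

In practice a helper like this would be wired in by patching `process.stderr.write` so Bun's plain-text error lines are routed through it before reaching the consumer.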

Related Pull Request: #203

[js] 0.16.8

20 Feb 22:27

Add verbose HTTP logging for debugging provider issues and improve model-not-found error messages to list available models

Related Pull Request: #201

[js] 0.16.7

19 Feb 09:18

fix: export Provider.state and improve zero-token error handling (#198)

Related Pull Request: #199

[js] 0.16.5

18 Feb 20:33

fix: validate model argument and detect zero-token provider failures (#196)

  • Always prefer CLI model argument over yargs default to prevent silent model substitution
  • Throw on invalid provider/model format instead of falling back to defaults
  • Warn when explicit model not found in provider's model list
  • Detect zero-token responses with unknown finish reason as provider failures
  • Add case study documentation for incident analysis
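The zero-token check can be sketched roughly as follows (the field names follow the AI SDK's result shape; the error message and function name are illustrative):

```typescript
// Sketch: treat a response with zero output tokens and an unknown
// finish reason as a provider failure rather than a valid empty reply.
interface GenerationResult {
  text: string;
  finishReason: string; // e.g. 'stop', 'length', 'unknown'
  usage: { outputTokens?: number };
}

function assertProviderProducedOutput(result: GenerationResult): void {
  const outputTokens = result.usage.outputTokens ?? 0;
  if (outputTokens === 0 && result.finishReason === 'unknown') {
    throw new Error(
      'Provider returned zero tokens with an unknown finish reason; treating as a provider failure.',
    );
  }
}
```

The point of requiring both conditions is to avoid false positives: a legitimate short completion still reports nonzero output tokens and a concrete finish reason such as 'stop'.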

Related Pull Request: #197

[js] 0.16.4

16 Feb 18:00

Add safeguard for model argument mismatch detection

Added a safeguard to detect and correct mismatches between yargs-parsed model arguments and actual process.argv values. This addresses issue #192 where --model kilo/glm-5-free was incorrectly substituted with opencode/kimi-k2.5-free due to potential Bun runtime cache issues.

The safeguard:

  • Extracts the model argument directly from process.argv
  • Compares it with the yargs-parsed value
  • Logs a warning and uses the correct CLI value when a mismatch is detected
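The steps above can be sketched as follows (function names and the exact argument-scanning logic are illustrative):

```typescript
// Sketch: recover the --model value straight from process.argv and
// prefer it when it disagrees with the parser's value.
function modelFromArgv(argv: string[]): string | undefined {
  for (let i = 0; i < argv.length; i++) {
    if (argv[i] === '--model') return argv[i + 1];
    if (argv[i].startsWith('--model=')) return argv[i].slice('--model='.length);
  }
  return undefined;
}

function reconcileModel(parsedModel: string, argv: string[]): string {
  const cliModel = modelFromArgv(argv);
  if (cliModel !== undefined && cliModel !== parsedModel) {
    console.warn(
      `model mismatch: parser saw ${parsedModel}, CLI passed ${cliModel}; using CLI value`,
    );
    return cliModel;
  }
  return parsedModel;
}
```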

Related Pull Request: #193

[js] 0.16.3

15 Feb 13:36

Update free models: replace minimax-m2.1-free with minimax-m2.5-free

  • Replace minimax-m2.1-free with minimax-m2.5-free in OpenCode Zen (M2.1 no longer free)
  • Remove glm-4.7-free from free models (no longer available)
  • Update Kilo Gateway free models: add GLM 4.5 Air, DeepSeek R1, update MiniMax to M2.5
  • Update provider priority lists in getSmallModel() function
  • Add FREE_MODELS.md comprehensive documentation

Breaking change: Users relying on opencode/minimax-m2.1-free or opencode/glm-4.7-free
should switch to opencode/minimax-m2.5-free or other free models listed in FREE_MODELS.md.

Related Pull Request: #191

[js] 0.16.2

15 Feb 13:06

fix: extract usage and finish reason from OpenRouter provider metadata

When using OpenRouter-compatible APIs (like Kilo Gateway), the standard AI SDK usage object may be empty while the actual usage data is in providerMetadata.openrouter.usage. This fix adds fallback logic to extract token counts and finish reason from provider metadata.

This enables accurate token counting and cost calculation for all OpenRouter-compatible providers including Kilo Gateway.
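The fallback can be sketched as follows (the metadata path providerMetadata.openrouter.usage comes from the release note above; the field names prompt_tokens/completion_tokens follow the OpenAI-compatible convention and the function name is illustrative):

```typescript
// Sketch: fall back to OpenRouter provider metadata when the standard
// AI SDK usage object is empty.
interface UsageLike { inputTokens?: number; outputTokens?: number }

function resolveUsage(
  usage: UsageLike | undefined,
  providerMetadata?: {
    openrouter?: { usage?: { prompt_tokens?: number; completion_tokens?: number } };
  },
): UsageLike {
  // Trust the standard usage object when it actually carries counts.
  if (usage && (usage.inputTokens ?? 0) > 0) return usage;
  const or = providerMetadata?.openrouter?.usage;
  if (or) {
    return { inputTokens: or.prompt_tokens ?? 0, outputTokens: or.completion_tokens ?? 0 };
  }
  return usage ?? {};
}
```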

Related Pull Request: #188

[js] 0.16.1

15 Feb 12:51

fix: resolve incorrect peer dependency warning for ai@6.0.86

  • Update @openrouter/ai-sdk-provider from ^1.5.4 to ^2.2.3 (supports AI SDK v6)
  • Update @opentui/core from ^0.1.46 to ^0.1.79
  • Update @opentui/solid from ^0.1.46 to ^0.1.79

This fixes the 'warn: incorrect peer dependency "ai@6.0.86"' warning that
appeared during bun install, because @openrouter/ai-sdk-provider@1.x
requires ai@^5.0.0 while we use ai@^6.0.1.

Note: The solid-js peer dependency warning remains due to an upstream
issue in @opentui/solid, which pins an exact solid-js version. This has
been reported at anomalyco/opentui#689.

Fixes #186

Related Pull Request: #189