Releases: link-assistant/agent

[js] 0.18.2

31 Mar 21:13


Release 0.18.2

[js] 0.18.1

31 Mar 09:00


fix: enable dual HTTP logging (global fetch patch + provider-level wrapper) for complete verbose mode coverage (#221)
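The dual approach can be sketched as follows. This is an illustrative sketch, not the actual implementation: the function and label names are assumptions, and the real code logs far more detail.

```javascript
// Hypothetical sketch of dual HTTP logging: patch globalThis.fetch AND
// wrap the fetch handed to a provider SDK, so requests are logged
// whichever path the SDK takes.
function makeLoggingFetch(baseFetch, label, log) {
  return async (url, init) => {
    log(`[${label}] HTTP ${init?.method ?? 'GET'} ${url}`);
    return baseFetch(url, init);
  };
}

const logs = [];
const log = (msg) => logs.push(msg);

// 1. Global patch: covers SDKs that call globalThis.fetch directly.
const originalFetch = globalThis.fetch;
globalThis.fetch = makeLoggingFetch(originalFetch, 'global', log);

// 2. Provider-level wrapper: covers SDKs that accept a custom fetch.
const providerFetch = makeLoggingFetch(originalFetch, 'provider', log);
```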

Related Pull Request: #222



[js] 0.18.0

30 Mar 20:26


Add --compaction-model (default: opencode/gpt-5-nano) and --compaction-safety-margin (default: 15%) CLI options. When the compaction model has a larger context window than the base model, the safety margin is automatically removed, allowing 100% usage of the base model's usable context. This extends effective working context by ~18% for free tier models at zero cost.
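The margin-removal rule can be sketched as follows (the function and parameter names are assumptions, not the actual implementation): the safety margin reserves headroom for the compaction request itself, so it can be dropped whenever the compaction model's context window is strictly larger than the base model's.

```javascript
// Sketch: compute the usable context for the base model.
// A 15% margin is kept only when the compaction model cannot
// comfortably hold more than the base model's full window.
function usableContext(baseWindow, compactionWindow, safetyMargin = 0.15) {
  const margin = compactionWindow > baseWindow ? 0 : safetyMargin;
  return Math.floor(baseWindow * (1 - margin));
}
```

For example, with a 128k base model and a larger compaction window, the full 128,000 tokens become usable instead of 108,800, which is where the ~18% (1 / 0.85) extension comes from.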

Related Pull Request: #220



[js] 0.17.0

30 Mar 19:02


Enable --summarize-session by default and use the same model as --model for session summarization. Add 15% safety margin to compaction overflow detection, context diagnostics to step-finish JSON output, and detailed logging for overflow checks.
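The overflow check with the new margin can be sketched as follows (names are illustrative, not the actual implementation): compaction triggers before the prompt actually reaches the context window, leaving 15% headroom for the response and tool output.

```javascript
// Sketch of compaction overflow detection with a safety margin.
function shouldCompact(promptTokens, contextWindow, safetyMargin = 0.15) {
  const threshold = Math.floor(contextWindow * (1 - safetyMargin));
  return promptTokens >= threshold;
}
```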

Related Pull Request: #218



[js] 0.16.18

25 Mar 12:10


feat: improve verbose HTTP logging reliability (#215)

  • Add diagnostic breadcrumb log ("verbose HTTP logging active") on first HTTP call per provider to confirm the fetch wrapper is in the chain
  • Pass Bun's non-standard verbose: true option to fetch() when verbose mode is enabled, so socket errors can be debugged with detailed connection information
  • Include stack trace and error.cause in HTTP request failed log entries for better debugging of connection failures
  • Add case study documenting the "socket connection closed unexpectedly" error analysis
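The breadcrumb and error-logging behavior can be sketched as follows. This is an assumption-laden sketch, not the actual wrapper: the names are invented, and `verbose: true` is Bun's non-standard fetch option, silently ignored by other runtimes.

```javascript
// Sketch: log a one-time breadcrumb per provider, pass Bun's verbose
// option through, and attach stack/cause details on failure.
function wrapFetch(baseFetch, providerId, isVerbose, log) {
  let breadcrumbLogged = false;
  return async (url, init = {}) => {
    if (isVerbose() && !breadcrumbLogged) {
      breadcrumbLogged = true;
      log(`verbose HTTP logging active for provider ${providerId}`);
    }
    try {
      // verbose: true enables Bun's low-level connection debugging.
      return await baseFetch(url, isVerbose() ? { ...init, verbose: true } : init);
    } catch (error) {
      log(`HTTP request failed: ${error.message}`, {
        stack: error.stack,
        cause: error.cause,
      });
      throw error;
    }
  };
}
```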

Related Pull Request: #216



[js] 0.16.17

11 Mar 20:27


fix: prevent agent process leaks with event loop fixes and ESLint rules (#213)

  • Fix setTimeout/setInterval in retry-fetch.ts, session/retry.ts, and util/timeout.ts to use .unref() so timers don't prevent process exit
  • Change Bun.serve() idleTimeout from 0 (infinite) to 255 (the default) so the server does not keep the event loop alive
  • Fix setTimeout in continuous-mode.js waitForPending to use .unref()
  • Use process.once('SIGINT') instead of process.on('SIGINT') to prevent handler accumulation
  • Fix missing error listener removal in input-queue.js stop()
  • Add eslint-plugin-promise for detecting dangling/floating promises
  • Add no-restricted-syntax ESLint rules to warn on process.on('SIGINT'/'SIGTERM') — prefer process.once()
  • Remove AGENT_PROCESS_LIFETIME_TIMEOUT (agents can run for hours, global timeout is not appropriate)
  • Add --retry-on-rate-limits flag (use --no-retry-on-rate-limits to disable AI API rate limit retries)
  • Move integration tests to tests/integration/ to prevent bulk running; default bun test runs only unit tests
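The core timer fix above can be sketched as follows, using the standard Node/Bun timer API (the helper name is illustrative): an unref()'d timer no longer keeps the event loop alive, so a pending retry backoff cannot block process exit.

```javascript
// Sketch: a sleep helper whose timer will not pin the process open.
function sleep(ms) {
  return new Promise((resolve) => {
    const timer = setTimeout(resolve, ms);
    // Without unref(), a long retry delay would keep the event loop
    // alive even after all real work has finished.
    if (typeof timer.unref === 'function') timer.unref();
  });
}
```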

Related Pull Request: #214



[js] 0.16.16

09 Mar 17:14


fix: HTTP verbose logging and anthropic provider usage fallback (#211)

Two fixes for issues observed when running the Agent CLI with --verbose
mode and the opencode/minimax-m2.5-free model:

  1. HTTP request/response logging not appearing in verbose mode: The
    lazy log callback pattern (log.info(() => ({...}))) passed through
    the log-lazy npm package, adding indirection that could lose output
    when the CLI runs as a subprocess. Changed all 5 HTTP log call sites
    to use direct calls (log.info('msg', data)) since the verbose check
    is already done at the top of the wrapper.
  2. "Provider returned zero tokens with unknown finish reason" error:
    When using @ai-sdk/anthropic SDK with a custom baseURL (opencode
    proxy), the standard AI SDK usage object is empty but
    providerMetadata.anthropic.usage contains valid token data with
    snake_case keys (input_tokens, output_tokens). Added an anthropic
    metadata fallback in getUsage() to extract tokens from this
    metadata, similar to the existing OpenRouter fallback.
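The fallback in item 2 can be sketched as follows. The snake_case field names come from this note; the surrounding getUsage() signature and return shape are assumptions for illustration.

```javascript
// Sketch: prefer the standard AI SDK usage object, then fall back to
// Anthropic's snake_case token counts in providerMetadata.
function getUsage(usage, providerMetadata) {
  if (usage?.inputTokens || usage?.outputTokens) {
    return {
      inputTokens: usage.inputTokens ?? 0,
      outputTokens: usage.outputTokens ?? 0,
    };
  }
  const anthropic = providerMetadata?.anthropic?.usage;
  if (anthropic) {
    return {
      inputTokens: anthropic.input_tokens ?? 0,
      outputTokens: anthropic.output_tokens ?? 0,
    };
  }
  return { inputTokens: 0, outputTokens: 0 };
}
```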

Related Pull Request: #212



[js] 0.16.14

04 Mar 12:41


fix: centralize default model constant and update from kimi-k2.5-free to minimax-m2.5-free (#208)

The yargs default model in index.js, the OAuth fallback check in
model-config.js, and the task tool fallback in task.ts referenced
opencode/kimi-k2.5-free, which was discontinued on the OpenCode Zen
provider. Runs that did not pass --model explicitly (or where the
yargs caching bug #192 caused the CLI argument to be silently dropped)
would attempt to use the removed model and fail with a 401 ModelError.

Additionally, the default model was hardcoded in multiple files, making
future updates error-prone (as demonstrated by this issue).

Changes:

  • js/src/cli/defaults.ts: new file exporting DEFAULT_MODEL,
    DEFAULT_PROVIDER_ID, and DEFAULT_MODEL_ID constants — the single
    source of truth for the default model
  • js/src/index.js: import and use DEFAULT_MODEL constant for the
    --model yargs default
  • js/src/cli/model-config.js: import and use DEFAULT_PROVIDER_ID
    and DEFAULT_MODEL_ID constants in the --use-existing-claude-oauth
    check and error messages
  • js/src/tool/task.ts: import and use DEFAULT_PROVIDER_ID and
    DEFAULT_MODEL_ID constants as the fallback model
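The single-source-of-truth module can be sketched as follows (a plain-JS rendering of the shape described above; the real file is js/src/cli/defaults.ts):

```javascript
// Sketch: one module exports the default model in both split and
// combined form, so every call site derives from the same constants.
const DEFAULT_PROVIDER_ID = 'opencode';
const DEFAULT_MODEL_ID = 'minimax-m2.5-free';
const DEFAULT_MODEL = `${DEFAULT_PROVIDER_ID}/${DEFAULT_MODEL_ID}`;
```

Updating the default model then means editing exactly one file, instead of chasing hardcoded strings through index.js, model-config.js, and task.ts.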

Related Pull Request: #210



[js] 0.16.13

04 Mar 10:27


fix: detect model-not-supported errors from provider response body (#208)

When the OpenCode provider (and similar OpenRouter-compatible proxies) removes
or restricts access to a model, the API returns HTTP 401 with a response body
like {"type":"error","error":{"type":"ModelError","message":"Model X not supported"}}.
Without special handling this looks identical to a real authentication failure,
making the root cause hard to diagnose.

The fix adds SessionProcessor.isModelNotSupportedError() which parses the
response body and detects the ModelError pattern from OpenCode/OpenRouter.
When detected, a dedicated error log entry is emitted that:

  • Clearly labels the error as a model-availability issue, NOT an auth error
  • Includes the providerID, modelID, and full response body
  • Suggests using --model <provider>/<model-id> to specify an alternative
  • Links to the case study for further investigation

The fix also adds 11 unit tests covering nested/flat JSON formats, real auth
errors (should not be flagged), plain-text fallback detection, and edge cases.

Root cause documented in docs/case-studies/issue-208/ with the full
1920-line log from the failing run.
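The detection can be sketched as follows. The response shapes come from this note; the function normally lives on SessionProcessor, and the exact matching rules here are an illustrative approximation.

```javascript
// Sketch: classify a 401 body as a model-availability error rather
// than an auth failure, handling nested JSON, flat JSON, and
// plain-text bodies.
function isModelNotSupportedError(body) {
  try {
    const parsed = JSON.parse(body);
    // Nested OpenCode/OpenRouter shape: {"error":{"type":"ModelError",...}}
    // or the flat shape: {"type":"ModelError",...}
    const err = parsed?.error ?? parsed;
    if (err?.type === 'ModelError') return true;
    if (typeof err?.message === 'string' && /not supported/i.test(err.message)) {
      return true;
    }
  } catch {
    // Plain-text fallback for non-JSON bodies.
    if (typeof body === 'string' && /ModelError|not supported/i.test(body)) {
      return true;
    }
  }
  return false;
}
```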

Related Pull Request: #209



[js] 0.16.12

27 Feb 09:32


fix: check verbose flag at HTTP call time, not SDK creation time (#206)

The verbose HTTP logging wrapper now checks Flag.OPENCODE_VERBOSE when each
HTTP request is made, instead of when the provider SDK is created. Previously,
the wrapper was conditionally installed at SDK creation time using
if (Flag.OPENCODE_VERBOSE), which meant that if the SDK was cached before
the --verbose flag was processed by the CLI middleware, no HTTP logging would
occur for the entire session.

The fix always installs the fetch wrapper but makes it a no-op passthrough
(single boolean check) when verbose mode is disabled, ensuring zero overhead
in normal operation and reliable logging when --verbose is enabled.
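The call-time check can be sketched as follows (the wrapper shape and names are illustrative; in the real code the flag is Flag.OPENCODE_VERBOSE): the wrapper is always installed, but collapses to a single boolean check when verbose is off.

```javascript
// Sketch: install the wrapper unconditionally and evaluate the verbose
// flag on every request, so enabling --verbose after SDK creation
// still produces HTTP logs.
function installFetchWrapper(baseFetch, isVerbose, log) {
  return async (url, init) => {
    if (!isVerbose()) return baseFetch(url, init); // no-op passthrough
    log(`HTTP ${init?.method ?? 'GET'} ${url}`);
    const res = await baseFetch(url, init);
    log(`→ ${res.status}`);
    return res;
  };
}
```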

Related Pull Request: #207

