[Bug]: Static model list missing configuration leading to failure to accept forge.toml #3060

@joshuas99

Description

Bug Description

While adding a static model list entry to .forge.toml, I was unable to get the setting accepted.

The entry matches the format given in the documentation. My configuration was:

[[providers]]
id = "ollama"
api_key_vars = "OLLAMA_API_KEY"
url = "http://127.0.0.1:8000/v1/chat/completions"
response_type = "OpenAI"
auth_methods = ["api_key"]

[[providers.models]]
id = "Qwen3.6-35B-A3b-q3-mlx"
name = "Qwen3.5-35B"
description = "Qwen local reasoning model with advanced problem-solving capabilities"
context_length = 262144
tools_supported = true
supports_parallel_tool_calls = true
supports_reasoning = true
input_modalities = ["text"]

The error thrown was:

Config error: invalid type: sequence, expected a string for key providers[0]models in ../../../../.forge/.forge.toml

Steps to Reproduce

  1. Edit the .forge.toml
  2. Attempt to pin a static model to any provider
  3. Save the file
  4. The error is thrown, OR the tool may silently rewrite the file, stripping the static model entries and falling back to model discovery

Expected Behavior

The static model should be added to the provider's model list, as described in the documentation.

Actual Behavior

The configuration fails to load, or the static model information is stripped from the file.

Forge Version

2.11.3

Operating System & Version

macOS 15.7.4 (24G517)

AI Provider

Other

Model

Qwen3.6-35B-A3b-q3-mlx

Installation Method

npx forgecode@latest

Configuration

[session]
provider_id = "ollama"
model_id = "Qwen3.6-35B-A3b-q3-mlx"

[updates]
frequency = "daily"
auto_update = true

[compact]
retention_window = 6
eviction_window = 0.2
max_tokens = 2000
token_threshold = 100000
message_threshold = 200
on_turn_end = false

[reasoning]
effort = "high"
enabled = true

[[providers]]
id = "ollama"
url = "http://127.0.0.1:8000/v1/chat/completions"
response_type = "OpenAI"
auth_methods = ["api_key"]

[[providers.models]]
id = "Qwen3.6-35B-A3b-q3-mlx"
name = "Qwen3.5-35B"
description = "Qwen local reasoning model with advanced problem-solving capabilities"
context_length = 262144
tools_supported = true
supports_parallel_tool_calls = true
supports_reasoning = true
input_modalities = ["text"]
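One workaround worth trying (a sketch only, not a confirmed fix): TOML allows the same data to be written as an inline-table array instead of `[[providers.models]]` sections, and some deserializers accept one spelling but not the other. Whether Forge's config schema accepts this form is an assumption:

```toml
[[providers]]
id = "ollama"
url = "http://127.0.0.1:8000/v1/chat/completions"
response_type = "OpenAI"
auth_methods = ["api_key"]
# Equivalent inline-table form of [[providers.models]]
models = [
  { id = "Qwen3.6-35B-A3b-q3-mlx", name = "Qwen3.5-35B", context_length = 262144, tools_supported = true },
]
```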

Metadata

Labels

severity: high (Significant impact; core functionality is impaired.)
type: bug (Something isn't working.)
