Merged
Changes from 1 commit
7 changes: 4 additions & 3 deletions docs/configuration.md
@@ -121,17 +121,18 @@ API endpoints expect a Bearer JWT. Tokens are issued by `POST /auth/token` (subj

## LLM

The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`.
The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`. For **Google Gemini** (Google AI Studio), set `provider: gemini`, `model` (e.g. `gemini-2.5-flash`, `gemini-2.5-pro`), and **`gemini_api_key`** or `GEMINI_API_KEY`.

| Option | Env var | Default | Description |
|--------|---------|---------|-------------|
| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`. |
| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`). |
| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`, `gemini`. |
| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`; for **gemini**: `gemini-2.5-flash`, `gemini-2.5-pro`). |
| `llm.mistral_api_key` | `RADIOSHAQ_LLM__MISTRAL_API_KEY` | `null` | Mistral API key (or set `MISTRAL_API_KEY` if your code reads it). |
| `llm.openai_api_key` | `RADIOSHAQ_LLM__OPENAI_API_KEY` | `null` | OpenAI API key. |
| `llm.anthropic_api_key` | `RADIOSHAQ_LLM__ANTHROPIC_API_KEY` | `null` | Anthropic API key. |
| `llm.custom_api_base` | `RADIOSHAQ_LLM__CUSTOM_API_BASE` | `null` | **Custom provider base URL** (e.g. `http://localhost:11434` for Ollama). Passed to LiteLLM. |
| `llm.custom_api_key` | `RADIOSHAQ_LLM__CUSTOM_API_KEY` | `null` | Custom provider API key. |
| `llm.gemini_api_key` | `RADIOSHAQ_LLM__GEMINI_API_KEY` | `null` | **Gemini** API key (Google AI Studio; or set `GEMINI_API_KEY`). |
| `llm.huggingface_api_key` | `RADIOSHAQ_LLM__HUGGINGFACE_API_KEY` | `null` | **Hugging Face** token for [Inference Providers](https://huggingface.co/docs/inference-providers) (or set `HF_TOKEN`). Token needs "Inference Providers" permission. |
| `llm.huggingface_api_base` | `RADIOSHAQ_LLM__HUGGINGFACE_API_BASE` | `null` | Optional; default `https://router.huggingface.co/v1` when provider is `huggingface`. |
| `llm.temperature` | `RADIOSHAQ_LLM__TEMPERATURE` | `0.1` | Sampling temperature (0–2). |
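Put together, a minimal Gemini configuration might look like the sketch below (prefer the env var for the secret; the `null` key and model choice mirror the table above):

```yaml
llm:
  provider: gemini
  model: gemini-2.5-flash
  gemini_api_key: null   # set GEMINI_API_KEY or RADIOSHAQ_LLM__GEMINI_API_KEY instead
  temperature: 0.1
```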
310 changes: 310 additions & 0 deletions docs/plan-gemini-api-support.md

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion docs/reference/.env.example
@@ -59,17 +59,19 @@ POSTGRES_PASSWORD=radioshaq
# RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
# RADIOSHAQ_LLM__CUSTOM_API_BASE=
# RADIOSHAQ_LLM__CUSTOM_API_KEY=
# RADIOSHAQ_LLM__GEMINI_API_KEY= # For provider: gemini (Google AI Studio)
# RADIOSHAQ_LLM__HUGGINGFACE_API_KEY= # For provider: huggingface (Inference Providers)
# RADIOSHAQ_LLM__HUGGINGFACE_API_BASE= # Optional; default https://router.huggingface.co/v1
# RADIOSHAQ_LLM__TEMPERATURE=0.1
# RADIOSHAQ_LLM__MAX_TOKENS=4096
# RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
# RADIOSHAQ_LLM__MAX_RETRIES=3
# RADIOSHAQ_LLM__RETRY_DELAY_SECONDS=1.0
# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN directly
# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN / GEMINI_API_KEY directly
# MISTRAL_API_KEY=
# OPENAI_API_KEY=
# HF_TOKEN= # Hugging Face token with "Inference Providers" permission (when provider is huggingface)
# GEMINI_API_KEY=

# -----------------------------------------------------------------------------
# Memory (per-callsign memory, Hindsight, daily summaries)
3 changes: 2 additions & 1 deletion docs/reference/config.example.yaml
@@ -44,13 +44,14 @@ jwt:
# LLM (set API key in env or here; prefer env for secrets)
# -----------------------------------------------------------------------------
llm:
provider: mistral # mistral | openai | anthropic | custom
provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
model: mistral-large-latest
mistral_api_key: null
openai_api_key: null
anthropic_api_key: null
custom_api_base: null
custom_api_key: null
gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
temperature: 0.1
max_tokens: 4096
timeout_seconds: 60.0
10 changes: 10 additions & 0 deletions radioshaq/.env.example
@@ -59,6 +59,7 @@ POSTGRES_PASSWORD=radioshaq
# RADIOSHAQ_LLM__ANTHROPIC_API_KEY=
# RADIOSHAQ_LLM__CUSTOM_API_BASE=
# RADIOSHAQ_LLM__CUSTOM_API_KEY=
# RADIOSHAQ_LLM__GEMINI_API_KEY=
# RADIOSHAQ_LLM__TEMPERATURE=0.1
# RADIOSHAQ_LLM__MAX_TOKENS=4096
# RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0
@@ -71,6 +72,7 @@ POSTGRES_PASSWORD=radioshaq
# MISTRAL_API_KEY=
# OPENAI_API_KEY=
# HF_TOKEN=
# GEMINI_API_KEY=

# -----------------------------------------------------------------------------
# Memory (per-callsign memory, Hindsight, daily summaries)
@@ -265,3 +267,11 @@ POSTGRES_PASSWORD=radioshaq
# RADIOSHAQ_TTS__KOKORO_SPEED=1.0
# ElevenLabs API key (required when provider=elevenlabs)
# ELEVENLABS_API_KEY=

# -----------------------------------------------------------------------------
# Web UI (Vite) – used when running npm run dev or serving built assets
# -----------------------------------------------------------------------------
# Set in web-interface/.env or project root .env when developing the React UI.
# VITE_RADIOSHAQ_API=http://localhost:8000
# VITE_RADIOSHAQ_TOKEN=
# VITE_GOOGLE_MAPS_API_KEY= # Optional. Enables Map page, Radio field map, Transcripts "View on map". Restrict key by HTTP referrer in Google Cloud Console.
3 changes: 2 additions & 1 deletion radioshaq/config.example.yaml
@@ -44,13 +44,14 @@ jwt:
# LLM (set API key in env or here; prefer env for secrets)
# -----------------------------------------------------------------------------
llm:
provider: mistral # mistral | openai | anthropic | custom | huggingface
provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini
model: mistral-large-latest
mistral_api_key: null
openai_api_key: null
anthropic_api_key: null
custom_api_base: null
custom_api_key: null
gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY
huggingface_api_key: null # For provider: huggingface; or set HF_TOKEN
huggingface_api_base: null # Optional; default https://router.huggingface.co/v1
temperature: 0.1
42 changes: 42 additions & 0 deletions radioshaq/docs/demo-env-profiles.md
@@ -0,0 +1,42 @@
# Demo environment profiles

Summary of **environment variables** for running the Live HackRF + LLM demo suite. For full WSL/HackRF setup and Option C env, see [scripts/demo/demo-hackrf-full.md](../scripts/demo/demo-hackrf-full.md).

**Live demos use real hardware and real LLM:** HackRF RX/TX and LLM providers are not stubbed in the documented demo flows. Set the env below and attach a HackRF; use `--require-hardware` in the relevant demo scripts to fail fast if SDR TX is not configured.

## Agent and API hooks (how demos drive the system)

- **radio_tx (RadioTransmissionAgent):** Invoked via `POST /radio/send-audio` (multipart WAV) and `POST /radio/send-tts` (JSON body with `message`, optional `frequency_hz`, `mode`). Requires `radio.sdr_tx_enabled=true` and `radio.sdr_tx_backend=hackrf` for HackRF; when these are not set or no hardware is attached, the TX agent may still run and report `success: false` (e.g. "Rig manager not configured"). Compliance checks run before TX.
- **radio_rx_audio (RadioAudioReceptionAgent):** No one-off "start monitor" HTTP endpoint. The **voice listener** (server lifespan) starts the agent when `radio.audio_input_enabled` and `radio.voice_listener_enabled` (or `audio_monitoring_enabled`) are true. Demos that need voice RX either run HQ with that config and poll `GET /api/v1/audio/pending` and `GET /transcripts`, or use `POST /messages/from-audio` to simulate inbound voice.
- **radio_rx (RadioReceptionAgent):** Used by the band listener (injection queue consumer) or by tasks submitted via the orchestrator. Demos inject via `POST /inject/message` or `POST /messages/inject-and-store`; the band listener (when enabled) or a process-driven task consumes from the queue.
- **Orchestrator / Judge:** `POST /messages/process` (body: `message` or `text`, optional `channel`, `chat_id`, `sender_id`) runs the REACT loop and routes to agents. Used by run_orchestrator_judge_demo and run_scheduler_demo.
- **WhitelistAgent:** Invoked via `POST /messages/whitelist-request` (JSON or multipart with audio). Orchestrator evaluates and may call the whitelist agent; result in response or completed_tasks.
- **SchedulerAgent:** No direct HTTP endpoint; reached when the orchestrator selects it for a scheduling request (e.g. "Schedule a call for X with Y at Z"). Requires DB with coordination_events for persistence.

## HQ process (`uv run radioshaq run-api`)

- **Mode + JWT:** `RADIOSHAQ_MODE=hq`, `RADIOSHAQ_JWT__SECRET_KEY` (must match receiver `JWT_SECRET`).
- **Receiver uploads:** `RADIOSHAQ_RADIO__RECEIVER_UPLOAD_STORE=true`, `RADIOSHAQ_RADIO__RECEIVER_UPLOAD_INJECT=true`.
- **HackRF SDR TX:** `RADIOSHAQ_RADIO__SDR_TX_ENABLED=true`, `RADIOSHAQ_RADIO__SDR_TX_BACKEND=hackrf`.
- **Message bus consumer:** `RADIOSHAQ_BUS_CONSUMER_ENABLED=1`.
- **LLM:** e.g. `RADIOSHAQ_LLM__PROVIDER=mistral`, `MISTRAL_API_KEY` / `RADIOSHAQ_LLM__MISTRAL_API_KEY`.
- **ASR/TTS:** e.g. `ELEVENLABS_API_KEY`, `RADIOSHAQ_TTS__PROVIDER=elevenlabs`.
- **Voice listener (for voice_rx_audio demos):** `RADIOSHAQ_RADIO__AUDIO_INPUT_ENABLED=true`, `RADIOSHAQ_RADIO__VOICE_LISTENER_ENABLED=true`, `RADIOSHAQ_RADIO__DEFAULT_BAND=2m`.
- **Twilio:** Omit or leave unset for no-Twilio demos; set for Option C with SMS/WhatsApp.
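Put together, a minimal no-Twilio HQ environment might look like this sketch (all values are placeholders to adapt):

```shell
# HQ demo environment sketch (placeholder values)
export RADIOSHAQ_MODE=hq
export RADIOSHAQ_JWT__SECRET_KEY=change-me        # must match receiver JWT_SECRET
export RADIOSHAQ_RADIO__RECEIVER_UPLOAD_STORE=true
export RADIOSHAQ_RADIO__RECEIVER_UPLOAD_INJECT=true
export RADIOSHAQ_RADIO__SDR_TX_ENABLED=true
export RADIOSHAQ_RADIO__SDR_TX_BACKEND=hackrf
export RADIOSHAQ_BUS_CONSUMER_ENABLED=1
export RADIOSHAQ_LLM__PROVIDER=mistral
export MISTRAL_API_KEY=placeholder-key
echo "mode=$RADIOSHAQ_MODE backend=$RADIOSHAQ_RADIO__SDR_TX_BACKEND"
```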

## Remote receiver process (`uv run radioshaq run-receiver`)

- **JWT:** `JWT_SECRET` = same as HQ `RADIOSHAQ_JWT__SECRET_KEY`.
- **Identity:** `STATION_ID=HACKRF-DEMO`.
- **HackRF:** `SDR_TYPE=hackrf`, `HACKRF_INDEX=0`.
- **HQ upload:** `HQ_URL=http://localhost:8000`, `HQ_TOKEN=<from POST /auth/token>`.
- **Demod:** `RECEIVER_MODE=nfm`, `RECEIVER_AUDIO_RATE=48000`.

## Demo scripts

- **Base URL:** Pass `--base-url http://localhost:8000` (or remote). Scripts obtain a JWT via `POST /auth/token` (subject/role/station_id).
- **Extras:** `uv sync --extra hackrf` (receiver + stream), `uv sync --extra voice_tx` (HackRF TX from HQ), `uv sync --extra audio` (ASR) as needed.

## Database

- **Postgres:** `RADIOSHAQ_DATABASE__POSTGRES_URL` or default (e.g. Docker on 5434). Run `uv run radioshaq launch docker` then `cd radioshaq && uv run alembic upgrade head` before demos that use transcripts or registry.
2 changes: 1 addition & 1 deletion radioshaq/infrastructure/local/docker-compose.yml
@@ -152,7 +152,7 @@ services:
- HINDSIGHT_API_LLM_PROVIDER=${RADIOSHAQ_LLM__PROVIDER:-${HINDSIGHT_API_LLM_PROVIDER:-openai}}
- HINDSIGHT_API_LLM_MODEL=${RADIOSHAQ_LLM__MODEL:-${HINDSIGHT_API_LLM_MODEL:-gpt-4o-mini}}
# API key: first non-empty of RadioShaq keys, then generic keys
- HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}
- HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__GEMINI_API_KEY:-${GEMINI_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}}}
# Custom base URL (e.g. OpenAI-compatible or Mistral endpoint)
- HINDSIGHT_API_LLM_BASE_URL=${RADIOSHAQ_LLM__CUSTOM_API_BASE:-${HINDSIGHT_API_LLM_BASE_URL:-}}
# Same Postgres as RadioShaq (postgres service, db radioshaq; pgvector in postgres/init/02-pgvector.sql)
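The key-fallback line works through nested `${VAR:-default}` expansion — the first non-empty variable wins. A minimal sketch of that precedence:

```shell
# With :- an empty value counts as unset, so the chain skips blanks.
A=""
B="from-b"
C="from-c"
RESULT=${A:-${B:-${C:-}}}
echo "$RESULT"   # prints from-b
```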
1 change: 1 addition & 0 deletions radioshaq/radioshaq/api/routes/config_routes.py
@@ -24,6 +24,7 @@
"anthropic_api_key",
"custom_api_key",
"huggingface_api_key",
"gemini_api_key",
}


9 changes: 7 additions & 2 deletions radioshaq/radioshaq/api/routes/gis.py
@@ -158,10 +158,15 @@ async def get_operators_nearby(
recent_only=recent_hours > 0,
recent_hours=recent_hours,
)
# Ensure each operator has last_seen_at for mapping clients (alias of timestamp)
operators_for_response = [
{**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
for op in operators
]
return {
"latitude": latitude,
"longitude": longitude,
"radius_meters": radius_meters,
"operators": operators,
"count": len(operators),
"operators": operators_for_response,
"count": len(operators_for_response),
}
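The aliasing added here is a plain dict merge; as a standalone sketch:

```python
def with_last_seen(op: dict) -> dict:
    # Keep an existing last_seen_at; otherwise alias the row timestamp.
    return {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}


ops = [{"callsign": "W1XYZ", "timestamp": "2024-05-01T12:00:00"}]
out = [with_last_seen(op) for op in ops]
print(out[0]["last_seen_at"])  # prints 2024-05-01T12:00:00
```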
49 changes: 33 additions & 16 deletions radioshaq/radioshaq/api/routes/radio.py
@@ -75,26 +75,43 @@ async def radio_status(
_user: TokenPayload = Depends(get_current_user),
) -> dict[str, Any]:
"""
Report whether a radio (CAT rig) is connected. When connected, optionally include
current frequency and mode from the rig.
Report whether a radio (CAT rig) is connected and/or SDR TX (HackRF) is configured.
When CAT is connected, include current frequency and mode. For live demos, check
`sdr_tx_available` to confirm the HackRF TX path is enabled (real hardware is used when a device is attached).
"""
radio_tx = get_radio_tx_agent(request)
if not radio_tx:
return {"connected": False, "reason": "radio_tx_agent_not_available"}
return {
"connected": False,
"reason": "radio_tx_agent_not_available",
"sdr_tx_available": False,
"sdr_tx_reason": "radio_tx_agent_not_available",
}
out: dict[str, Any] = {}
rig_manager = getattr(radio_tx, "rig_manager", None)
if not rig_manager or not hasattr(rig_manager, "is_connected"):
return {"connected": False, "reason": "rig_not_configured"}
connected = rig_manager.is_connected()
out: dict[str, Any] = {"connected": connected}
if connected:
try:
state = await rig_manager.get_state()
if state:
out["frequency_hz"] = state.frequency
out["mode"] = getattr(state.mode, "value", str(state.mode))
out["ptt"] = state.ptt
except Exception:
pass
if rig_manager and hasattr(rig_manager, "is_connected"):
connected = rig_manager.is_connected()
out["connected"] = connected
if connected:
try:
state = await rig_manager.get_state()
if state:
out["frequency_hz"] = state.frequency
out["mode"] = getattr(state.mode, "value", str(state.mode))
out["ptt"] = state.ptt
except Exception:
pass
else:
out["connected"] = False
out["reason"] = "rig_not_configured"

sdr_transmitter = getattr(radio_tx, "sdr_transmitter", None)
if sdr_transmitter is not None:
out["sdr_tx_available"] = True
out["sdr_tx_reason"] = "configured"
else:
out["sdr_tx_available"] = False
out["sdr_tx_reason"] = "sdr_tx_disabled_or_unavailable"
return out
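The SDR TX half of the response reduces to a presence check on the agent's `sdr_transmitter` attribute; a standalone sketch (the stub class is ours):

```python
from typing import Any


def sdr_tx_status(radio_tx: Any) -> dict[str, Any]:
    # Availability mirrors the route: a configured transmitter object means True.
    if getattr(radio_tx, "sdr_transmitter", None) is not None:
        return {"sdr_tx_available": True, "sdr_tx_reason": "configured"}
    return {"sdr_tx_available": False,
            "sdr_tx_reason": "sdr_tx_disabled_or_unavailable"}


class StubAgent:  # stand-in for RadioTransmissionAgent
    sdr_transmitter = object()


print(sdr_tx_status(StubAgent()))
print(sdr_tx_status(object()))
```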


2 changes: 1 addition & 1 deletion radioshaq/radioshaq/cli.py
@@ -468,7 +468,7 @@ def _load_config_for_cli(config_dir: Optional[Path] = None) -> Optional[dict]:
def _safe_llm_dict(llm: Any) -> dict:
"""Dict from LLMConfig with API keys redacted."""
d = llm.model_dump(mode="json") if hasattr(llm, "model_dump") else {}
for k in ("mistral_api_key", "openai_api_key", "anthropic_api_key", "custom_api_key"):
for k in ("mistral_api_key", "openai_api_key", "anthropic_api_key", "custom_api_key", "huggingface_api_key", "gemini_api_key"):
if d.get(k):
d[k] = "(set)"
return d
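The redaction helper now covers all six key fields; sketched standalone:

```python
REDACTED_KEYS = ("mistral_api_key", "openai_api_key", "anthropic_api_key",
                 "custom_api_key", "huggingface_api_key", "gemini_api_key")


def safe_llm_dict(d: dict) -> dict:
    # Replace any populated secret with a marker; leave unset keys alone.
    out = dict(d)
    for k in REDACTED_KEYS:
        if out.get(k):
            out[k] = "(set)"
    return out


print(safe_llm_dict({"gemini_api_key": "secret", "openai_api_key": None}))
```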
4 changes: 3 additions & 1 deletion radioshaq/radioshaq/config/schema.py
@@ -59,6 +59,7 @@ class LLMProvider(StrEnum):
ANTHROPIC = "anthropic"
CUSTOM = "custom"
HUGGINGFACE = "huggingface"
GEMINI = "gemini"


class RadioMode(StrEnum):
@@ -201,7 +202,8 @@ class LLMConfig(BaseModel):
mistral_api_key: str | None = Field(default=None)
openai_api_key: str | None = Field(default=None)
anthropic_api_key: str | None = Field(default=None)

gemini_api_key: str | None = Field(default=None)

# Custom provider
custom_api_base: str | None = Field(default=None)
custom_api_key: str | None = Field(default=None)
5 changes: 4 additions & 1 deletion radioshaq/radioshaq/database/postgres_gis.py
@@ -177,8 +177,9 @@ async def find_operators_nearby(
# Build point geometry
point = f"SRID=4326;POINT({longitude} {latitude})"

# Base query (include lat/lon for each operator so callers can map or compute further)
# Base query (include id, lat/lon, distance for mapping; id for stable marker keys)
query = select(
OperatorLocation.id,
OperatorLocation.callsign,
OperatorLocation.timestamp,
OperatorLocation.altitude_meters,
@@ -216,10 +217,12 @@

return [
{
"id": row.id,
"callsign": row.callsign,
"latitude": float(row.latitude) if row.latitude is not None else None,
"longitude": float(row.longitude) if row.longitude is not None else None,
"timestamp": row.timestamp.isoformat() if row.timestamp else None,
"last_seen_at": row.timestamp.isoformat() if row.timestamp else None,
"altitude_meters": row.altitude_meters,
"source": row.source,
"session_id": row.session_id,
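The geometry literal built in this function is EWKT — an SRID prefix plus `POINT(lon lat)` in x/y order:

```python
def ewkt_point(longitude: float, latitude: float) -> str:
    # PostGIS EWKT: longitude (x) comes first, then latitude (y).
    return f"SRID=4326;POINT({longitude} {latitude})"


print(ewkt_point(-73.9857, 40.7484))  # prints SRID=4326;POINT(-73.9857 40.7484)
```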
4 changes: 4 additions & 0 deletions radioshaq/radioshaq/llm/client.py
@@ -79,8 +79,10 @@ async def chat(
self.api_key
or os.environ.get("MISTRAL_API_KEY")
or os.environ.get("OPENAI_API_KEY")
or os.environ.get("ANTHROPIC_API_KEY")
or os.environ.get("HF_TOKEN")
or os.environ.get("HUGGINGFACE_API_KEY")
or os.environ.get("GEMINI_API_KEY")
)

kwargs: dict[str, Any] = {
@@ -131,8 +133,10 @@ async def chat_with_tools(
self.api_key
or os.environ.get("MISTRAL_API_KEY")
or os.environ.get("OPENAI_API_KEY")
or os.environ.get("ANTHROPIC_API_KEY")
or os.environ.get("HF_TOKEN")
or os.environ.get("HUGGINGFACE_API_KEY")
or os.environ.get("GEMINI_API_KEY")
)

kwargs_tools: dict[str, Any] = {
11 changes: 11 additions & 0 deletions radioshaq/radioshaq/orchestrator/factory.py
@@ -70,6 +70,15 @@ def _llm_model_string_from_llm_config(llm: LLMConfig) -> str:
return f"openai/{model}"
if p == "custom":
return f"custom/{model}" if "/" not in model else model
if p == "gemini":
raw_model = (getattr(llm, "model", None) or "").strip()
if not raw_model:
model = "gemini-2.5-flash"
else:
model = raw_model
if "gemini/" in model:
return model
return f"gemini/{model}"
if "/" not in model and not model.startswith(("openai/", "anthropic/", "mistral/", "custom/", "ollama/")):
return f"mistral/{model}"
return model
@@ -105,6 +114,8 @@ def _llm_api_key_from_llm_config(llm: LLMConfig) -> str | None:
return getattr(llm, "openai_api_key", None)
if p == "mistral":
return getattr(llm, "mistral_api_key", None)
if p == "gemini":
return getattr(llm, "gemini_api_key", None)
return None

