diff --git a/.githooks/pre-push b/.githooks/pre-push index aebe627..8c2ab2d 100644 --- a/.githooks/pre-push +++ b/.githooks/pre-push @@ -11,7 +11,10 @@ echo "[pre-push] Installing/syncing test dependencies..." cd radioshaq uv sync --extra dev --extra test --extra sdr -echo "[pre-push] Running unit + integration tests..." -uv run pytest tests/unit tests/integration -v +echo "[pre-push] Running unit tests..." +uv run pytest tests/unit -v + +echo "[pre-push] Running integration tests..." +uv run pytest tests/integration -v echo "[pre-push] All checks passed." diff --git a/.github/PYPI_README.md b/.github/PYPI_README.md index 4324591..8fde6d3 100644 --- a/.github/PYPI_README.md +++ b/.github/PYPI_README.md @@ -2,7 +2,7 @@ **S**trategic **H**am **R**adio **A**utonomous **Q**uery and **K**ontrol System -An AI-powered orchestrator for ham radio operations, emergency communications, and field-to-HQ coordination. One install gives you the API, web UI, and optional remote SDR receiver. +AI-powered orchestration for ham radio operations, emergency communications, and field-to-HQ coordination. One install provides the FastAPI backend, bundled web UI, and optional remote SDR receiver. Supports REACT-style reasoning, specialized agents (radio TX/RX, whitelist, SMS/WhatsApp, GIS, propagation), and tools for relay, TTS, and callsign registration. --- @@ -12,38 +12,39 @@ An AI-powered orchestrator for ham radio operations, emergency communications, a pip install radioshaq ``` -**Optional (for SDR hardware):** `pip install radioshaq[sdr]` (RTL-SDR) or `radioshaq[hackrf]` (HackRF). - **Requirements:** Python 3.11+ -**License notice:** RadioShaq is distributed under GPL-2.0-only. Official CLI and web UI require explicit license acceptance before normal use. 
+**Optional extras:** + +| Extra | Purpose | +|-------|---------| +| `radioshaq[sdr]` | RTL-SDR for remote listen-only receiver | +| `radioshaq[hackrf]` | HackRF for remote receiver (non-Windows) | +| `radioshaq[audio]` | Local ASR (Whisper, Voxtral) | +| `radioshaq[voice_tx]` | Play audio to rig (sounddevice, soundfile, pydub) | +| `radioshaq[voice_rx]` | Capture + VAD for voice pipeline | +| `radioshaq[tts_kokoro]` | Local TTS (Kokoro, no API key) | +| `radioshaq[metrics]` | Prometheus `/metrics` endpoint | + +**License:** RadioShaq is distributed under **GPL-2.0-only**. The CLI and web UI require license acceptance before normal use (interactive prompt or `RADIOSHAQ_LICENSE_ACCEPTED=1`). --- -## Easiest way to get started: interactive setup +## Quick start -From a project directory (or the repo root), run: +**1. Interactive setup** (recommended) ```bash radioshaq setup ``` -This walks you through: - -- **Mode** — field, hq, or receiver -- **Database** — use Docker Postgres or an existing URL -- **Secrets** — JWT secret, LLM API key (optional) -- **Config** — writes `.env` and `config.yaml`, can start Docker and run migrations +Guides you through mode (field / hq / receiver), database (Docker or URL), JWT secret, optional LLM API key, and radio/voice options. Writes `.env` and `config.yaml`, can start Docker Postgres and run migrations. -**Minimal prompts:** `radioshaq setup --quick` (mode + “use Docker?” then defaults). +- `radioshaq setup --quick` — minimal prompts (mode + Docker?), then defaults +- `radioshaq setup --no-input --mode field` — non-interactive (CI); optional `--db-url`, `--config-dir` +- `radioshaq setup --reconfigure` — update existing config without starting over -**Non-interactive (CI/scripts):** `radioshaq setup --no-input --mode field` (optionally `--db-url postgresql://...`). - -**Reconfigure:** `radioshaq setup --reconfigure` to update existing config without starting over. - ---- - -## Run the API and web UI +**2. 
Run API and web UI** ```bash radioshaq run-api @@ -53,11 +54,9 @@ radioshaq run-api - **Web UI:** http://localhost:8000/ - **Health:** http://localhost:8000/health -Default host: `0.0.0.0`, port: `8000`. Override with `--host` and `--port`. - ---- +Default bind: `0.0.0.0:8000`. Use `--host` and `--port` to override. -## Get a token (auth) +**3. Get a token** Most API calls need a Bearer JWT: @@ -65,9 +64,7 @@ Most API calls need a Bearer JWT: radioshaq token --subject op1 --role field --station-id STATION-01 ``` -Then set `RADIOSHAQ_TOKEN` to the printed value, or pass it in requests. Roles: `field`, `hq`, `receiver`. - -**Check API from the CLI:** +Set `RADIOSHAQ_TOKEN` to the printed value. Roles: `field`, `hq`, `receiver`. ```bash radioshaq health @@ -76,64 +73,69 @@ radioshaq health --ready --- -## CLI at a glance +## CLI reference + +API base URL: `RADIOSHAQ_API` (default `http://localhost:8000`). Commands that call the API require `RADIOSHAQ_TOKEN` unless noted. -| Command | What it does | -|--------|------------------| -| **setup** | | +| Command | Description | +|---------|-------------| +| **Setup** | | | `radioshaq setup` | Interactive setup: .env, config.yaml, optional Docker and migrations | -| `radioshaq setup --quick` | Minimal prompts (mode, use Docker?), then defaults | -| `radioshaq setup --no-input --mode field` | Non-interactive for CI; optional `--db-url`, `--config-dir` | -| `radioshaq setup --reconfigure` | Update existing config (merge sections) | +| `radioshaq setup --quick` | Minimal prompts | +| `radioshaq setup --no-input --mode field` | Non-interactive; optional `--db-url`, `--config-dir` | +| `radioshaq setup --reconfigure` | Update existing config | | **Server & auth** | | | `radioshaq run-api` | Start FastAPI server (and web UI at /). Options: `--host`, `--port`, `--reload` | | `radioshaq run-receiver` | Start remote SDR receiver (port 8765). Set `JWT_SECRET`, `STATION_ID`, `HQ_URL` | -| `radioshaq token` | Get JWT. 
Options: `--subject`, `--role`, `--station-id`, `--base-url` | -| `radioshaq health` | Liveness check; `radioshaq health --ready` for readiness | -| **Callsigns** (require `RADIOSHAQ_TOKEN`) | | +| `radioshaq token --subject X --role Y [--station-id Z]` | Get JWT; print `access_token` | +| `radioshaq health` | Liveness; `radioshaq health --ready` for readiness | +| **Callsigns** | | | `radioshaq callsigns list` | List registered callsigns | | `radioshaq callsigns add <callsign>` | Register a callsign | -| `radioshaq callsigns remove <callsign>` | Remove from whitelist | +| `radioshaq callsigns remove <callsign>` | Remove from registry | | `radioshaq callsigns register-from-audio <audio-file>` | Register from audio (ASR) | | **Messages** | | -| `radioshaq message process <message>` | Send message through REACT orchestrator | -| `radioshaq message inject <message>` | Inject into RX path (demo). Options: `--band`, `--mode`, `--source-callsign` | -| `radioshaq message whitelist-request <message>` | Whitelist request (orchestrator + optional TTS) | -| `radioshaq message relay <message> --source-band X --target-band Y` | Relay message between bands | +| `radioshaq message process "<message>"` | Send message through REACT orchestrator | +| `radioshaq message inject "<message>"` | Inject into RX path (demo). Options: `--band`, `--mode`, `--source-callsign`, `--destination-callsign` | +| `radioshaq message whitelist-request "<message>"` | Whitelist request (orchestrator; optional TTS reply) | +| `radioshaq message relay "<message>" --source-band X --target-band Y` | Relay message between bands | | **Transcripts** | | -| `radioshaq transcripts list` | List transcripts. Options: `--callsign`, `--band`, `--since`, `--limit` | +| `radioshaq transcripts list` | List transcripts. Options: `--callsign`, `--band`, `--mode`, `--since`, `--limit` | | `radioshaq transcripts get <id>` | Get one transcript | | `radioshaq transcripts play <id>` | Play transcript as TTS over radio | | **Radio** | | | `radioshaq radio bands` | List bands | -| `radioshaq radio send-tts <message>` | Send TTS over radio. 
Options: `--frequency-hz`, `--mode` | +| `radioshaq radio send-tts "<message>"` | Send TTS over radio. Options: `--frequency-hz`, `--mode` | +| **Config** | | +| `radioshaq config show` | Show LLM, memory, overrides from config file (keys redacted). Option: `--section llm\|memory\|overrides` | +| **Launch (dev)** | | +| `radioshaq launch docker` | Start Docker Compose (Postgres; optional `--hindsight`) | +| `radioshaq launch pm2` | Start Postgres + API under PM2 (optional `--hindsight`) | -Use `radioshaq --help` and `radioshaq <command> --help` for options. API base URL: `RADIOSHAQ_API` (default `http://localhost:8000`). +Use `radioshaq --help` and `radioshaq <command> --help` for options. --- -## Remote receiver (SDR listen-only) +## Remote receiver (SDR) For a listen-only station (e.g. Raspberry Pi + RTL-SDR) that streams to HQ: ```bash -pip install radioshaq[sdr] # or radioshaq[hackrf] for HackRF +pip install radioshaq[sdr] # or radioshaq[hackrf] for HackRF (non-Windows) export JWT_SECRET=your-secret export STATION_ID=RECEIVER-01 export HQ_URL=http://your-hq:8000 radioshaq run-receiver ``` -HQ accepts uploads at `POST /receiver/upload` (Bearer JWT). Default receiver port: `8765` (`--port` to change). +HQ accepts uploads at `POST /receiver/upload` (Bearer JWT). Receiver default port: `8765` (`--port` to change). --- -## After install (no interactive setup) - -If you prefer to configure by hand: +## Manual configuration (no interactive setup) -1. **Database:** Set `DATABASE_URL` or `POSTGRES_*` (and run migrations with your Alembic config). -2. **Config:** Copy `config.example.yaml` to `config.yaml` and set `mode`, `database`, `auth`, etc. See [Configuration](https://radioshaq.readthedocs.io/configuration/). +1. **Database:** Set `DATABASE_URL` or `RADIOSHAQ_DATABASE__POSTGRES_URL`; run migrations with your Alembic config. +2. **Config:** Copy `config.example.yaml` to `config.yaml` and set `mode`, `database`, `auth`, `llm`, etc. 3. **Start:** `radioshaq run-api`. 
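The JWT printed by `radioshaq token` can be inspected offline with the standard library, which helps when debugging auth against the manual configuration above. This is a minimal sketch; the claim names (`sub`, `role`, `station_id`) are assumptions inferred from the CLI flags, not confirmed from the source:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT (no signature verification)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64(obj: dict) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# Build a sample token shaped like what `radioshaq token` might issue
# (hypothetical claims for illustration only).
header = _b64({"alg": "HS256", "typ": "JWT"})
payload = _b64({"sub": "op1", "role": "field", "station_id": "STATION-01"})
sample = f"{header}.{payload}.signature"

print(decode_jwt_claims(sample))
# → {'sub': 'op1', 'role': 'field', 'station_id': 'STATION-01'}
```

For real tokens, verify the signature with the server's `JWT_SECRET` instead of trusting a bare decode.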
--- diff --git a/.github/README.md b/.github/README.md index e89dfa3..201cbfc 100644 --- a/.github/README.md +++ b/.github/README.md @@ -1,10 +1,31 @@ # RadioShaq -Monorepo for **RadioShaq**: ham radio AI orchestration and remote SDR reception. One main app (single PyPI package); Python is managed with [uv](https://github.com/astral-sh/uv). +Monorepo for **RadioShaq**: AI-powered ham radio orchestration, emergency communications, and remote SDR reception. One main application (single PyPI package); Python is managed with [uv](https://github.com/astral-sh/uv). -**What this repo does** +**RadioShaq** — **S**trategic **H**am **R**adio **A**utonomous **Q**uery and **K**ontrol System — is an autonomous agent that understands natural language requests, plans steps, and delegates to specialized sub-agents and tools. It provides a FastAPI backend, React (Vite) web UI, PostgreSQL + PostGIS + Alembic, and optional real radios and SDR. The **remote receiver** (listen-only SDR station) is bundled; run it with `radioshaq run-receiver`. -- **RadioShaq** — AI-powered orchestrator for ham radio, emergency comms, and field–HQ coordination. FastAPI backend, React (Vite) web UI, Postgres + Alembic, optional real radios and SDR. The **remote receiver** (SDR listen-only station) is bundled; run with `radioshaq run-receiver`. 
+--- + +## What’s in this repo + +- **radioshaq/** — Main application (single installable package) + - **radioshaq/** — Python package: API, REACT orchestrator, agents (radio_tx, radio_rx, radio_rx_audio, whitelist, sms, whatsapp, gis, propagation, scheduler), tools (send_audio_over_radio, relay_message_between_bands, callsign list/register), compliance (band plans, TX audit), voice pipeline (capture → VAD → ASR → MessageBus) + - **web-interface/** — React frontend (Vite + TypeScript): Map (operator/emergency locations), Transcripts, Callsigns, Messages, Radio, Emergency, Audio config, Settings + - **tests/** — pytest (unit + integration) + - **infrastructure/** — Docker Compose (Postgres, optional Hindsight), PM2, Alembic, AWS Lambda + - **scripts/** — Demos and utilities +- **docs/** — Quick start, configuration, API reference, radio usage, map configuration + +--- + +## Features (from the implementation) + +- **REACT loop** — Reasoning → Evaluation → Acting → Communicating → Tracking; Task Judge and turn/token limits +- **Modes** — `field`, `hq`, `receiver` (config-driven) +- **API** — Auth (JWT), health, messages (process, whitelist-request, inject, relay, from-audio), transcripts, callsigns (list, register, register-from-audio, contact preferences), radio (bands, status, send-tts, send-audio, propagation), GIS (location, operators-nearby, emergency-events), emergency (request, approve/reject, events stream), receiver upload, inject, internal bus, Twilio (SMS/WhatsApp), audio config, config overrides (LLM, memory), memory blocks/summaries, optional Prometheus metrics +- **Web UI** — License gate; pages: Audio config, Emergency, Callsigns, Messages, Transcripts, Radio, Map (OpenStreetMap or Google Maps, operator/emergency locations), Settings +- **Relay** — Band-to-band (and optional SMS/WhatsApp) with optional scheduled delivery and relay delivery worker +- **Compliance** — Region-based band restrictions (FCC, CEPT, etc.), band allowlist, TX audit --- @@ -66,7 
+87,7 @@ uv run alembic -c infrastructure/local/alembic.ini upgrade head uv run python -m radioshaq.api.server ``` -From **repo root**, Postgres and migrations can be run as: +From **repo root**: ```bash cd radioshaq/infrastructure/local && docker compose up -d postgres && cd ../../.. @@ -103,15 +124,15 @@ Most endpoints require a Bearer JWT. Request a token (no prior auth in dev), the ```powershell $r = Invoke-RestMethod -Method Post -Uri "http://localhost:8000/auth/token?subject=op1&role=field&station_id=STATION-01" -$env:TOKEN = $r.access_token -Invoke-RestMethod -Uri "http://localhost:8000/auth/me" -Headers @{ Authorization = "Bearer $env:TOKEN" } +$env:RADIOSHAQ_TOKEN = $r.access_token +Invoke-RestMethod -Uri "http://localhost:8000/auth/me" -Headers @{ Authorization = "Bearer $env:RADIOSHAQ_TOKEN" } ``` **Bash:** ```bash -TOKEN=$(curl -s -X POST "http://localhost:8000/auth/token?subject=op1&role=field&station_id=STATION-01" | jq -r .access_token) -curl -H "Authorization: Bearer $TOKEN" http://localhost:8000/auth/me +export RADIOSHAQ_TOKEN=$(curl -s -X POST "http://localhost:8000/auth/token?subject=op1&role=field&station_id=STATION-01" | jq -r .access_token) +curl -H "Authorization: Bearer $RADIOSHAQ_TOKEN" http://localhost:8000/auth/me ``` Roles: `field`, `hq`, `receiver`. Set `RADIOSHAQ_TOKEN` to use the CLI below. @@ -125,9 +146,11 @@ Roles: `field`, `hq`, `receiver`. Set `RADIOSHAQ_TOKEN` to use the CLI below. | `radioshaq message process "your request"` | Send message through REACT orchestrator | | `radioshaq message inject "text"` | Inject into RX path (demo). Options: `--band`, `--source-callsign` | | `radioshaq message relay "msg" --source-band 40m --target-band 2m` | Relay between bands | -| `radioshaq transcripts list` | List transcripts. Options: `--callsign`, `--band`, `--destination-only` | +| `radioshaq transcripts list` | List transcripts. 
Options: `--callsign`, `--band`, `--since`, `--limit` | | `radioshaq callsigns list` | List registered callsigns | | `radioshaq callsigns add <callsign>` | Register a callsign | +| `radioshaq radio bands` | List bands | +| `radioshaq radio send-tts "message"` | Send TTS over radio | API base URL: `RADIOSHAQ_API` (default `http://localhost:8000`). Use `radioshaq --help` and `radioshaq <command> --help` for options. @@ -139,13 +162,16 @@ With the API running, in a second terminal from **radioshaq/**: ```bash uv run python scripts/demo/run_demo.py ``` -Gets a token, injects on 40m, relays to 2m, and polls `/transcripts`. See [radioshaq/scripts/demo/README.md](radioshaq/scripts/demo/README.md). +Gets a token, injects on 40m, relays to 2m, and polls `/transcripts`. See [radioshaq/scripts/demo/README.md](radioshaq/scripts/demo/README.md) and docs under `radioshaq/scripts/demo/docs/`. -### API calls +### API highlights - **Process a message:** `POST /messages/process` with JSON `{"message": "your request"}` and header `Authorization: Bearer <token>`. -- **Transcripts:** `GET /transcripts?callsign=<callsign>&destination_only=true&band=<band>` for messages addressed to you on a band. -- See **http://localhost:8000/docs** for the full OpenAPI spec. +- **Transcripts:** `GET /transcripts?callsign=<callsign>&band=<band>&destination_only=true`. +- **Relay:** `POST /messages/relay` with message, source_band, target_band, optional target_channel (radio/sms/whatsapp). +- **GIS:** `POST /gis/location`, `GET /gis/location/{callsign}`, `GET /gis/operators-nearby`, `GET /gis/emergency-events`. +- **Emergency:** `POST /emergency/request`, `GET /emergency/events`, `POST /emergency/events/{id}/approve` or `/reject`. +- Full OpenAPI spec at **http://localhost:8000/docs**. --- @@ -154,8 +180,7 @@ Gets a token, injects on 40m, relays to 2m, and polls `/transcripts`. 
See [radio From **radioshaq/** (SDR listen-only station streaming to HQ): ```bash -uv sync --extra dev --extra test -# With hardware: uv sync --extra sdr # or --extra hackrf +uv sync --extra sdr # or --extra hackrf on non-Windows # Set env then run # JWT_SECRET=... STATION_ID=RECEIVER-01 HQ_URL=http://your-hq:8000 @@ -189,6 +214,8 @@ Frontend: `cd web-interface && npm install && npm run dev`. | [docs/configuration.md](docs/configuration.md) | Config file, env vars, interactive setup | | [docs/radio-usage.md](docs/radio-usage.md) | Rig models, CAT, hardware | | [docs/api-reference.md](docs/api-reference.md) | API overview | +| [docs/index.md](docs/index.md) | Agent overview, REACT loop, agents, tools, modes | +| [radioshaq/docs/map-configuration.md](radioshaq/docs/map-configuration.md) | Map provider (OSM/Google), tile sources | | [radioshaq/README.md](radioshaq/README.md) | App install, auth, demo, monitoring | --- @@ -197,15 +224,15 @@ Frontend: `cd web-interface && npm install && npm run dev`. 
``` radioshaq/ # Main application (single PyPI package) -├── radioshaq/ # Python package (API, radio, audio, orchestrator) +├── radioshaq/ # Python package (API, radio, audio, orchestrator, agents, tools) │ └── remote_receiver/ # Bundled SDR receiver (radioshaq run-receiver) -├── web-interface/ # React frontend (Vite + TypeScript) -├── tests/ # pytest (unit + integration) -├── infrastructure/ # Docker, PM2, AWS Lambda, Alembic -└── scripts/ # Demo and utilities +├── web-interface/ # React frontend (Vite + TypeScript) +├── tests/ # pytest (unit + integration) +├── infrastructure/ # Docker, PM2, AWS Lambda, Alembic +└── scripts/ # Demo and utilities -docs/ # Quick-start, configuration, snippets -.github/ # Workflows, PYPI_README.md +docs/ # Quick-start, configuration, API, radio, index +.github/ # Workflows, PYPI_README.md ``` --- diff --git a/.github/mkdocs.yml b/.github/mkdocs.yml index 938fcf0..b797219 100644 --- a/.github/mkdocs.yml +++ b/.github/mkdocs.yml @@ -75,5 +75,5 @@ nav: - Quick Start: quick-start.md - Radio Usage: radio-usage.md - Configuration: configuration.md - - Monitoring: monitoring.md + - Response & compliance: response-compliance-and-monitoring.md - API Reference: api-reference.md diff --git a/.github/workflows/publish-nightly.yml b/.github/workflows/publish-nightly.yml index 06b47a5..85c1729 100644 --- a/.github/workflows/publish-nightly.yml +++ b/.github/workflows/publish-nightly.yml @@ -1,9 +1,6 @@ name: Publish Nightly to PyPI on: - push: - branches: - - dev workflow_dispatch: permissions: @@ -33,17 +30,40 @@ jobs: runs-on: ubuntu-latest env: RADIOSHAQ_LICENSE_ACCEPTED: "1" + DATABASE_URL: postgresql://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + TEST_DATABASE_URL: postgresql+asyncpg://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + services: + postgres: + image: postgis/postgis:16-3.4 + env: + POSTGRES_USER: radioshaq + POSTGRES_PASSWORD: radioshaq + POSTGRES_DB: radioshaq + ports: + - 5434:5432 steps: - uses: actions/checkout@v6 - uses: 
actions/setup-python@v6 with: python-version: "3.11" - uses: astral-sh/setup-uv@v7 - - name: Run test suite + - name: Install dependencies + run: cd radioshaq && uv sync --extra dev --extra test --extra sdr + - name: Wait for Postgres run: | - cd radioshaq - uv sync --extra dev --extra test --extra sdr - uv run pytest tests/unit tests/integration -v + for i in $(seq 1 30); do + if python -c "import socket; s=socket.socket(); s.settimeout(2); s.connect(('127.0.0.1',5434)); s.close()" 2>/dev/null; then + echo "Postgres is ready on 127.0.0.1:5434" + break + fi + echo "Waiting for Postgres... ($i/30)" + sleep 2 + done + python -c "import socket; s=socket.socket(); s.settimeout(5); s.connect(('127.0.0.1',5434)); s.close(); print('Postgres port open')" + - name: Run database migrations + run: cd radioshaq && uv run alembic-upgrade + - name: Run test suite + run: cd radioshaq && uv run pytest tests/unit tests/integration -v build-and-publish: name: Build and publish nightly artifact diff --git a/.github/workflows/publish-pypi.yml b/.github/workflows/publish-pypi.yml index a6c80a6..b14bf1f 100644 --- a/.github/workflows/publish-pypi.yml +++ b/.github/workflows/publish-pypi.yml @@ -24,6 +24,17 @@ jobs: runs-on: ubuntu-latest env: RADIOSHAQ_LICENSE_ACCEPTED: "1" + DATABASE_URL: postgresql://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + TEST_DATABASE_URL: postgresql+asyncpg://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + services: + postgres: + image: postgis/postgis:16-3.4 + env: + POSTGRES_USER: radioshaq + POSTGRES_PASSWORD: radioshaq + POSTGRES_DB: radioshaq + ports: + - 5434:5432 steps: - uses: actions/checkout@v6 with: @@ -32,11 +43,23 @@ jobs: with: python-version: "3.11" - uses: astral-sh/setup-uv@v7 - - name: Run test suite + - name: Install dependencies + run: cd radioshaq && uv sync --extra dev --extra test --extra sdr + - name: Wait for Postgres run: | - cd radioshaq - uv sync --extra dev --extra test --extra sdr - uv run pytest tests/unit tests/integration -v + 
for i in $(seq 1 30); do + if python -c "import socket; s=socket.socket(); s.settimeout(2); s.connect(('127.0.0.1',5434)); s.close()" 2>/dev/null; then + echo "Postgres is ready on 127.0.0.1:5434" + break + fi + echo "Waiting for Postgres... ($i/30)" + sleep 2 + done + python -c "import socket; s=socket.socket(); s.settimeout(5); s.connect(('127.0.0.1',5434)); s.close(); print('Postgres port open')" + - name: Run database migrations + run: cd radioshaq && uv run alembic-upgrade + - name: Run test suite + run: cd radioshaq && uv run pytest tests/unit tests/integration -v verify-tag-source: name: Verify tag points to main history diff --git a/.github/workflows/test-ci.yml b/.github/workflows/test-ci.yml index 1e61fea..bd5cd7f 100644 --- a/.github/workflows/test-ci.yml +++ b/.github/workflows/test-ci.yml @@ -20,13 +20,49 @@ jobs: runs-on: ubuntu-latest env: RADIOSHAQ_LICENSE_ACCEPTED: "1" + DATABASE_URL: postgresql://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + TEST_DATABASE_URL: postgresql+asyncpg://radioshaq:radioshaq@127.0.0.1:5434/radioshaq + services: + postgres: + image: postgis/postgis:16-3.4 + env: + POSTGRES_USER: radioshaq + POSTGRES_PASSWORD: radioshaq + POSTGRES_DB: radioshaq + ports: + - 5434:5432 steps: - uses: actions/checkout@v6 + - uses: actions/setup-node@v6 + with: + node-version: "20" + cache: "npm" + cache-dependency-path: radioshaq/web-interface/package-lock.json + - name: Build web UI and stage for API-served bundle + run: | + cd radioshaq/web-interface && npm ci --no-audit --no-fund && npm run build + cd "${{ github.workspace }}" + mkdir -p radioshaq/radioshaq/web_ui + cp -r radioshaq/web-interface/dist/. 
radioshaq/radioshaq/web_ui/ - uses: actions/setup-python@v6 with: python-version: "3.11" - uses: astral-sh/setup-uv@v7 - - name: Run tests + - name: Install dependencies + run: cd radioshaq && uv sync --extra dev --extra test --extra sdr + - name: Wait for Postgres run: | - cd radioshaq && uv sync --extra dev --extra test --extra sdr && uv run pytest tests/unit tests/integration -v + for i in $(seq 1 30); do + if python -c "import socket; s=socket.socket(); s.settimeout(2); s.connect(('127.0.0.1',5434)); s.close()" 2>/dev/null; then + echo "Postgres is ready on 127.0.0.1:5434" + break + fi + echo "Waiting for Postgres... ($i/30)" + sleep 2 + done + python -c "import socket; s=socket.socket(); s.settimeout(5); s.connect(('127.0.0.1',5434)); s.close(); print('Postgres port open')" + - name: Run database migrations + run: cd radioshaq && uv run alembic-upgrade + - name: Run tests + run: cd radioshaq && uv run pytest tests/unit tests/integration -v diff --git a/.gitignore b/.gitignore index 5fb7a78..14a1399 100644 --- a/.gitignore +++ b/.gitignore @@ -73,6 +73,10 @@ wheels/ *.egg-info MANIFEST .pytest_cache/ +.ruff_cache/ +.tmp_build/ +.tmp_pytest/ +dist-investigate/ .coverage .coverage.* htmlcov/ @@ -149,7 +153,7 @@ logs/ botpy.log *.pid -# --- Shakods / PM2 --- +# --- RadioShaq / PM2 --- # (logs/ above; PM2 dump is in user home) # --- Docker / local dev --- @@ -159,11 +163,24 @@ botpy.log # --- Jupyter --- .ipynb_checkpoints/ +# --- Cache and temp folders --- +.cache/ +cache/ +tmp/ +temp/ +.tmp/ +.temp/ +*.cache +.sass-cache/ +.stylelintcache +.vite/ +.astro/ +.rollup.cache/ + # --- Misc --- *.bak *.tmp *.temp -.cache/ result result-* *.dump @@ -171,6 +188,17 @@ result-* *.sqlite3 db.sqlite3-journal +# --- RadioShaq local data --- +radioshaq/scripts/demo/recordings/ +radioshaq/config.yaml + +# --- RadioShaq web UI build output --- +radioshaq/radioshaq/web_ui/assets/ + +# --- RadioShaq local env / tools --- +radioshaq/.venv-wsl/ +radioshaq/hackrf/ + # --- Optional: 
uncomment if you don’t want lockfiles in vcs --- # uv.lock # package-lock.json diff --git a/.github/LICENSE.md b/LICENSE.md similarity index 84% rename from .github/LICENSE.md rename to LICENSE.md index 700459b..48f7cae 100644 --- a/.github/LICENSE.md +++ b/LICENSE.md @@ -259,62 +259,4 @@ DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. -END OF TERMS AND CONDITIONS - -How to Apply These Terms to Your New Programs - -If you develop a new program, and you want it to be of the greatest possible -use to the public, the best way to achieve this is to make it free software -which everyone can redistribute and change under these terms. - -To do so, attach the following notices to the program. It is safest to attach -them to the start of each source file to most effectively convey the exclusion -of warranty; and each file should have at least the "copyright" line and a -pointer to where the full notice is found. - - -Copyright (C) - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; version 2 of the License. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License along -with this program; if not, write to the Free Software Foundation, Inc., -51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. - -Also add information on how to contact you by electronic and paper mail. 
- -If the program is interactive, make it output a short notice like this when it -starts in an interactive mode: - -Gnomovision version 69, Copyright (C) year name of author -Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w`. -This is free software, and you are welcome to redistribute it -under certain conditions; type `show c` for details. - -The hypothetical commands `show w` and `show c` should show the appropriate -parts of the General Public License. Of course, the commands you use may be -called something other than `show w` and `show c`; they could even be -mouse-clicks or menu items—whatever suits your program. - -You should also get your employer (if you work as a programmer) or your -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. Here is a sample; alter the names: - -Yoyodyne, Inc., hereby disclaims all copyright interest in the program -`Gnomovision` (which makes passes at compilers) written by James Hacker. - -, 1 April 1989 -Ty Coon, President of Vice - -This General Public License does not permit incorporating your program into -proprietary programs. If your program is a subroutine library, you may -consider it more useful to permit linking proprietary applications with the -library. If this is what you want to do, use the GNU Lesser General Public -License instead of this License. +END OF TERMS AND CONDITIONS \ No newline at end of file diff --git a/docs/api-reference.md b/docs/api-reference.md index a2829b5..e4c09b1 100644 --- a/docs/api-reference.md +++ b/docs/api-reference.md @@ -9,18 +9,29 @@ The RadioShaq API is a FastAPI application. All protected endpoints require a ** | Area | Prefix | Purpose | |------|--------|---------| | Health | `/health`, `/health/ready` | Liveness and readiness (DB, orchestrator) | -| Metrics | `/metrics` | Prometheus scrape (uptime, callsigns, optional GPU). See [Monitoring](monitoring.md). 
| +| Metrics | `/metrics` | Prometheus scrape (uptime, callsigns, optional GPU). See [Response & compliance](response-compliance-and-monitoring.md). | | Auth | `/auth/token`, `/auth/refresh`, `/auth/me` | Issue token, refresh, current user | | Messages | `/messages/process`, `/messages/whitelist-request`, `/messages/from-audio`, `/messages/inject-and-store` | Orchestration and whitelist flow | | Relay | `/messages/relay` | Band translation (e.g. 40m → 2m). Stores source + relayed transcripts; optional inject/TX when config enables it. Recipients **poll** `GET /transcripts?callsign=<callsign>&destination_only=true&band=<band>` to retrieve relayed messages. | -| Callsigns | `/callsigns`, `/callsigns/register`, `/callsigns/register-from-audio`, `/callsigns/registered/{callsign}` | Registered callsigns and registration | +| Callsigns | `/callsigns`, `/callsigns/register`, `/callsigns/register-from-audio`, `/callsigns/registered/{callsign}` (GET, PATCH, DELETE), `/callsigns/registered/{callsign}/contact-preferences` (GET, PATCH) | Registered callsigns, registration, update/delete, and contact preferences (notify-on-relay, consent). | | **Config** | `/api/v1/config/llm`, `/api/v1/config/memory`, `/api/v1/config/overrides` | LLM, memory (Hindsight), and per-role overrides (GET/PATCH; keys redacted). See [Configuration](configuration.md#per-role-and-per-subagent-overrides). 
| -| Audio | `/api/v1/config/audio`, `/api/v1/audio/devices`, `/api/v1/audio/pending`, approve/reject | Audio config and pending response queue | +| Audio | `/api/v1/config/audio`, `/api/v1/config/audio/reset`, `/api/v1/audio/devices`, `/api/v1/audio/devices/{device_id}/test`, `/api/v1/audio/pending`, approve/reject | Audio config, reset, device list, device test, and pending response queue | | Transcripts | `/transcripts`, `/transcripts/{id}`, `/transcripts/{id}/play` | Search and play transcripts | -| Radio | `/radio/status`, `/radio/propagation`, `/radio/bands`, `/radio/send-tts` | Radio connected?, propagation, band list, send TTS | +| Radio | `/radio/status`, `/radio/propagation`, `/radio/bands`, `/radio/send-tts`, `POST /radio/send-audio` | Radio connected?, propagation, band list, send TTS, upload audio file for TX | +| **GIS** | `/gis/location`, `/gis/location/{callsign}`, `/gis/operators-nearby`, `GET /gis/emergency-events` | Store/retrieve operator location (lat/lon), find operators within radius, emergency events with location for map overlays. | +| **Emergency** | `/emergency/request`, `/emergency/pending-count`, `/emergency/events`, `/emergency/events/stream`, `/emergency/events/{id}/approve`, `/emergency/events/{id}/reject` | Request emergency flow, pending count, list events, SSE stream, approve/reject event | +| **Memory** | `/memory/{callsign}/blocks` (GET, PUT, POST append), `/memory/{callsign}/summaries` (GET) | Per-callsign memory blocks and summaries | +| **Receiver** | `POST /receiver/upload` | Receiver service upload to HQ/field | | Inject | `/inject/message` | Demo: push message into RX injection queue | -| Internal | `/internal/bus/inbound` | MessageBus inbound (e.g. Lambda) | +| Internal | `/internal/bus/inbound`, `POST /internal/opt-out` | MessageBus inbound (e.g. Lambda); SMS/WhatsApp opt-out | + +### GIS location (PostGIS) + +- **POST /gis/location** — Store operator location. 
Body: `callsign` (required), `latitude` and `longitude` (required for v1), optional `accuracy_meters`, `altitude_meters`. If only `location_text` is sent, returns 400 with `error: "ambiguous_location"`. Response: `id`, `callsign`, `latitude`, `longitude`, `source` (e.g. `user_disclosed`), `timestamp`, `confidence`. Coordinates `0.0, 0.0` are valid. +- **GET /gis/location/{callsign}** — Latest stored location for callsign (explicit lat/lon). 404 if none. +- **GET /gis/operators-nearby** — Query: `latitude`, `longitude`, `radius_meters` (default 50000), optional `recent_hours` (default 24), `max_results` (default 100). Returns list of operators with `distance_meters`. +- **GET /gis/emergency-events** — Emergency events with location for map overlays. Query: `since` (ISO datetime), `status`, `limit`. Returns events with location data. Generated API reference from the FastAPI OpenAPI spec (run `python radioshaq/scripts/export_openapi.py` from repo root before building to produce `docs/api/openapi.json`): -[OAD(docs/api/openapi.json)] +[OpenAPI spec](api/openapi.json) diff --git a/docs/api/openapi.json b/docs/api/openapi.json index 9500095..a4d2044 100644 --- a/docs/api/openapi.json +++ b/docs/api/openapi.json @@ -3,7 +3,7 @@ "info": { "title": "RadioShaq API", "description": "Strategic Autonomous Ham Radio and Knowledge Operations Dispatch System", - "version": "0.1.0" + "version": "0.1.3" }, "paths": { "/health": { @@ -56,6 +56,28 @@ } } }, + "/metrics": { + "get": { + "tags": [ + "metrics" + ], + "summary": "Metrics", + "description": "Prometheus scrape endpoint. Exposes radioshaq_uptime_seconds, radioshaq_callsigns_registered_total,\nand optional GPU gauges (when nvidia-smi is available). 
Install prometheus-client for full metrics.", + "operationId": "metrics_metrics_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "text/plain": { + "schema": { + "type": "string" + } + } + } + } + } + } + }, "/auth/token": { "post": { "tags": [ @@ -296,7 +318,7 @@ "radio" ], "summary": "Bands", - "description": "List supported bands (from band plan).", + "description": "List supported bands (from effective band plan for config region).", "operationId": "bands_radio_bands_get", "responses": { "200": { @@ -324,24 +346,14 @@ ] } }, - "/radio/send-tts": { - "post": { + "/radio/status": { + "get": { "tags": [ "radio" ], - "summary": "Send Tts", - "description": "Send arbitrary text as TTS over the radio (audio out).", - "operationId": "send_tts_radio_send_tts_post", - "requestBody": { - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/SendTTSBody" - } - } - }, - "required": true - }, + "summary": "Radio Status", + "description": "Report whether a radio (CAT rig) is connected and/or SDR TX (HackRF) is configured.\nWhen CAT is connected, include current frequency and mode. 
For live demos, check\nsdr_tx_available to ensure HackRF TX path is enabled (real hardware when device attached).", + "operationId": "radio_status_radio_status_get", "responses": { "200": { "description": "Successful Response", @@ -350,17 +362,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response Send Tts Radio Send Tts Post" - } - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" + "title": "Response Radio Status Radio Status Get" } } } @@ -373,21 +375,19 @@ ] } }, - "/messages/process": { + "/radio/send-tts": { "post": { "tags": [ - "messages" + "radio" ], - "summary": "Process Message", - "description": "Submit a message for REACT orchestration.\nRequires orchestrator to be set in app state (lifespan).\nOptional body fields: channel, chat_id, sender_id (InboundMessage shape for routing).", - "operationId": "process_message_messages_process_post", + "summary": "Send Tts", + "description": "Send arbitrary text as TTS over the radio (audio out).", + "operationId": "send_tts_radio_send_tts_post", "requestBody": { "content": { "application/json": { "schema": { - "additionalProperties": true, - "type": "object", - "title": "Body" + "$ref": "#/components/schemas/SendTTSBody" } } }, @@ -401,7 +401,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response Process Message Messages Process Post" + "title": "Response Send Tts Radio Send Tts Post" } } } @@ -424,52 +424,62 @@ ] } }, - "/messages/whitelist-request": { + "/radio/send-audio": { "post": { "tags": [ - "messages" + "radio" ], - "summary": "Whitelist Request", - "description": "Whitelist entry point: request access to gated services (e.g. 
messaging between bands).\nText or audio \u2192 orchestrator evaluates \u2192 response as text and optionally TTS.\nAccepts application/json: { \"text\" or \"message\", \"callsign?\", \"send_audio_back?\" }\nor multipart/form-data: file (audio), callsign, send_audio_back.\n\nApproved/message can come from either the orchestrator final message (tool path:\nLLM used register_callsign tool and replied) or a completed whitelist agent task\n(agent path: ACTING ran the whitelist agent; result in completed_tasks).", - "operationId": "whitelist_request_messages_whitelist_request_post", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response Whitelist Request Messages Whitelist Request Post" - } - } - } - } - }, + "summary": "Send Audio", + "description": "Transmit an uploaded audio file over radio (CAT or SDR via radio_tx agent).\n\nThis is primarily for live demos where the client cannot reference server-local paths.", + "operationId": "send_audio_radio_send_audio_post", "security": [ { "HTTPBearer": [] } - ] - } - }, - "/messages/from-audio": { - "post": { - "tags": [ - "messages" ], - "summary": "Message From Audio", - "description": "Upload audio; run ASR; whitelist check; store transcript. 
Optionally inject to RX queue.", - "operationId": "message_from_audio_messages_from_audio_post", + "parameters": [ + { + "name": "frequency_hz", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "number" + }, + { + "type": "null" + } + ], + "title": "Frequency Hz" + } + }, + { + "name": "mode", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Mode" + } + } + ], "requestBody": { + "required": true, "content": { "multipart/form-data": { "schema": { - "$ref": "#/components/schemas/Body_message_from_audio_messages_from_audio_post" + "$ref": "#/components/schemas/Body_send_audio_radio_send_audio_post" } } - }, - "required": true + } }, "responses": { "200": { @@ -477,9 +487,9 @@ "content": { "application/json": { "schema": { - "additionalProperties": true, "type": "object", - "title": "Response Message From Audio Messages From Audio Post" + "additionalProperties": true, + "title": "Response Send Audio Radio Send Audio Post" } } } @@ -494,27 +504,22 @@ } } } - }, - "security": [ - { - "HTTPBearer": [] - } - ] + } } }, - "/messages/inject-and-store": { + "/gis/location": { "post": { "tags": [ - "messages" + "gis" ], - "summary": "Inject And Store", - "description": "Inject message into RX queue and store to DB (whitelist enforced).", - "operationId": "inject_and_store_messages_inject_and_store_post", + "summary": "Post Location", + "description": "Store operator location. 
v1 strict: requires latitude and longitude.\nIf only location_text is provided, returns 400 with clarification.", + "operationId": "post_location_gis_location_post", "requestBody": { "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/InjectAndStoreBody" + "$ref": "#/components/schemas/PostLocationBody" } } }, @@ -526,9 +531,7 @@ "content": { "application/json": { "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response Inject And Store Messages Inject And Store Post" + "$ref": "#/components/schemas/LocationResponse" } } } @@ -551,33 +554,37 @@ ] } }, - "/messages/relay": { - "post": { + "/gis/location/{callsign}": { + "get": { "tags": [ - "messages" + "gis" ], - "summary": "Relay Message Between Bands", - "description": "Translate a message from one band to another and store both sides.\n\nScenario: User A emits on band A (e.g. 40m), message is received and stored;\nthen it is \"relayed\" to band B (e.g. 2m) for User B. Stores:\n1. Original (or reference) transcript on source band\n2. Relay transcript on target band with metadata linking to source\n\nBody:\n- message (str): Text to relay\n- source_band (str): e.g. \"40m\"\n- source_frequency_hz (float, optional): exact freq if known\n- source_callsign (str): who sent on source band\n- target_band (str): e.g. 
\"2m\"\n- target_frequency_hz (float, optional): target freq; else use band default\n- destination_callsign (str, optional): who receives on target band\n- session_id (str, optional): default generated", - "operationId": "relay_message_between_bands_messages_relay_post", - "requestBody": { - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/RelayBody" - } + "summary": "Get Location", + "description": "Return latest stored location for callsign (explicit lat/lon).", + "operationId": "get_location_gis_location__callsign__get", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" } - }, - "required": true - }, + } + ], "responses": { "200": { "description": "Successful Response", "content": { "application/json": { "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response Relay Message Between Bands Messages Relay Post" + "$ref": "#/components/schemas/LocationResponse" } } } @@ -592,22 +599,17 @@ } } } - }, - "security": [ - { - "HTTPBearer": [] - } - ] + } } }, - "/transcripts": { + "/gis/operators-nearby": { "get": { "tags": [ - "transcripts" + "gis" ], - "summary": "Search Transcripts", - "description": "Search transcripts (received/relayed messages). Use for demo so User 2 can poll\nfor messages on a band or for their callsign (e.g. 
after relay from 40m to 2m).\nWhen whitelist is configured, only transcripts whose source/destination is in the whitelist are returned.", - "operationId": "search_transcripts_transcripts_get", + "summary": "Get Operators Nearby", + "description": "Find operators within radius of a point (from persisted operator_locations).", + "operationId": "get_operators_nearby_gis_operators_nearby_get", "security": [ { "HTTPBearer": [] @@ -615,126 +617,60 @@ ], "parameters": [ { - "name": "callsign", + "name": "latitude", "in": "query", - "required": false, + "required": true, "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Filter by source or destination callsign", - "title": "Callsign" - }, - "description": "Filter by source or destination callsign" + "type": "number", + "maximum": 90.0, + "minimum": -90.0, + "title": "Latitude" + } }, { - "name": "frequency_min", + "name": "longitude", "in": "query", - "required": false, + "required": true, "schema": { - "anyOf": [ - { - "type": "number" - }, - { - "type": "null" - } - ], - "description": "Minimum frequency (Hz)", - "title": "Frequency Min" - }, - "description": "Minimum frequency (Hz)" + "type": "number", + "maximum": 180.0, + "minimum": -180.0, + "title": "Longitude" + } }, { - "name": "frequency_max", + "name": "radius_meters", "in": "query", "required": false, "schema": { - "anyOf": [ - { - "type": "number" - }, - { - "type": "null" - } - ], - "description": "Maximum frequency (Hz)", - "title": "Frequency Max" - }, - "description": "Maximum frequency (Hz)" + "type": "number", + "minimum": 0, + "default": 50000, + "title": "Radius Meters" + } }, { - "name": "mode", + "name": "recent_hours", "in": "query", "required": false, "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Filter by mode (FM, PSK31, etc.)", - "title": "Mode" - }, - "description": "Filter by mode (FM, PSK31, etc.)" + "type": "integer", + "minimum": 0, + 
"default": 24, + "title": "Recent Hours" + } }, { - "name": "band", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Filter by band name (e.g. 40m, 2m); uses extra_data.band", - "title": "Band" - }, - "description": "Filter by band name (e.g. 40m, 2m); uses extra_data.band" - }, - { - "name": "since", - "in": "query", - "required": false, - "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], - "description": "Only transcripts after this time (ISO 8601)", - "title": "Since" - }, - "description": "Only transcripts after this time (ISO 8601)" - }, - { - "name": "limit", + "name": "max_results", "in": "query", "required": false, "schema": { "type": "integer", "maximum": 500, "minimum": 1, - "description": "Max results", "default": 100, - "title": "Limit" - }, - "description": "Max results" + "title": "Max Results" + } } ], "responses": { @@ -745,7 +681,7 @@ "schema": { "type": "object", "additionalProperties": true, - "title": "Response Search Transcripts Transcripts Get" + "title": "Response Get Operators Nearby Gis Operators Nearby Get" } } } @@ -763,14 +699,14 @@ } } }, - "/transcripts/{transcript_id}": { + "/gis/emergency-events": { "get": { "tags": [ - "transcripts" + "gis" ], - "summary": "Get Transcript", - "description": "Get a single transcript by id (for play or display).", - "operationId": "get_transcript_transcripts__transcript_id__get", + "summary": "Get Emergency Events With Locations", + "description": "Return emergency coordination events that have a location, with lat/lon for map overlays.", + "operationId": "get_emergency_events_with_locations_gis_emergency_events_get", "security": [ { "HTTPBearer": [] @@ -778,12 +714,51 @@ ], "parameters": [ { - "name": "transcript_id", - "in": "path", - "required": true, + "name": "since", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": 
"null" + } + ], + "description": "ISO timestamp; only events created_at >= since", + "title": "Since" + }, + "description": "ISO timestamp; only events created_at >= since" + }, + { + "name": "status", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Filter by status (e.g. pending, approved)", + "title": "Status" + }, + "description": "Filter by status (e.g. pending, approved)" + }, + { + "name": "limit", + "in": "query", + "required": false, "schema": { "type": "integer", - "title": "Transcript Id" + "maximum": 500, + "minimum": 1, + "default": 100, + "title": "Limit" } } ], @@ -795,7 +770,7 @@ "schema": { "type": "object", "additionalProperties": true, - "title": "Response Get Transcript Transcripts Transcript Id Get" + "title": "Response Get Emergency Events With Locations Gis Emergency Events Get" } } } @@ -813,14 +788,14 @@ } } }, - "/transcripts/{transcript_id}/play": { - "post": { + "/memory/{callsign}/blocks": { + "get": { "tags": [ - "transcripts" + "memory" ], - "summary": "Play Transcript Over Radio", - "description": "Load transcript, generate TTS, and send over radio (audio out).", - "operationId": "play_transcript_over_radio_transcripts__transcript_id__play_post", + "summary": "Get Blocks", + "description": "Get core memory blocks (user, identity, ideaspace, system_instructions) for a callsign.", + "operationId": "get_blocks_memory__callsign__blocks_get", "security": [ { "HTTPBearer": [] @@ -828,12 +803,12 @@ ], "parameters": [ { - "name": "transcript_id", + "name": "callsign", "in": "path", "required": true, "schema": { - "type": "integer", - "title": "Transcript Id" + "type": "string", + "title": "Callsign" } } ], @@ -845,7 +820,7 @@ "schema": { "type": "object", "additionalProperties": true, - "title": "Response Play Transcript Over Radio Transcripts Transcript Id Play Post" + "title": "Response Get Blocks Memory Callsign Blocks Get" } } } @@ -863,52 +838,50 
@@ } } }, - "/callsigns": { - "get": { + "/memory/{callsign}/blocks/{block_type}": { + "put": { "tags": [ - "callsigns" + "memory" ], - "summary": "List Registered", - "description": "List all registered callsigns (whitelist).", - "operationId": "list_registered_callsigns_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response List Registered Callsigns Get" - } - } - } - } - }, + "summary": "Update Block", + "description": "Replace a core block's content.", + "operationId": "update_block_memory__callsign__blocks__block_type__put", "security": [ { "HTTPBearer": [] } - ] - } - }, - "/callsigns/register": { - "post": { - "tags": [ - "callsigns" ], - "summary": "Register Callsign", - "description": "Register a callsign so it is automatically accepted for store/relay.", - "operationId": "register_callsign_callsigns_register_post", + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" + } + }, + { + "name": "block_type", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Block Type" + } + } + ], "requestBody": { + "required": true, "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/RegisterBody" + "type": "object", + "additionalProperties": true, + "title": "Body" } } - }, - "required": true + } }, "responses": { "200": { @@ -916,9 +889,9 @@ "content": { "application/json": { "schema": { - "additionalProperties": true, "type": "object", - "title": "Response Register Callsign Callsigns Register Post" + "additionalProperties": true, + "title": "Response Update Block Memory Callsign Blocks Block Type Put" } } } @@ -933,22 +906,17 @@ } } } - }, - "security": [ - { - "HTTPBearer": [] - } - ] + } } }, - "/callsigns/register-from-audio": { + "/memory/{callsign}/blocks/{block_type}/append": { 
"post": { "tags": [ - "callsigns" + "memory" ], - "summary": "Register From Audio", - "description": "Upload audio; run ASR and register the extracted or confirmed callsign.", - "operationId": "register_from_audio_callsigns_register_from_audio_post", + "summary": "Append Block", + "description": "Append content to a core block.", + "operationId": "append_block_memory__callsign__blocks__block_type__append_post", "security": [ { "HTTPBearer": [] @@ -957,27 +925,31 @@ "parameters": [ { "name": "callsign", - "in": "query", - "required": false, + "in": "path", + "required": true, "schema": { - "anyOf": [ - { - "type": "string" - }, - { - "type": "null" - } - ], + "type": "string", "title": "Callsign" } + }, + { + "name": "block_type", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Block Type" + } } ], "requestBody": { "required": true, "content": { - "multipart/form-data": { + "application/json": { "schema": { - "$ref": "#/components/schemas/Body_register_from_audio_callsigns_register_from_audio_post" + "type": "object", + "additionalProperties": true, + "title": "Body" } } } @@ -990,7 +962,7 @@ "schema": { "type": "object", "additionalProperties": true, - "title": "Response Register From Audio Callsigns Register From Audio Post" + "title": "Response Append Block Memory Callsign Blocks Block Type Append Post" } } } @@ -1008,14 +980,14 @@ } } }, - "/callsigns/registered/{callsign}": { - "delete": { + "/memory/{callsign}/summaries": { + "get": { "tags": [ - "callsigns" + "memory" ], - "summary": "Unregister Callsign", - "description": "Remove a callsign from the registry.", - "operationId": "unregister_callsign_callsigns_registered__callsign__delete", + "summary": "Get Summaries", + "description": "Get daily summaries for a callsign (last `days` days).", + "operationId": "get_summaries_memory__callsign__summaries_get", "security": [ { "HTTPBearer": [] @@ -1030,6 +1002,16 @@ "type": "string", "title": "Callsign" } + }, + { + "name": 
"days", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "default": 7, + "title": "Days" + } } ], "responses": { @@ -1038,9 +1020,12 @@ "content": { "application/json": { "schema": { - "type": "object", - "additionalProperties": true, - "title": "Response Unregister Callsign Callsigns Registered Callsign Delete" + "type": "array", + "items": { + "type": "object", + "additionalProperties": true + }, + "title": "Response Get Summaries Memory Callsign Summaries Get" } } } @@ -1058,19 +1043,21 @@ } } }, - "/inject/message": { + "/messages/process": { "post": { "tags": [ - "inject" + "messages" ], - "summary": "Inject Message", - "description": "Inject a message into the RX path for demo/testing.\n\nThe message will be available to receivers (radio_rx / digital_modes)\nwhen they poll the injection queue. Use for:\n- User injection script (e.g. audio \u2192 text \u2192 this endpoint)\n- Simulating one user emitting on a band for another to receive", - "operationId": "inject_message_inject_message_post", + "summary": "Process Message", + "description": "Submit a message for REACT orchestration.\nRequires orchestrator to be set in app state (lifespan).\nOptional body fields: channel, chat_id, sender_id (InboundMessage shape for routing).", + "operationId": "process_message_messages_process_post", "requestBody": { "content": { "application/json": { "schema": { - "$ref": "#/components/schemas/InjectMessageBody" + "additionalProperties": true, + "type": "object", + "title": "Body" } } }, @@ -1084,7 +1071,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response Inject Message Inject Message Post" + "title": "Response Process Message Messages Process Post" } } } @@ -1107,26 +1094,14 @@ ] } }, - "/internal/bus/inbound": { + "/messages/whitelist-request": { "post": { "tags": [ - "internal" + "messages" ], - "summary": "Publish Inbound", - "description": "Accept an inbound message (e.g. 
from Lambda) and publish to MessageBus.\nBody: channel, sender_id, chat_id, content; optional media, metadata, session_key_override.\nOrchestrator consumer must be running elsewhere to process (e.g. run_inbound_consumer).", - "operationId": "publish_inbound_internal_bus_inbound_post", - "requestBody": { - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", - "title": "Body" - } - } - }, - "required": true - }, + "summary": "Whitelist Request", + "description": "Whitelist entry point: request access to gated services (e.g. messaging between bands).\nText or audio \u2192 orchestrator evaluates \u2192 response as text and optionally TTS.\nAccepts application/json: { \"text\" or \"message\", \"callsign?\", \"send_audio_back?\" }\nor multipart/form-data: file (audio), callsign, send_audio_back.\n\nApproved/message can come from either the orchestrator final message (tool path:\nLLM used register_callsign tool and replied) or a completed whitelist agent task\n(agent path: ACTING ran the whitelist agent; result in completed_tasks).", + "operationId": "whitelist_request_messages_whitelist_request_post", "responses": { "200": { "description": "Successful Response", @@ -1135,41 +1110,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response Publish Inbound Internal Bus Inbound Post" - } - } - } - }, - "422": { - "description": "Validation Error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/HTTPValidationError" - } - } - } - } - } - } - }, - "/api/v1/config/audio": { - "get": { - "tags": [ - "audio" - ], - "summary": "Get Audio Config", - "description": "Get current audio configuration (env/file + optional runtime overrides).", - "operationId": "get_audio_config_api_v1_config_audio_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", 
- "title": "Response Get Audio Config Api V1 Config Audio Get" + "title": "Response Whitelist Request Messages Whitelist Request Post" } } } @@ -1180,21 +1121,21 @@ "HTTPBearer": [] } ] - }, - "patch": { + } + }, + "/messages/from-audio": { + "post": { "tags": [ - "audio" + "messages" ], - "summary": "Update Audio Config", - "description": "Update audio configuration (runtime overlay only; does not persist to file).", - "operationId": "update_audio_config_api_v1_config_audio_patch", + "summary": "Message From Audio", + "description": "Upload audio; run ASR; whitelist check; store transcript. Optionally inject to RX queue.", + "operationId": "message_from_audio_messages_from_audio_post", "requestBody": { "content": { - "application/json": { + "multipart/form-data": { "schema": { - "additionalProperties": true, - "type": "object", - "title": "Body" + "$ref": "#/components/schemas/Body_message_from_audio_messages_from_audio_post" } } }, @@ -1208,7 +1149,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response Update Audio Config Api V1 Config Audio Patch" + "title": "Response Message From Audio Messages From Audio Post" } } } @@ -1231,43 +1172,24 @@ ] } }, - "/api/v1/config/audio/reset": { + "/messages/inject-and-store": { "post": { "tags": [ - "audio" + "messages" ], - "summary": "Reset Audio Config", - "description": "Clear runtime audio config overrides.", - "operationId": "reset_audio_config_api_v1_config_audio_reset_post", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response Reset Audio Config Api V1 Config Audio Reset Post" - } + "summary": "Inject And Store", + "description": "Inject message into RX queue and store to DB (whitelist enforced).", + "operationId": "inject_and_store_messages_inject_and_store_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": 
"#/components/schemas/InjectAndStoreBody" } } - } + }, + "required": true }, - "security": [ - { - "HTTPBearer": [] - } - ] - } - }, - "/api/v1/audio/devices": { - "get": { - "tags": [ - "audio" - ], - "summary": "List Audio Devices", - "description": "List available audio input/output devices (requires voice_rx).", - "operationId": "list_audio_devices_api_v1_audio_devices_get", "responses": { "200": { "description": "Successful Response", @@ -1276,52 +1198,7 @@ "schema": { "additionalProperties": true, "type": "object", - "title": "Response List Audio Devices Api V1 Audio Devices Get" - } - } - } - } - }, - "security": [ - { - "HTTPBearer": [] - } - ] - } - }, - "/api/v1/audio/devices/{device_id}/test": { - "post": { - "tags": [ - "audio" - ], - "summary": "Test Audio Device", - "description": "Test an audio device by ID (placeholder).", - "operationId": "test_audio_device_api_v1_audio_devices__device_id__test_post", - "security": [ - { - "HTTPBearer": [] - } - ], - "parameters": [ - { - "name": "device_id", - "in": "path", - "required": true, - "schema": { - "type": "integer", - "title": "Device Id" - } - } - ], - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "object", - "additionalProperties": true, - "title": "Response Test Audio Device Api V1 Audio Devices Device Id Test Post" + "title": "Response Inject And Store Messages Inject And Store Post" } } } @@ -1336,30 +1213,6 @@ } } } - } - } - }, - "/api/v1/audio/pending": { - "get": { - "tags": [ - "audio" - ], - "summary": "List Pending Responses", - "description": "List pending responses awaiting human confirmation.", - "operationId": "list_pending_responses_api_v1_audio_pending_get", - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "additionalProperties": true, - "type": "object", - "title": "Response List Pending Responses Api V1 Audio Pending Get" - } - 
} - } - } }, "security": [ { @@ -1368,41 +1221,23 @@ ] } }, - "/api/v1/audio/pending/{pending_id}/approve": { + "/messages/relay": { "post": { "tags": [ - "audio" - ], - "summary": "Approve Pending Response", - "description": "Approve a pending response (send it over the radio).", - "operationId": "approve_pending_response_api_v1_audio_pending__pending_id__approve_post", - "security": [ - { - "HTTPBearer": [] - } - ], - "parameters": [ - { - "name": "pending_id", - "in": "path", - "required": true, - "schema": { - "type": "string", - "title": "Pending Id" - } - } + "messages" ], + "summary": "Relay Message Between Bands", + "description": "Translate a message from one band to another and store both sides.\n\nScenario: User A emits on band A (e.g. 40m), message is received and stored;\nthen it is \"relayed\" to band B (e.g. 2m) for User B. Stores:\n1. Original (or reference) transcript on source band\n2. Relay transcript on target band with metadata linking to source\n\nBody:\n- message (str): Text to relay\n- source_band (str): e.g. \"40m\"\n- source_frequency_hz (float, optional): exact freq if known\n- source_callsign (str): who sent on source band\n- target_band (str): e.g. 
\"2m\"\n- target_frequency_hz (float, optional): target freq; else use band default\n- destination_callsign (str, optional): who receives on target band\n- session_id (str, optional): default generated", + "operationId": "relay_message_between_bands_messages_relay_post", "requestBody": { "content": { "application/json": { "schema": { - "type": "object", - "additionalProperties": true, - "default": {}, - "title": "Body" + "$ref": "#/components/schemas/RelayBody" } } - } + }, + "required": true }, "responses": { "200": { @@ -1410,9 +1245,9 @@ "content": { "application/json": { "schema": { - "type": "object", "additionalProperties": true, - "title": "Response Approve Pending Response Api V1 Audio Pending Pending Id Approve Post" + "type": "object", + "title": "Response Relay Message Between Bands Messages Relay Post" } } } @@ -1427,17 +1262,22 @@ } } } - } + }, + "security": [ + { + "HTTPBearer": [] + } + ] } }, - "/api/v1/audio/pending/{pending_id}/reject": { - "post": { + "/transcripts": { + "get": { "tags": [ - "audio" + "transcripts" ], - "summary": "Reject Pending Response", - "description": "Reject a pending response.", - "operationId": "reject_pending_response_api_v1_audio_pending__pending_id__reject_post", + "summary": "Search Transcripts", + "description": "Search transcripts (received/relayed messages). Use for demo so User 2 can poll\nfor messages on a band or for their callsign (e.g. 
after relay from 40m to 2m).\nFor a callsign to poll their messages on a band: use callsign=&destination_only=true&band=.\nOmit band to get messages across all bands.\nWhen whitelist is configured, only transcripts whose source/destination is in the whitelist are returned.", + "operationId": "search_transcripts_transcripts_get", "security": [ { "HTTPBearer": [] @@ -1445,39 +1285,1683 @@ ], "parameters": [ { - "name": "pending_id", - "in": "path", - "required": true, + "name": "callsign", + "in": "query", + "required": false, "schema": { - "type": "string", - "title": "Pending Id" - } - } - ], - "requestBody": { - "content": { - "application/json": { - "schema": { - "type": "object", - "additionalProperties": true, - "default": {}, - "title": "Body" - } - } - } - }, - "responses": { - "200": { - "description": "Successful Response", - "content": { - "application/json": { - "schema": { - "type": "object", - "additionalProperties": true, - "title": "Response Reject Pending Response Api V1 Audio Pending Pending Id Reject Post" - } - } - } + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Filter by source or destination callsign", + "title": "Callsign" + }, + "description": "Filter by source or destination callsign" + }, + { + "name": "frequency_min", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "number" + }, + { + "type": "null" + } + ], + "description": "Minimum frequency (Hz)", + "title": "Frequency Min" + }, + "description": "Minimum frequency (Hz)" + }, + { + "name": "frequency_max", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "number" + }, + { + "type": "null" + } + ], + "description": "Maximum frequency (Hz)", + "title": "Frequency Max" + }, + "description": "Maximum frequency (Hz)" + }, + { + "name": "mode", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Filter by 
mode (FM, PSK31, etc.)", + "title": "Mode" + }, + "description": "Filter by mode (FM, PSK31, etc.)" + }, + { + "name": "band", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Filter by band name (e.g. 40m, 2m); uses extra_data.band", + "title": "Band" + }, + "description": "Filter by band name (e.g. 40m, 2m); uses extra_data.band" + }, + { + "name": "destination_only", + "in": "query", + "required": false, + "schema": { + "type": "boolean", + "description": "If True and callsign set, return only transcripts where callsign is destination", + "default": false, + "title": "Destination Only" + }, + "description": "If True and callsign set, return only transcripts where callsign is destination" + }, + { + "name": "since", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "description": "Only transcripts after this time (ISO 8601)", + "title": "Since" + }, + "description": "Only transcripts after this time (ISO 8601)" + }, + { + "name": "limit", + "in": "query", + "required": false, + "schema": { + "type": "integer", + "maximum": 500, + "minimum": 1, + "description": "Max results", + "default": 100, + "title": "Limit" + }, + "description": "Max results" + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Search Transcripts Transcripts Get" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/transcripts/{transcript_id}": { + "get": { + "tags": [ + "transcripts" + ], + "summary": "Get Transcript", + "description": "Get a single transcript by id (for play or display).", + "operationId": 
"get_transcript_transcripts__transcript_id__get", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "transcript_id", + "in": "path", + "required": true, + "schema": { + "type": "integer", + "title": "Transcript Id" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Get Transcript Transcripts Transcript Id Get" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/transcripts/{transcript_id}/play": { + "post": { + "tags": [ + "transcripts" + ], + "summary": "Play Transcript Over Radio", + "description": "Load transcript, generate TTS, and send over radio (audio out).", + "operationId": "play_transcript_over_radio_transcripts__transcript_id__play_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "transcript_id", + "in": "path", + "required": true, + "schema": { + "type": "integer", + "title": "Transcript Id" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Play Transcript Over Radio Transcripts Transcript Id Play Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/callsigns": { + "get": { + "tags": [ + "callsigns" + ], + "summary": "List Registered", + "description": "List all registered callsigns (whitelist).", + "operationId": "list_registered_callsigns_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + 
"additionalProperties": true, + "type": "object", + "title": "Response List Registered Callsigns Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/callsigns/register": { + "post": { + "tags": [ + "callsigns" + ], + "summary": "Register Callsign", + "description": "Register a callsign so it is automatically accepted for store/relay.", + "operationId": "register_callsign_callsigns_register_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/RegisterBody" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Register Callsign Callsigns Register Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/callsigns/register-from-audio": { + "post": { + "tags": [ + "callsigns" + ], + "summary": "Register From Audio", + "description": "Upload audio; run ASR and register the extracted or confirmed callsign.", + "operationId": "register_from_audio_callsigns_register_from_audio_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Callsign" + } + } + ], + "requestBody": { + "required": true, + "content": { + "multipart/form-data": { + "schema": { + "$ref": "#/components/schemas/Body_register_from_audio_callsigns_register_from_audio_post" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": 
"Response Register From Audio Callsigns Register From Audio Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/callsigns/registered/{callsign}": { + "patch": { + "tags": [ + "callsigns" + ], + "summary": "Patch Callsign Bands", + "description": "Set preferred_bands for a registered callsign. Band names must be in effective band plan (e.g. 40m, 2m).", + "operationId": "patch_callsign_bands_callsigns_registered__callsign__patch", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PatchCallsignBandsBody" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Patch Callsign Bands Callsigns Registered Callsign Patch" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + }, + "delete": { + "tags": [ + "callsigns" + ], + "summary": "Unregister Callsign", + "description": "Remove a callsign from the registry.", + "operationId": "unregister_callsign_callsigns_registered__callsign__delete", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": 
"Response Unregister Callsign Callsigns Registered Callsign Delete" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/callsigns/registered/{callsign}/contact-preferences": { + "get": { + "tags": [ + "callsigns" + ], + "summary": "Get Contact Preferences", + "description": "Get contact preferences for a registered callsign (\u00a78.1).", + "operationId": "get_contact_preferences_callsigns_registered__callsign__contact_preferences_get", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Get Contact Preferences Callsigns Registered Callsign Contact Preferences Get" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + }, + "patch": { + "tags": [ + "callsigns" + ], + "summary": "Patch Contact Preferences", + "description": "Set contact preferences (notify by SMS/WhatsApp when a message is left for this callsign). 
Sets consent_at when enabling notify_on_relay (\u00a78.1).", + "operationId": "patch_contact_preferences_callsigns_registered__callsign__contact_preferences_patch", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "callsign", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Callsign" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/PatchContactPreferencesBody" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Patch Contact Preferences Callsigns Registered Callsign Contact Preferences Patch" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/emergency/request": { + "post": { + "tags": [ + "emergency" + ], + "summary": "Create Emergency Request", + "description": "Create an emergency coordination event (status=pending). 
Only allowed when\nemergency_contact is enabled and current region is in regions_allowed.", + "operationId": "create_emergency_request_emergency_request_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/EmergencyRequestBody" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Create Emergency Request Emergency Request Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/emergency/pending-count": { + "get": { + "tags": [ + "emergency" + ], + "summary": "Emergency Pending Count", + "description": "Return the number of pending emergency events. Use this to inform the operator\n(e.g. dashboard polling or script) that action is required; then list with GET /emergency/events.", + "operationId": "emergency_pending_count_emergency_pending_count_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Emergency Pending Count Emergency Pending Count Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/emergency/events/stream": { + "get": { + "tags": [ + "emergency" + ], + "summary": "Emergency Events Stream", + "description": "Server-Sent Events stream of pending emergency count. 
Send event every 10s.\nOperator UI can subscribe to trigger audio and browser notifications when count > 0.", + "operationId": "emergency_events_stream_emergency_events_stream_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/emergency/events": { + "get": { + "tags": [ + "emergency" + ], + "summary": "List Emergency Events", + "description": "List coordination events with event_type=emergency. Optional filter by status (e.g. pending, approved, rejected).", + "operationId": "list_emergency_events_emergency_events_get", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "status", + "in": "query", + "required": false, + "schema": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Status" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response List Emergency Events Emergency Events Get" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/emergency/events/{event_id}/approve": { + "post": { + "tags": [ + "emergency" + ], + "summary": "Approve Emergency Event", + "description": "Approve an emergency event and queue the SMS/WhatsApp for outbound delivery.\nSets status=approved, records approved_at/approved_by/queued_at, and returns queued state.", + "operationId": "approve_emergency_event_emergency_events__event_id__approve_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "event_id", + "in": "path", + "required": true, + "schema": { + "type": "integer", + "title": "Event Id" + } + } + ], + "requestBody": { + "required": 
true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/ApproveBody" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Approve Emergency Event Emergency Events Event Id Approve Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/emergency/events/{event_id}/reject": { + "post": { + "tags": [ + "emergency" + ], + "summary": "Reject Emergency Event", + "description": "Reject an emergency event (do not send). Sets status=rejected and records rejected_at, rejected_by.", + "operationId": "reject_emergency_event_emergency_events__event_id__reject_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "event_id", + "in": "path", + "required": true, + "schema": { + "type": "integer", + "title": "Event Id" + } + } + ], + "requestBody": { + "required": true, + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/RejectBody" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Reject Emergency Event Emergency Events Event Id Reject Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/inject/message": { + "post": { + "tags": [ + "inject" + ], + "summary": "Inject Message", + "description": "Inject a message into the RX path for demo/testing.\n\nThe message will be available to receivers (radio_rx / digital_modes)\nwhen they poll the injection queue. 
Unless inject_skip_bus is True,\nalso publish to MessageBus so the orchestrator processes it.", + "operationId": "inject_message_inject_message_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/InjectMessageBody" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Inject Message Inject Message Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/receiver/upload": { + "post": { + "tags": [ + "receiver" + ], + "summary": "Receiver Upload", + "description": "Accept upload from a remote receiver station.\n\nCalled by radioshaq.remote_receiver (SDR service) when HQ_URL points here.\nRequires Bearer JWT. When receiver_upload_store is enabled, persists transcript\nwith band (from frequency). 
When receiver_upload_inject is enabled, injects\ninto the RX path after store.", + "operationId": "receiver_upload_receiver_upload_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/ReceiverUploadBody" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "integer" + } + ] + }, + "type": "object", + "title": "Response Receiver Upload Receiver Upload Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/internal/bus/inbound": { + "post": { + "tags": [ + "internal" + ], + "summary": "Publish Inbound", + "description": "Accept an inbound message (e.g. from Lambda) and publish to MessageBus.\nBody: channel, sender_id, chat_id, content; optional media, metadata, session_key_override.\nOrchestrator consumer must be running elsewhere to process (e.g. 
run_inbound_consumer).", + "operationId": "publish_inbound_internal_bus_inbound_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Body" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Publish Inbound Internal Bus Inbound Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/internal/opt-out": { + "post": { + "tags": [ + "internal" + ], + "summary": "Opt Out", + "description": "Record opt-out for notify-on-relay (\u00a78.1). Call when user sends STOP via SMS/WhatsApp.\nProvide either callsign or phone + channel (sms/whatsapp). Clears that contact and sets opt_out_at.\nRequires a valid Bearer token (e.g. 
service JWT used by your Twilio webhook handler or Lambda).", + "operationId": "opt_out_internal_opt_out_post", + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/OptOutBody" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Opt Out Internal Opt Out Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/twilio/sms": { + "post": { + "tags": [ + "twilio" + ], + "summary": "Twilio Sms Webhook", + "description": "Inbound SMS webhook from Twilio.", + "operationId": "twilio_sms_webhook_twilio_sms_post", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + } + } + } + }, + "/twilio/whatsapp": { + "post": { + "tags": [ + "twilio" + ], + "summary": "Twilio Whatsapp Webhook", + "description": "Inbound WhatsApp webhook from Twilio.", + "operationId": "twilio_whatsapp_webhook_twilio_whatsapp_post", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + } + } + } + }, + "/api/v1/config/audio": { + "get": { + "tags": [ + "audio" + ], + "summary": "Get Audio Config", + "description": "Get current audio configuration (env/file + optional runtime overrides).\nRuntime overrides do not affect active agents until process restart.", + "operationId": "get_audio_config_api_v1_config_audio_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Get Audio Config Api 
V1 Config Audio Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + }, + "patch": { + "tags": [ + "audio" + ], + "summary": "Update Audio Config", + "description": "Update audio configuration (runtime overlay only; does not persist to file).\nRestart required for changes to affect active agents (voice_rx, etc.).", + "operationId": "update_audio_config_api_v1_config_audio_patch", + "requestBody": { + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Body" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/config/audio/reset": { + "post": { + "tags": [ + "audio" + ], + "summary": "Reset Audio Config", + "description": "Clear runtime audio config overrides. 
Restart required for agents to use file/env config.", + "operationId": "reset_audio_config_api_v1_config_audio_reset_post", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/audio/devices": { + "get": { + "tags": [ + "audio" + ], + "summary": "List Audio Devices", + "description": "List available audio input/output devices (requires voice_rx).", + "operationId": "list_audio_devices_api_v1_audio_devices_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response List Audio Devices Api V1 Audio Devices Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/audio/devices/{device_id}/test": { + "post": { + "tags": [ + "audio" + ], + "summary": "Test Audio Device", + "description": "Test that an audio device can be opened for playback and capture.\n\nThis performs a very short open/close cycle for the given device ID using sounddevice.\nIt does not play audible tones by design, but verifies that the OS/driver accept\na basic stream configuration for this device.", + "operationId": "test_audio_device_api_v1_audio_devices__device_id__test_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "device_id", + "in": "path", + "required": true, + "schema": { + "type": "integer", + "title": "Device Id" + } + } + ], + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Test Audio Device Api V1 Audio Devices Device Id Test Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": 
"#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/api/v1/audio/pending": { + "get": { + "tags": [ + "audio" + ], + "summary": "List Pending Responses", + "description": "List pending responses awaiting human confirmation.", + "operationId": "list_pending_responses_api_v1_audio_pending_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response List Pending Responses Api V1 Audio Pending Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/audio/pending/{pending_id}/approve": { + "post": { + "tags": [ + "audio" + ], + "summary": "Approve Pending Response", + "description": "Approve a pending response (send it over the radio).", + "operationId": "approve_pending_response_api_v1_audio_pending__pending_id__approve_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "pending_id", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Pending Id" + } + } + ], + "requestBody": { + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "default": {}, + "title": "Body" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Approve Pending Response Api V1 Audio Pending Pending Id Approve Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/api/v1/audio/pending/{pending_id}/reject": { + "post": { + "tags": [ + "audio" + ], + "summary": "Reject Pending Response", + "description": "Reject a pending response.", + "operationId": 
"reject_pending_response_api_v1_audio_pending__pending_id__reject_post", + "security": [ + { + "HTTPBearer": [] + } + ], + "parameters": [ + { + "name": "pending_id", + "in": "path", + "required": true, + "schema": { + "type": "string", + "title": "Pending Id" + } + } + ], + "requestBody": { + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "default": {}, + "title": "Body" + } + } + } + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "type": "object", + "additionalProperties": true, + "title": "Response Reject Pending Response Api V1 Audio Pending Pending Id Reject Post" + } + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + } + } + }, + "/api/v1/config/llm": { + "get": { + "tags": [ + "config" + ], + "summary": "Get Config Llm", + "description": "Get current LLM configuration (API keys redacted). Runtime overrides merged if set.\nRuntime overrides do not affect active orchestrator/agents until process restart.", + "operationId": "get_config_llm_api_v1_config_llm_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Get Config Llm Api V1 Config Llm Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + }, + "patch": { + "tags": [ + "config" + ], + "summary": "Update Config Llm", + "description": "Update LLM configuration (runtime overlay only; does not persist to file).\nAPI keys in body are not stored. 
Restart required for changes to affect orchestrator/agents.", + "operationId": "update_config_llm_api_v1_config_llm_patch", + "requestBody": { + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Body" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + }, + "422": { + "description": "Validation Error", + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/config/memory": { + "get": { + "tags": [ + "config" + ], + "summary": "Get Config Memory", + "description": "Get current memory/Hindsight configuration. Runtime overrides merged if set.\nRuntime overrides do not affect active components until process restart.", + "operationId": "get_config_memory_api_v1_config_memory_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Get Config Memory Api V1 Config Memory Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + }, + "patch": { + "tags": [ + "config" + ], + "summary": "Update Config Memory", + "description": "Update memory configuration (runtime overlay only; does not persist to file).\nRestart required for changes to affect active components.", + "operationId": "update_config_memory_api_v1_config_memory_patch", + "requestBody": { + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Body" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } + }, + "422": { + "description": "Validation Error", + "content": { + 
"application/json": { + "schema": { + "$ref": "#/components/schemas/HTTPValidationError" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + } + }, + "/api/v1/config/overrides": { + "get": { + "tags": [ + "config" + ], + "summary": "Get Config Overrides", + "description": "Get per-role LLM and memory overrides. Keys: orchestrator, judge, whitelist, daily_summary, memory.\nRuntime overrides do not affect active orchestrator/agents until process restart.", + "operationId": "get_config_overrides_api_v1_config_overrides_get", + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Response Get Config Overrides Api V1 Config Overrides Get" + } + } + } + } + }, + "security": [ + { + "HTTPBearer": [] + } + ] + }, + "patch": { + "tags": [ + "config" + ], + "summary": "Update Config Overrides", + "description": "Update per-role overrides (runtime overlay only; does not persist to file).\nRestart required for changes to affect orchestrator/agents.", + "operationId": "update_config_overrides_api_v1_config_overrides_patch", + "requestBody": { + "content": { + "application/json": { + "schema": { + "additionalProperties": true, + "type": "object", + "title": "Body" + } + } + }, + "required": true + }, + "responses": { + "200": { + "description": "Successful Response", + "content": { + "application/json": { + "schema": {} + } + } }, "422": { "description": "Validation Error", @@ -1489,12 +2973,35 @@ } } } - } + }, + "security": [ + { + "HTTPBearer": [] + } + ] } } }, "components": { "schemas": { + "ApproveBody": { + "properties": { + "notes": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Notes" + } + }, + "type": "object", + "title": "ApproveBody", + "description": "Body for POST /emergency/events/{id}/approve." 
+ }, "Body_message_from_audio_messages_from_audio_post": { "properties": { "file": { @@ -1515,9 +3022,269 @@ "type": "null" } ], - "title": "Destination Callsign" + "title": "Destination Callsign" + }, + "band": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Band" + }, + "mode": { + "type": "string", + "title": "Mode", + "default": "PSK31" + }, + "frequency_hz": { + "type": "number", + "title": "Frequency Hz", + "default": 0.0 + }, + "session_id": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Session Id" + }, + "inject": { + "type": "boolean", + "title": "Inject", + "default": false + } + }, + "type": "object", + "required": [ + "file", + "source_callsign" + ], + "title": "Body_message_from_audio_messages_from_audio_post" + }, + "Body_register_from_audio_callsigns_register_from_audio_post": { + "properties": { + "file": { + "type": "string", + "contentMediaType": "application/octet-stream", + "title": "File" + } + }, + "type": "object", + "required": [ + "file" + ], + "title": "Body_register_from_audio_callsigns_register_from_audio_post" + }, + "Body_send_audio_radio_send_audio_post": { + "properties": { + "file": { + "type": "string", + "contentMediaType": "application/octet-stream", + "title": "File" + } + }, + "type": "object", + "required": [ + "file" + ], + "title": "Body_send_audio_radio_send_audio_post" + }, + "EmergencyRequestBody": { + "properties": { + "target_callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Target Callsign", + "description": "Target callsign (optional)" + }, + "contact_phone": { + "type": "string", + "title": "Contact Phone", + "description": "Contact phone E.164 for SMS/WhatsApp" + }, + "contact_channel": { + "type": "string", + "title": "Contact Channel", + "description": "sms or whatsapp" + }, + "notes": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Notes" + } + }, + "type": 
"object", + "required": [ + "contact_phone", + "contact_channel" + ], + "title": "EmergencyRequestBody", + "description": "Body for POST /emergency/request." + }, + "HTTPValidationError": { + "properties": { + "detail": { + "items": { + "$ref": "#/components/schemas/ValidationError" + }, + "type": "array", + "title": "Detail" + } + }, + "type": "object", + "title": "HTTPValidationError" + }, + "InjectAndStoreBody": { + "properties": { + "text": { + "type": "string", + "minLength": 1, + "title": "Text" + }, + "band": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Band" + }, + "frequency_hz": { + "type": "number", + "title": "Frequency Hz", + "default": 0.0 + }, + "mode": { + "type": "string", + "title": "Mode", + "default": "PSK31" + }, + "source_callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Source Callsign" + }, + "destination_callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Destination Callsign" + }, + "audio_path": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Audio Path" + }, + "metadata": { + "additionalProperties": true, + "type": "object", + "title": "Metadata" + } + }, + "type": "object", + "required": [ + "text" + ], + "title": "InjectAndStoreBody", + "description": "Body for POST /messages/inject-and-store." + }, + "InjectMessageBody": { + "properties": { + "text": { + "type": "string", + "minLength": 1, + "title": "Text", + "description": "Message text to inject as received" + }, + "band": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Band", + "description": "Band name (e.g. 
40m, 2m)" + }, + "frequency_hz": { + "type": "number", + "title": "Frequency Hz", + "description": "Frequency in Hz", + "default": 0.0 + }, + "mode": { + "type": "string", + "title": "Mode", + "description": "Mode (PSK31, FT8, FM, etc.)", + "default": "PSK31" + }, + "source_callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Source Callsign", + "description": "Source callsign" + }, + "destination_callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Destination Callsign", + "description": "Destination callsign" }, - "band": { + "audio_path": { "anyOf": [ { "type": "string" @@ -1526,19 +3293,45 @@ "type": "null" } ], - "title": "Band" + "title": "Audio Path", + "description": "Optional path to audio file (stored with transcript)" }, - "mode": { + "metadata": { + "additionalProperties": true, + "type": "object", + "title": "Metadata" + } + }, + "type": "object", + "required": [ + "text" + ], + "title": "InjectMessageBody", + "description": "Body for POST /inject/message (user injection for demo)." 
+ }, + "LocationResponse": { + "properties": { + "id": { + "type": "integer", + "title": "Id" + }, + "callsign": { "type": "string", - "title": "Mode", - "default": "PSK31" + "title": "Callsign" }, - "frequency_hz": { + "latitude": { "type": "number", - "title": "Frequency Hz", - "default": 0.0 + "title": "Latitude" }, - "session_id": { + "longitude": { + "type": "number", + "title": "Longitude" + }, + "source": { + "type": "string", + "title": "Source" + }, + "timestamp": { "anyOf": [ { "type": "string" @@ -1547,56 +3340,93 @@ "type": "null" } ], - "title": "Session Id" + "title": "Timestamp" }, - "inject": { - "type": "boolean", - "title": "Inject", - "default": false + "confidence": { + "anyOf": [ + { + "type": "number" + }, + { + "type": "null" + } + ], + "title": "Confidence" } }, "type": "object", "required": [ - "file", - "source_callsign" + "id", + "callsign", + "latitude", + "longitude", + "source", + "timestamp" ], - "title": "Body_message_from_audio_messages_from_audio_post" + "title": "LocationResponse", + "description": "Response for stored or retrieved location (explicit lat/lon, no raw geometry)." }, - "Body_register_from_audio_callsigns_register_from_audio_post": { + "OptOutBody": { "properties": { - "file": { + "callsign": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Callsign", + "description": "Callsign to opt out" + }, + "phone": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Phone", + "description": "Phone (E.164) to opt out; used if callsign not set" + }, + "channel": { "type": "string", - "contentMediaType": "application/octet-stream", - "title": "File" + "title": "Channel", + "description": "sms or whatsapp" } }, "type": "object", "required": [ - "file" + "channel" ], - "title": "Body_register_from_audio_callsigns_register_from_audio_post" + "title": "OptOutBody", + "description": "Body for POST /internal/opt-out. Used by webhook when user sends STOP." 
}, - "HTTPValidationError": { + "PatchCallsignBandsBody": { "properties": { - "detail": { + "preferred_bands": { "items": { - "$ref": "#/components/schemas/ValidationError" + "type": "string" }, "type": "array", - "title": "Detail" + "minItems": 0, + "title": "Preferred Bands", + "description": "Preferred bands e.g. [40m, 2m]" } }, "type": "object", - "title": "HTTPValidationError" + "required": [ + "preferred_bands" + ], + "title": "PatchCallsignBandsBody", + "description": "Body for PATCH /callsigns/registered/{callsign}." }, - "InjectAndStoreBody": { + "PatchContactPreferencesBody": { "properties": { - "text": { - "type": "string", - "minLength": 1, - "title": "Text" - }, - "band": { + "notify_sms_phone": { "anyOf": [ { "type": "string" @@ -1605,19 +3435,10 @@ "type": "null" } ], - "title": "Band" - }, - "frequency_hz": { - "type": "number", - "title": "Frequency Hz", - "default": 0.0 - }, - "mode": { - "type": "string", - "title": "Mode", - "default": "PSK31" + "title": "Notify Sms Phone", + "description": "E.164; set to empty string to clear" }, - "source_callsign": { + "notify_whatsapp_phone": { "anyOf": [ { "type": "string" @@ -1626,20 +3447,22 @@ "type": "null" } ], - "title": "Source Callsign" + "title": "Notify Whatsapp Phone", + "description": "E.164; set to empty string to clear" }, - "destination_callsign": { + "notify_on_relay": { "anyOf": [ { - "type": "string" + "type": "boolean" }, { "type": "null" } ], - "title": "Destination Callsign" + "title": "Notify On Relay", + "description": "Enable notify when a message is left for this callsign" }, - "audio_path": { + "consent_source": { "anyOf": [ { "type": "string" @@ -1648,54 +3471,63 @@ "type": "null" } ], - "title": "Audio Path" + "title": "Consent Source", + "description": "api / web / voice; required when enabling notify_on_relay" }, - "metadata": { - "additionalProperties": true, - "type": "object", - "title": "Metadata" + "consent_confirmed": { + "anyOf": [ + { + "type": "boolean" + }, + { + 
"type": "null" + } + ], + "title": "Consent Confirmed", + "description": "Explicit consent; required for EU/UK/ZA when enabling notify" } }, "type": "object", - "required": [ - "text" - ], - "title": "InjectAndStoreBody", - "description": "Body for POST /messages/inject-and-store." + "title": "PatchContactPreferencesBody", + "description": "Body for PATCH /callsigns/registered/{callsign}/contact-preferences (\u00a78.1)." }, - "InjectMessageBody": { + "PostLocationBody": { "properties": { - "text": { + "callsign": { "type": "string", "minLength": 1, - "title": "Text", - "description": "Message text to inject as received" + "title": "Callsign", + "description": "Operator callsign" }, - "band": { + "latitude": { "anyOf": [ { - "type": "string" + "type": "number", + "maximum": 90.0, + "minimum": -90.0 }, { "type": "null" } ], - "title": "Band", - "description": "Band name (e.g. 40m, 2m)" - }, - "frequency_hz": { - "type": "number", - "title": "Frequency Hz", - "description": "Frequency in Hz", - "default": 0.0 + "title": "Latitude", + "description": "Latitude (WGS 84)" }, - "mode": { - "type": "string", - "title": "Mode", - "description": "Mode (PSK31, FT8, FM, etc.)", - "default": "PSK31" + "longitude": { + "anyOf": [ + { + "type": "number", + "maximum": 180.0, + "minimum": -180.0 + }, + { + "type": "null" + } + ], + "title": "Longitude", + "description": "Longitude (WGS 84)" }, - "source_callsign": { + "location_text": { "anyOf": [ { "type": "string" @@ -1704,22 +3536,68 @@ "type": "null" } ], - "title": "Source Callsign", - "description": "Source callsign" + "title": "Location Text", + "description": "Free-text place (v1 strict: not used for storage alone)" }, - "destination_callsign": { + "accuracy_meters": { "anyOf": [ { - "type": "string" + "type": "number", + "minimum": 0.0 }, { "type": "null" } ], - "title": "Destination Callsign", - "description": "Destination callsign" + "title": "Accuracy Meters" }, - "audio_path": { + "altitude_meters": { + "anyOf": [ + { + 
"type": "number" + }, + { + "type": "null" + } + ], + "title": "Altitude Meters" + } + }, + "type": "object", + "required": [ + "callsign" + ], + "title": "PostLocationBody", + "description": "Body for POST /gis/location. Provide either (latitude, longitude) or location_text (v1: text alone returns 400)." + }, + "ReceiverUploadBody": { + "properties": { + "station_id": { + "type": "string", + "title": "Station Id", + "description": "Receiver station ID" + }, + "operator_id": { + "type": "string", + "title": "Operator Id", + "description": "Operator/sub from JWT" + }, + "timestamp": { + "type": "string", + "title": "Timestamp", + "description": "ISO timestamp" + }, + "frequency_hz": { + "type": "number", + "title": "Frequency Hz", + "description": "Frequency in Hz" + }, + "signal_strength_db": { + "type": "number", + "title": "Signal Strength Db", + "description": "Signal strength dB" + }, + "decoded_text": { "anyOf": [ { "type": "string" @@ -1728,21 +3606,26 @@ "type": "null" } ], - "title": "Audio Path", - "description": "Optional path to audio file (stored with transcript)" + "title": "Decoded Text", + "description": "Decoded text if any" }, - "metadata": { - "additionalProperties": true, - "type": "object", - "title": "Metadata" + "mode": { + "type": "string", + "title": "Mode", + "description": "Mode (e.g. FM, FT8)", + "default": "" } }, "type": "object", "required": [ - "text" + "station_id", + "operator_id", + "timestamp", + "frequency_hz", + "signal_strength_db" ], - "title": "InjectMessageBody", - "description": "Body for POST /inject/message (user injection for demo)." + "title": "ReceiverUploadBody", + "description": "Payload from a remote receiver station (SDR samples/decoded data)." 
}, "RegisterBody": { "properties": { @@ -1757,6 +3640,21 @@ "title": "Source", "description": "api or audio", "default": "api" + }, + "preferred_bands": { + "anyOf": [ + { + "items": { + "type": "string" + }, + "type": "array" + }, + { + "type": "null" + } + ], + "title": "Preferred Bands", + "description": "Preferred bands e.g. [40m, 2m]" } }, "type": "object", @@ -1766,6 +3664,24 @@ "title": "RegisterBody", "description": "Body for POST /callsigns/register." }, + "RejectBody": { + "properties": { + "notes": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Notes" + } + }, + "type": "object", + "title": "RejectBody", + "description": "Body for POST /emergency/events/{id}/reject." + }, "RelayBody": { "properties": { "message": { @@ -1778,8 +3694,16 @@ "title": "Source Band" }, "target_band": { - "type": "string", - "title": "Target Band" + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Target Band", + "description": "Target band (e.g. 
2m) when target_channel=radio; ignored when target_channel is sms/whatsapp" }, "source_frequency_hz": { "anyOf": [ @@ -1830,6 +3754,18 @@ ], "title": "Session Id" }, + "deliver_at": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Deliver At", + "description": "ISO datetime when message should be delivered (optional)" + }, "source_audio_path": { "anyOf": [ { @@ -1851,16 +3787,39 @@ } ], "title": "Target Audio Path" + }, + "target_channel": { + "type": "string", + "title": "Target Channel", + "description": "Delivery channel: radio, sms, or whatsapp", + "default": "radio" + }, + "destination_phone": { + "anyOf": [ + { + "type": "string" + }, + { + "type": "null" + } + ], + "title": "Destination Phone", + "description": "E.164 phone for SMS/WhatsApp when target_channel is sms or whatsapp" + }, + "emergency": { + "type": "boolean", + "title": "Emergency", + "description": "If true and target_channel is sms/whatsapp, queue for human approval (Section 9)", + "default": false } }, "type": "object", "required": [ "message", - "source_band", - "target_band" + "source_band" ], "title": "RelayBody", - "description": "Body for POST /messages/relay (band translation)." + "description": "Body for POST /messages/relay (band translation or SMS/WhatsApp)." }, "SendTTSBody": { "properties": { diff --git a/docs/configuration.md b/docs/configuration.md index 3b95ffe..df695f4 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -121,17 +121,20 @@ API endpoints expect a Bearer JWT. Tokens are issued by `POST /auth/token` (subj ## LLM -The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. 
`http://localhost:11434`); the client passes `api_base` to LiteLLM so custom endpoints work without code changes. +The orchestrator (REACT loop), judge, whitelist agent, and daily-summary cron use an LLM. Set the provider, model, and the matching API key. For **local/custom** endpoints (e.g. [Ollama](https://ollama.ai)), set `provider: custom`, `model` (e.g. `ollama/llama2` or `llama2`), and **`custom_api_base`** (e.g. `http://localhost:11434`). For **[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers)** (serverless models from Groq, Together, etc.), set `provider: huggingface`, `model` (e.g. `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`), and **`huggingface_api_key`** or `HF_TOKEN`; the client uses the HF router URL as `api_base`. For **Google Gemini** (Google AI Studio), set `provider: gemini`, `model` (e.g. `gemini-2.5-flash`, `gemini-2.5-pro`), and **`gemini_api_key`** or `GEMINI_API_KEY`. | Option | Env var | Default | Description | |--------|---------|---------|-------------| -| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`. | -| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`). | +| `llm.provider` | `RADIOSHAQ_LLM__PROVIDER` | `mistral` | One of: `mistral`, `openai`, `anthropic`, `custom`, `huggingface`, `gemini`. | +| `llm.model` | `RADIOSHAQ_LLM__MODEL` | `mistral-large-latest` | Model name (e.g. `mistral-small-latest`, `gpt-4o`, `ollama/llama2`; for **huggingface**: `openai/gpt-oss-120b:groq`, `Qwen/Qwen2.5-7B-Instruct-1M`; for **gemini**: `gemini-2.5-flash`, `gemini-2.5-pro`). | | `llm.mistral_api_key` | `RADIOSHAQ_LLM__MISTRAL_API_KEY` | `null` | Mistral API key (or set `MISTRAL_API_KEY` if your code reads it). | | `llm.openai_api_key` | `RADIOSHAQ_LLM__OPENAI_API_KEY` | `null` | OpenAI API key. 
| | `llm.anthropic_api_key` | `RADIOSHAQ_LLM__ANTHROPIC_API_KEY` | `null` | Anthropic API key. | | `llm.custom_api_base` | `RADIOSHAQ_LLM__CUSTOM_API_BASE` | `null` | **Custom provider base URL** (e.g. `http://localhost:11434` for Ollama). Passed to LiteLLM. | | `llm.custom_api_key` | `RADIOSHAQ_LLM__CUSTOM_API_KEY` | `null` | Custom provider API key. | +| `llm.gemini_api_key` | `RADIOSHAQ_LLM__GEMINI_API_KEY` | `null` | **Gemini** API key (Google AI Studio; or set `GEMINI_API_KEY`). | +| `llm.huggingface_api_key` | `RADIOSHAQ_LLM__HUGGINGFACE_API_KEY` | `null` | **Hugging Face** token for [Inference Providers](https://huggingface.co/docs/inference-providers) (or set `HF_TOKEN`). Token needs "Inference Providers" permission. | +| `llm.huggingface_api_base` | `RADIOSHAQ_LLM__HUGGINGFACE_API_BASE` | `null` | Optional; default `https://router.huggingface.co/v1` when provider is `huggingface`. | | `llm.temperature` | `RADIOSHAQ_LLM__TEMPERATURE` | `0.1` | Sampling temperature (0–2). | | `llm.max_tokens` | `RADIOSHAQ_LLM__MAX_TOKENS` | `4096` | Max tokens per response. | | `llm.timeout_seconds` | `RADIOSHAQ_LLM__TIMEOUT_SECONDS` | `60.0` | Request timeout. | @@ -153,6 +156,7 @@ Per-callsign memory: core blocks, recent messages, daily summaries, and optional | `memory.recent_messages_limit` | `RADIOSHAQ_MEMORY__RECENT_MESSAGES_LIMIT` | `40` | Max recent messages included in context. | | `memory.daily_summary_days` | `RADIOSHAQ_MEMORY__DAILY_SUMMARY_DAYS` | `7` | Days of daily summaries to include. | | `memory.summary_timezone` | `RADIOSHAQ_MEMORY__SUMMARY_TIMEZONE` | `America/New_York` | Timezone for daily summary windows. | +| `memory.memory_retention_days` | `RADIOSHAQ_MEMORY__MEMORY_RETENTION_DAYS` | `0` | Delete memory_messages older than N days; 0 = no delete. | --- @@ -217,7 +221,8 @@ Controls the physical rig (CAT), optional FLDIGI, packet, and SDR TX. If `radio. 
| `radio.radio_reply_use_tts` | `RADIOSHAQ_RADIO__RADIO_REPLY_USE_TTS` | `true` | For MessageBus outbound radio replies, force `use_tts` on/off. | | `radio.tx_audit_log_path` | `RADIOSHAQ_RADIO__TX_AUDIT_LOG_PATH` | `null` | Path to JSONL file for TX audit log. | | `radio.tx_allowed_bands_only` | `RADIOSHAQ_RADIO__TX_ALLOWED_BANDS_ONLY` | `true` | Restrict TX to band_plan bands. | -| `radio.restricted_bands_region` | `RADIOSHAQ_RADIO__RESTRICTED_BANDS_REGION` | `FCC` | Region for restricted bands: `FCC`, `CEPT`. | +| `radio.restricted_bands_region` | `RADIOSHAQ_RADIO__RESTRICTED_BANDS_REGION` | `FCC` | Country/region for restricted-band enforcement: `FCC`, `CA`, `CEPT`, `FR`, `UK`, `ES`, `BE`, `CH`, `LU`, `MC`, `MX`, `AR`, `CL`, … (Americas), `AU`, `ZA`, `NG`, `KE`, … (Africa), `NZ`, `JP`, `IN`. **Do not use `ITU_R1` or `ITU_R3`** here — they are band-plan-only (no restricted bands); use `band_plan_region` for those. | +| `radio.band_plan_region` | `RADIOSHAQ_RADIO__BAND_PLAN_REGION` | `null` | Override band plan source (e.g. `ITU_R1`, `ITU_R3`). If null, uses the backend from `restricted_bands_region`. Use this for ITU region plans; keep `restricted_bands_region` as a country. | | `radio.allowed_callsigns` | (list in YAML) | `null` | Static list of allowed callsigns; merged with DB registry. | | `radio.callsign_registry_required` | `RADIOSHAQ_RADIO__CALLSIGN_REGISTRY_REQUIRED` | `false` | If true, only registered or allowed callsigns for store/relay. | | `radio.sdr_tx_enabled` | `RADIOSHAQ_RADIO__SDR_TX_ENABLED` | `false` | Enable HackRF (or other SDR) TX. | @@ -239,9 +244,62 @@ Controls the physical rig (CAT), optional FLDIGI, packet, and SDR TX. If `radio. | `radio.listener_concurrent_bands` | `RADIOSHAQ_RADIO__LISTENER_CONCURRENT_BANDS` | `true` | If true, one monitor task per band in parallel; if false, single receiver round-robin. 
| | `radio.relay_inject_target_band` | `RADIOSHAQ_RADIO__RELAY_INJECT_TARGET_BAND` | `false` | When relaying (no deliver_at), inject the relayed message into the target band RX queue. | | `radio.relay_tx_target_band` | `RADIOSHAQ_RADIO__RELAY_TX_TARGET_BAND` | `false` | When relaying (no deliver_at), transmit the relayed message on the target band via radio_tx. | +| `radio.station_callsign` | `RADIOSHAQ_RADIO__STATION_CALLSIGN` | `null` | Station callsign for reply call-out; defaults to packet_callsign. | +| `radio.response_radio_format_enabled` | `RADIOSHAQ_RADIO__RESPONSE_RADIO_FORMAT_ENABLED` | `false` | Wrap reply in radio format (station de caller … Over/K). | +| `radio.response_radio_format_style` | `RADIOSHAQ_RADIO__RESPONSE_RADIO_FORMAT_STYLE` | `over` | Sign-off: `over` \| `prosign` (K) \| `none`. | +| `radio.voice_store_keywords` | (list in YAML) | `null` | Only store voice segments containing at least one keyword (case-insensitive). | +| `radio.band_listener_store` | `RADIOSHAQ_RADIO__BAND_LISTENER_STORE` | `true` | When storage set, store band-listener messages as transcripts. | +| `radio.band_listener_store_min_length` | `RADIOSHAQ_RADIO__BAND_LISTENER_STORE_MIN_LENGTH` | `0` | Min message length to store from band listener. | +| `radio.transcript_retention_days` | `RADIOSHAQ_RADIO__TRANSCRIPT_RETENTION_DAYS` | `0` | If > 0, delete transcripts older than N days. | +| `radio.relay_store_only_relayed` | `RADIOSHAQ_RADIO__RELAY_STORE_ONLY_RELAYED` | `false` | When true, relay stores only relayed transcript, not source. | **Relay:** Relay is **store-only by default**. Recipients get messages by **polling** `GET /transcripts?callsign=&destination_only=true&band=`. When `relay_inject_target_band` or `relay_tx_target_band` is enabled, they apply to both the API and the orchestrator relay tool. +**Compliance and region support:** TX is checked against restricted bands and (when `tx_allowed_bands_only` is true) the effective band plan. 
The **compliance plugin** provides region-specific backends:
+
+| Backend key | Restricted bands | Band plan | Typical use |
+|-------------|------------------|-----------|--------------|
+| `FCC` | US 47 CFR §15.205 | Default (ITU R2) | United States |
+| `CA` | FCC baseline (ISED/RBR-4) | Default (ITU R2) | Canada |
+| `CEPT` | EU harmonised (ERC 70-03, ETSI) | IARU R1 (2m 144–146 MHz, 70cm 430–440 MHz) | EU general |
+| `FR` | Same as CEPT | Same as CEPT | France |
+| `UK` | Same as CEPT | Same as CEPT | United Kingdom |
+| `ES` | Same as CEPT | Same as CEPT | Spain |
+| `BE` | Same as CEPT | Same as CEPT | Belgium |
+| `CH` | Same as CEPT | Same as CEPT | Switzerland |
+| `LU` | Same as CEPT | Same as CEPT | Luxembourg |
+| `MC` | Same as CEPT | Same as CEPT | Monaco |
+| `ITU_R1` | None (band-plan only) | IARU R1 | Override band plan only |
+| `ITU_R3` | None (band-plan only) | IARU R3 (2m 144–148 MHz, 70cm 430–440 MHz) | Override band plan for Asia–Pacific |
+| `MX` | FCC baseline (IFT may vary) | Default (ITU R2) | Mexico |
+| `AR`, `CL`, `CO`, `PE`, `VE`, `EC`, `UY`, `PY`, `BO`, `CR`, `PA`, `GT`, `DO` | FCC baseline | Default (ITU R2) | Argentina, Chile, Colombia, Peru, Venezuela, Ecuador, Uruguay, Paraguay, Bolivia, Costa Rica, Panama, Guatemala, Dominican Republic |
+| `AU` | Enforced (ACMA conservative) | IARU R3 | Australia |
+| `ZA` | Enforced (ICASA NRFP) | IARU R1 | South Africa |
+| `NG`, `KE`, `EG`, `MA`, … (see [Response & compliance](response-compliance-and-monitoring.md#21-radio-restricted-bands-and-band-plans)) | Enforced (R1 conservative) | IARU R1 | Nigeria, Kenya, Egypt, Morocco, etc. |
+| `NZ` | Enforced (RSM PIB 21 conservative) | IARU R3 | New Zealand |
+| `JP` | Enforced (conservative set) | IARU R3 | Japan |
+| `IN` | Enforced (conservative set) | IARU R3 | India |
+
+Set `restricted_bands_region: CEPT` (or `FR`, `UK`, `ES`, `BE`, `CH`, `LU`, `MC`) for EU/EEA to enforce CEPT-style restricted bands and R1 band edges.
For Americas use `CA`, `MX`, or country code (`AR`, `CL`, etc.). For Australia/Asia–Pacific use `AU` or `ITU_R3`. For Africa use country code (`ZA`, `NG`, `KE`, etc.) — R1 band plan, national rules apply. Use `band_plan_region: ITU_R1` or `ITU_R3` to override band plan. See [Response & compliance](response-compliance-and-monitoring.md#21-radio-restricted-bands-and-band-plans) for official sources and country→backend mapping. Operators must verify national rules (e.g. ANFR, Ofcom, ACMA, IFT, ISED, ICASA, NCC). + +--- + +## TTS (text-to-speech) + +When `radio.voice_use_tts` is true or a task sets `use_tts: true`, speech is generated from text using the configured TTS provider. Options live under `tts.*`. + +| Option | Env var | Default | Description | +|--------|---------|---------|-------------| +| `tts.provider` | `RADIOSHAQ_TTS__PROVIDER` | `elevenlabs` | `elevenlabs` (API; set `ELEVENLABS_API_KEY`) or `kokoro` (local; run `uv sync --extra tts_kokoro`). | +| `tts.elevenlabs_voice_id` | `RADIOSHAQ_TTS__ELEVENLABS_VOICE_ID` | (Rachel) | ElevenLabs voice ID. | +| `tts.elevenlabs_model_id` | `RADIOSHAQ_TTS__ELEVENLABS_MODEL_ID` | `eleven_multilingual_v2` | ElevenLabs model (e.g. `eleven_turbo_v2_5`, `eleven_flash_v2_5`). | +| `tts.elevenlabs_output_format` | `RADIOSHAQ_TTS__ELEVENLABS_OUTPUT_FORMAT` | `mp3_44100_128` | Output format. | +| `tts.kokoro_voice` | `RADIOSHAQ_TTS__KOKORO_VOICE` | `af_heart` | Kokoro voice name (e.g. `am_michael`, `bf_emma`). | +| `tts.kokoro_lang_code` | `RADIOSHAQ_TTS__KOKORO_LANG_CODE` | `a` | Language code: `a` (US English), `b` (UK English), `e`, `f`, etc. | +| `tts.kokoro_speed` | `RADIOSHAQ_TTS__KOKORO_SPEED` | `1.0` | Speech rate (0.5–2.0). | + +**Kokoro (local TTS):** Install with `uv sync --extra tts_kokoro`. This pulls in `kokoro`, `soundfile`, and Kokoro’s own dependencies (e.g. `torch`, `transformers`, `misaki[en]`). On Linux, the `soundfile` package may require the system library **libsndfile** (e.g. 
`apt install libsndfile1` or `dnf install libsndfile`). + --- ## Audio (voice_rx pipeline) @@ -270,8 +328,8 @@ When `radio.audio_input_enabled` is true, the voice_rx pipeline captures audio f | `audio.min_speech_duration_ms` | `RADIOSHAQ_AUDIO__MIN_SPEECH_DURATION_MS` | `500` | Min segment length. | | `audio.max_speech_duration_ms` | `RADIOSHAQ_AUDIO__MAX_SPEECH_DURATION_MS` | `30000` | Max segment length. | | `audio.silence_duration_ms` | `RADIOSHAQ_AUDIO__SILENCE_DURATION_MS` | `800` | Silence to end segment. | -| `audio.asr_model` | `RADIOSHAQ_AUDIO__ASR_MODEL` | `voxtral` | ASR model name. | -| `audio.asr_language` | `RADIOSHAQ_AUDIO__ASR_LANGUAGE` | `en` | ASR language. | +| `audio.asr_model` | `RADIOSHAQ_AUDIO__ASR_MODEL` | `voxtral` | ASR backend: `voxtral`, `whisper` (local; install with `uv sync --extra audio`), or `scribe` (ElevenLabs API; set `ELEVENLABS_API_KEY`). | +| `audio.asr_language` | `RADIOSHAQ_AUDIO__ASR_LANGUAGE` | `en` | ASR language (`en`, `fr`, `es`, or `auto`). | | `audio.asr_min_confidence` | `RADIOSHAQ_AUDIO__ASR_MIN_CONFIDENCE` | `0.6` | Min ASR confidence (0–1). | | `audio.response_mode` | `RADIOSHAQ_AUDIO__RESPONSE_MODE` | `listen_only` | `listen_only`, `confirm_first`, `auto_respond`, `confirm_timeout`. | | `audio.response_timeout_seconds` | `RADIOSHAQ_AUDIO__RESPONSE_TIMEOUT_SECONDS` | `30.0` | Timeout for confirm_timeout mode. | @@ -288,6 +346,9 @@ When `radio.audio_input_enabled` is true, the voice_rx pipeline captures audio f | `audio.ptt_coordination_enabled` | `RADIOSHAQ_AUDIO__PTT_COORDINATION_ENABLED` | `true` | PTT coordination for half-duplex. | | `audio.ptt_cooldown_ms` | `RADIOSHAQ_AUDIO__PTT_COOLDOWN_MS` | `500` | PTT cooldown (ms). | | `audio.break_in_enabled` | `RADIOSHAQ_AUDIO__BREAK_IN_ENABLED` | `true` | Allow break-in. 
| +| `audio.eleven_voice_isolator_enabled` | `RADIOSHAQ_AUDIO__ELEVEN_VOICE_ISOLATOR_ENABLED` | `false` | When true and asr_model is scribe, run ElevenLabs Voice Isolator before Scribe (requires ELEVENLABS_API_KEY). | +| `audio.voice_publish_to_bus` | `RADIOSHAQ_AUDIO__VOICE_PUBLISH_TO_BUS` | `true` | Publish transcribed voice segments to MessageBus for orchestrator. | +| `audio.voice_source_callsign_default` | `RADIOSHAQ_AUDIO__VOICE_SOURCE_CALLSIGN_DEFAULT` | `null` | Default sender_id for voice when not parsed (e.g. 'VOICE'); null = 'UNKNOWN'. | **Response modes:** - **listen_only** — Transcribe only; no TX. @@ -336,6 +397,58 @@ When `mode: hq`, these options apply. --- +## Twilio (SMS & WhatsApp) + +RadioShaq can send and receive **SMS** and **WhatsApp** messages via **Twilio** (same account for both). Outbound delivery is handled by the single outbound dispatcher when the MessageBus consumer is enabled. + +| Option | Env var | Default | Description | +|--------|---------|---------|-------------| +| `twilio.account_sid` | `RADIOSHAQ_TWILIO__ACCOUNT_SID` | `null` | Twilio Account SID (required for SMS and WhatsApp send). | +| `twilio.auth_token` | `RADIOSHAQ_TWILIO__AUTH_TOKEN` | `null` | Twilio Auth Token. | +| `twilio.from_number` | `RADIOSHAQ_TWILIO__FROM_NUMBER` | `null` | SMS sender phone number (E.164, e.g. `+15551234567`). | +| `twilio.whatsapp_from` | `RADIOSHAQ_TWILIO__WHATSAPP_FROM` | `null` | WhatsApp sender number (E.164); must be WhatsApp-enabled in Twilio. Optional; if unset, the WhatsApp agent is registered but returns "not configured" on send. | + +All use the **`RADIOSHAQ_TWILIO__`** prefix. See [reference/.env.example](reference/.env.example) for a commented template. 
+
+**Config file (YAML):** You can set the same under `twilio` in `config.yaml`:
+
+```yaml
+twilio:
+  account_sid: "ACxxxx"
+  auth_token: "your-auth-token"
+  from_number: "+15551234567"
+  whatsapp_from: "+15551234567"  # optional; same or different number enabled for WhatsApp
+```
+
+Environment variables override file values.
+
+**Behavior:**
+
+- **SMS:** If `account_sid`, `auth_token`, and `from_number` are set, the SMS agent sends via Twilio. Otherwise, send returns `success: false` with `reason: "twilio_not_configured"`.
+- **WhatsApp:** If `whatsapp_from` is also set (and the Twilio client exists), the WhatsApp agent sends via the Twilio WhatsApp Business API (`whatsapp:+E.164`). Otherwise, the agent is still registered but returns "Twilio WhatsApp not configured" on send.
+- **Inbound:** Configure Twilio webhooks (SMS and/or WhatsApp) to POST to your Lambda or directly to `https://<your-host>/internal/bus/inbound` with a body like `{"channel": "sms"|"whatsapp", "chat_id": "", "sender_id": "...", "content": "..."}`. See [Twilio WhatsApp webhooks](https://www.twilio.com/docs/sms/whatsapp/api#configuring-inbound-message-webhooks) and opt-in requirements.
+
+**Notify when a message is left for you (§8.1, §8.3):** Whitelisted callsigns can opt in to receive a short SMS or WhatsApp notification when a message is delivered to them on radio (notify-on-relay). Set contact preferences via `GET`/`PATCH /callsigns/registered/{callsign}/contact-preferences`; in strict regions (e.g. EU/UK/ZA), `consent_confirmed: true` is required when enabling. Recipients can reply **STOP** to opt out; configure your Twilio webhook (or Lambda) to call `POST /internal/opt-out` with `{"phone": "+1234567890", "channel": "sms"}` or `{"callsign": "K5ABC", "channel": "whatsapp"}`. See [Response & compliance](response-compliance-and-monitoring.md) and the project doc *Notify and emergency compliance plan* (in `radioshaq/docs/`) for region-specific consent and opt-out rules.
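The notify-on-relay and opt-out bodies above can be sketched as plain JSON payloads. A minimal Python sketch of what a caller would send (field names taken from the API schema; the callsign and phone number are made-up examples):

```python
import json

# Hypothetical example values -- substitute a real callsign and E.164 phone.
callsign = "K5ABC"
phone = "+15551234567"

# PATCH /callsigns/registered/{callsign}/contact-preferences
# Enabling notify_on_relay requires consent_source (api / web / voice);
# strict regions (EU/UK/ZA) also require consent_confirmed: true.
prefs_path = f"/callsigns/registered/{callsign}/contact-preferences"
prefs_body = {
    "notify_sms_phone": phone,
    "notify_on_relay": True,
    "consent_source": "api",
    "consent_confirmed": True,
}

# POST /internal/opt-out -- called by the webhook when a user replies STOP.
opt_out_body = {"phone": phone, "channel": "sms"}

print(prefs_path)
print(json.dumps(prefs_body))
print(json.dumps(opt_out_body))
```

Send these with any HTTP client, using the usual Bearer JWT on the contact-preferences endpoint.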
+ +**References:** [Twilio WhatsApp API overview](https://www.twilio.com/docs/sms/whatsapp/api), [Twilio WhatsApp quickstart (Python)](https://www.twilio.com/docs/whatsapp/quickstart/python), [WhatsApp opt-in requirements](https://www.twilio.com/docs/sms/whatsapp/api#whatsapp-opt-in-requirements) (required for production). + +--- + +## Emergency contact + +Emergency contact settings control whether the station can receive and display emergency events (e.g. from external systems) and how approval/regions work. All options use the **`RADIOSHAQ_EMERGENCY_CONTACT__`** prefix. + +| Option | Env var | Default | Description | +|--------|---------|---------|-------------| +| `emergency_contact.enabled` | `RADIOSHAQ_EMERGENCY_CONTACT__ENABLED` | `false` | Enable emergency contact / event handling. | +| `emergency_contact.regions_allowed` | `RADIOSHAQ_EMERGENCY_CONTACT__REGIONS_ALLOWED` | (list) | Allowed regions for emergency events (e.g. list of region codes). | +| `emergency_contact.approval_required` | `RADIOSHAQ_EMERGENCY_CONTACT__APPROVAL_REQUIRED` | `true` | Require human approval before acting on emergency events. | +| `emergency_contact.allowed_event_types` | `RADIOSHAQ_EMERGENCY_CONTACT__ALLOWED_EVENT_TYPES` | (list) | Event types to accept (e.g. alert types). | + +See [Response & compliance](response-compliance-and-monitoring.md) for emergency workflows and the project doc *Notify and emergency compliance plan* (in `radioshaq/docs/`) for region-specific rules. + +--- + ## PM2 (process manager) Used when running under PM2 (e.g. `ecosystem.config.js`). Log and process settings. @@ -359,7 +472,32 @@ Used when running under PM2 (e.g. `ecosystem.config.js`). Log and process settin - **MessageBus consumer** — The API can run an inbound message consumer in the background so external systems can push work into the REACT loop. Set **`RADIOSHAQ_BUS_CONSUMER_ENABLED=1`** (or `true`/`yes`) to enable it. If not set, the consumer is disabled. 
- **API host/port** — The server uses `API_HOST` / `API_PORT` or **`RADIOSHAQ_API_HOST`** / **`RADIOSHAQ_API_PORT`** when starting uvicorn (default from `hq.host` and `hq.port`). - **CLI** — Scripts that call the API use **`RADIOSHAQ_API`** (base URL, default `http://localhost:8000`) and **`RADIOSHAQ_TOKEN`** (Bearer token). -- **TTS** — When `radio.voice_use_tts` is true (or when a task sets `use_tts: true`, including MessageBus replies when `radio.radio_reply_use_tts: true`), ElevenLabs is used; set **`ELEVENLABS_API_KEY`** in the environment. +- **TTS** — When `radio.voice_use_tts` is true (or when a task sets `use_tts: true`), speech is generated per **`tts.provider`**: **`elevenlabs`** (set **`ELEVENLABS_API_KEY`**) or **`kokoro`** (local; `uv sync --extra tts_kokoro`). See [TTS (text-to-speech)](#tts-text-to-speech). +- **ASR** — Voice pipeline and audio upload use **`audio.asr_model`**: **`voxtral`** / **`whisper`** (local; `uv sync --extra audio`) or **`scribe`** (ElevenLabs API; **`ELEVENLABS_API_KEY`**). - **Alembic** — Migrations read **`DATABASE_URL`** or **`POSTGRES_HOST`**, **`POSTGRES_PORT`**, **`POSTGRES_DB`**, **`POSTGRES_USER`**, **`POSTGRES_PASSWORD`** (see [.env.example](reference/.env.example)). +### Maps (web interface) + +The web UI can show **operator locations** and **emergency events** on a map. You can choose **OpenStreetMap** (free, no API key) or **Google Maps** (requires an API key). At launch the app reads the provider from: (1) **localStorage** key `radioshaq_mapProvider` (user's last choice), then (2) **`VITE_MAP_PROVIDER`** (`osm` or `google`). On the Map page you can switch between OpenStreetMap and Google Maps; the choice is saved in localStorage. If Google is selected and **`VITE_GOOGLE_MAPS_API_KEY`** is not set, the map shows a message to set the key or switch to OSM. Restrict the Google key by HTTP referrer in Google Cloud Console and enable only the APIs you need (e.g. Maps JavaScript API). 
+ +**Front-end (Vite) env vars:** + +| Env var | Description | +|---------|-------------| +| `VITE_MAP_PROVIDER` | `osm` or `google`. | +| `VITE_GOOGLE_MAPS_API_KEY` | Google Maps API key (when provider is `google`). | +| `VITE_DEFAULT_MAP_CENTER_LAT` | Default map center latitude. | +| `VITE_DEFAULT_MAP_CENTER_LON` | Default map center longitude. | +| `VITE_DEFAULT_MAP_ZOOM` | Default zoom level. | +| `VITE_DEFAULT_MAP_RADIUS_METERS` | Default radius in meters (e.g. for “nearby” queries). | +| `VITE_MAP_SOURCE` | OSM: active tile source id (default `osm`). | +| `VITE_MAP_TILE_URL` | OSM: Leaflet tile URL template (`{z}`, `{x}`, `{y}`, optional `{s}`). | +| `VITE_MAP_TILE_ATTRIBUTION` | OSM: attribution text/HTML. | +| `VITE_MAP_TILE_SUBDOMAINS` | OSM: subdomains (e.g. `a,b,c`). | +| `VITE_MAP_SOURCES` | OSM: JSON array of tile sources (`id`, `name`, `tileUrlTemplate`, `attribution`, etc.). | + +**Where maps appear:** Map page (full-screen operator and emergency map), Emergency page (event map), Radio page (field map panel), Transcripts (“View on map”), Callsigns (“Set location”). When the provider is OpenStreetMap, the built-in source `osm` uses `https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png`. For custom tiles use the single-source vars above, or provide multiple sources in `VITE_MAP_SOURCES`. + +Backend GIS endpoints (`POST/GET /gis/location`, `GET /gis/operators-nearby`, `GET /gis/emergency-events`) are documented in the [API Reference](api-reference.md). Backend tests: `radioshaq/tests/unit/test_gis_routes.py`; manually verify with `npm run dev` and the Map page. + For a minimal path from zero to a running station, follow [Quick Start](quick-start.md); for hardware and rig-specific details, see [Radio Usage](radio-usage.md). 
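`VITE_MAP_SOURCES` holds a JSON array of tile sources; it is easy to sanity-check before a build. A sketch (requires only the `id` and `tileUrlTemplate` fields from the table above; not part of the front-end code):

```python
import json

def parse_map_sources(raw):
    """Parse VITE_MAP_SOURCES and require id + tileUrlTemplate per source."""
    sources = json.loads(raw)
    if not isinstance(sources, list):
        raise ValueError("VITE_MAP_SOURCES must be a JSON array")
    for src in sources:
        for field in ("id", "tileUrlTemplate"):
            if field not in src:
                raise ValueError(f"map source missing {field!r}")
    return sources

raw = ('[{"id": "osm", "name": "OpenStreetMap", '
       '"tileUrlTemplate": "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"}]')
sources = parse_map_sources(raw)
```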
diff --git a/docs/index.md b/docs/index.md index 25474bb..16a20e9 100644 --- a/docs/index.md +++ b/docs/index.md @@ -97,6 +97,7 @@ So: you **set up** one or more field stations (and optionally an HQ and remote r - **SDR TX** — Optional HackRF transmit when `radio.sdr_tx_enabled` is true; band and compliance checks apply. - **Callsign whitelist** — Static list plus DB-backed registration; optional “registry required” for relay/store. - **Transcripts, relay, inject** — Store transcripts; relay between bands via the orchestrator tool `relay_message_between_bands` or `POST /messages/relay`. Delivery is poll-based by default (recipient uses `GET /transcripts?callsign=...&destination_only=true&band=...`); optional config can enable inject or TX on the target band. Inject test audio for demos; when multiple bands are monitored, inject is band-accurate. +- **GIS / maps** — Operator location (`POST`/`GET /gis/location`), operators-nearby, and emergency events with location (`GET /gis/emergency-events`). The web UI Map page and map panels (Emergency, Radio, Transcripts, Callsigns) show locations; map provider (OSM vs Google) and API keys are set via front-end env vars — see [Configuration](configuration.md) (Maps / web interface subsection). - **Compliance** — TX audit log, band allowlist, and region-based restricted bands (e.g. FCC, CEPT) to keep operations within regulations. --- @@ -111,6 +112,7 @@ So: you **set up** one or more field stations (and optionally an HQ and remote r - Inject for demo: `radioshaq message inject "text"`. - List transcripts: `radioshaq transcripts list`. - Health: `radioshaq health`. + - **Config show:** `radioshaq config show [--section llm|memory|overrides] [--config-dir PATH]` — prints LLM, memory, and per-role overrides from config (API keys redacted). - Start API: `radioshaq run-api`. 
- **Launch (dev):** `radioshaq launch docker` (start Postgres), `radioshaq launch docker --hindsight` (Postgres + Hindsight), `radioshaq launch pm2` (Postgres + API via PM2), `radioshaq launch pm2 --hindsight` (same + Hindsight). Same commands on Windows, Linux, and macOS. diff --git a/docs/monitoring.md b/docs/monitoring.md deleted file mode 100644 index fb9e39e..0000000 --- a/docs/monitoring.md +++ /dev/null @@ -1,56 +0,0 @@ -# Monitoring - -RadioShaq exposes a **Prometheus**-compatible scrape endpoint and an optional **VAD/metrics WebSocket** for the dashboard. - -## Prometheus `/metrics` - -**Endpoint:** `GET /metrics` (no authentication). - -Returns Prometheus exposition format (text/plain) with: - -| Metric | Type | Description | -|--------|------|-------------| -| `radioshaq_uptime_seconds` | gauge | Process uptime in seconds | -| `radioshaq_callsigns_registered_total` | gauge | Number of registered (whitelisted) callsigns | -| `radioshaq_gpu_utilization_percent` | gauge | GPU utilization 0–100 (when `nvidia-smi` is available) | -| `radioshaq_gpu_memory_used_mb` | gauge | GPU memory used in MB | -| `radioshaq_gpu_memory_total_mb` | gauge | GPU memory total in MB | - -GPU metrics are populated only when **nvidia-smi** is on the PATH and returns data (e.g. NVIDIA drivers and GPU present). No extra Python dependency is required for GPU metrics. - -**Optional:** Install the `prometheus_client` library for standard exposition format and future expansion (e.g. default process metrics): - -```bash -cd radioshaq && uv sync --extra metrics -``` - -Without it, the server still exposes the gauges above in valid Prometheus text format. - -**Example scrape config (Prometheus):** - -```yaml -scrape_configs: - - job_name: radioshaq - static_configs: - - targets: ['localhost:8000'] - metrics_path: /metrics -``` - -## VAD / audio metrics WebSocket - -**Endpoint:** `WS /ws/audio/metrics/{session_id}`. 
- -Used by the dashboard **VAD visualizer** for real-time audio pipeline state (VAD active, SNR, state). By default the server sends a **placeholder heartbeat** every second (`vad_active: false`, `snr_db: null`, `state: "idle"`). - -When the **voice_rx pipeline** is wired, the pipeline can push live metrics by setting **`app.state.audio_metrics_latest`** to a dict before each WebSocket send: - -- `vad_active` (bool): whether voice activity is detected -- `snr_db` (float | null): signal-to-noise ratio in dB -- `state` (str): e.g. `"idle"`, `"speech"`, `"processing"` -- `type` (str, optional): e.g. `"metrics"` or `"heartbeat"` - -The WebSocket handler reads this once per second and sends it to connected clients. If `audio_metrics_latest` is not set or not a dict, the placeholder heartbeat is sent. - -## Health checks - -Use **`GET /health`** for liveness and **`GET /health/ready`** for readiness (DB, orchestrator, audio agent). See [API Reference](api-reference.md). diff --git a/docs/quick-start.md b/docs/quick-start.md index 541da07..39a2ffe 100644 --- a/docs/quick-start.md +++ b/docs/quick-start.md @@ -4,6 +4,8 @@ This guide gets the RadioShaq API running on your machine in a few minutes. By t **(Optional) Interactive setup:** Run `radioshaq setup` from the `radioshaq/` directory to be guided through mode, database (Docker or URL), JWT, LLM, and optional radio/memory/field settings. Radio setup includes prompts for MessageBus outbound radio replies and whether those replies use TTS. It writes `.env` and `config.yaml` to the project root and can start Docker Postgres and run migrations. See [Configuration](configuration.md#interactive-setup). +**Voice (TTS/ASR):** TTS can use **ElevenLabs** (set `ELEVENLABS_API_KEY`) or **Kokoro** (local: `uv sync --extra tts_kokoro`). ASR can use **Voxtral/Whisper** (local: `uv sync --extra audio`) or **Scribe** (ElevenLabs API). See [Configuration → TTS and Audio](configuration.md#tts-text-to-speech). 
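The TTS/ASR choices above pair each provider with one prerequisite. That pairing can be written down as a small lookup (a restatement of the documented rules, not RadioShaq code):

```python
# Prerequisite per provider, as documented: ElevenLabs-backed choices
# need ELEVENLABS_API_KEY; local choices need a uv extra installed.
REQUIREMENTS = {
    ("tts", "elevenlabs"): "set ELEVENLABS_API_KEY",
    ("tts", "kokoro"): "uv sync --extra tts_kokoro",
    ("asr", "scribe"): "set ELEVENLABS_API_KEY",
    ("asr", "voxtral"): "uv sync --extra audio",
    ("asr", "whisper"): "uv sync --extra audio",
}

def requirement(kind, provider):
    """Return the prerequisite for a (tts|asr, provider) choice."""
    return REQUIREMENTS.get((kind, provider), "unknown provider")
```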
+ **(Optional) Full automated setup:** From the `radioshaq/` directory, run one script to install deps, create config, start Docker Postgres (and optionally Hindsight), run migrations, and install PM2 if Node is present: **Windows** — `.\infrastructure\local\setup.ps1`; **Linux/macOS** — `./infrastructure/local/setup.sh` (or `bash infrastructure/local/setup.sh`). Then start the API with `radioshaq launch pm2` or `radioshaq run-api`. Alternatively, follow the steps below. --- @@ -109,3 +111,4 @@ Use the token in the `Authorization` header: `Bearer $TOKEN` (Bash) or `Bearer $ - **Configure for production** — Set [Configuration](configuration.md): `RADIOSHAQ_JWT__SECRET_KEY`, LLM provider and API key, and optionally `RADIOSHAQ_MODE`, database URL, and log level. - **Connect a radio** — See [Radio Usage](radio-usage.md) for rig model IDs, ports, and voice TX/RX setup (IC-7300, FT-450D, RTL-SDR, HackRF). - **Explore the API** — Use the [API Reference](api-reference.md) and the live docs at http://localhost:8000/docs to try `/auth/token`, `/messages/process`, `/transcripts`, and relay/inject endpoints. +- **Web UI / maps** — The web interface can show maps (operator locations, emergency events). Map provider and API keys are configured via front-end env vars (e.g. `VITE_MAP_PROVIDER`, `VITE_GOOGLE_MAPS_API_KEY`); see [Configuration](configuration.md) (Maps / web interface subsection). diff --git a/docs/radio-usage.md b/docs/radio-usage.md index 591adbb..db0e2ca 100644 --- a/docs/radio-usage.md +++ b/docs/radio-usage.md @@ -81,7 +81,7 @@ Use case: **Voice RX** with `response_mode: auto_respond` (or confirm_first). Fo - **First contact + chat:** Use **POST /messages/process** with `message` and `sender_id` (or `callsign`). Enable memory in config so the station loads context and first-contact hint when there’s no prior history. - **Whitelist:** Use **POST /messages/whitelist-request** with text or audio; optionally send `callsign` in the body. 
Enable the bus consumer if replies go through the MessageBus. - **MessageBus radio replies:** Control whether outbound bus replies transmit and whether they use TTS with `radio.radio_reply_tx_enabled` and `radio.radio_reply_use_tts`. -- **Radio-style call-out:** Set `radio.station_callsign` and optionally `radio.response_radio_format_enabled: true` so replies are wrapped as “STATION de CALLSIGN … Over.” See [Configuration](configuration.md) and the user-flow investigation in the repo for details. +- **Radio-style call-out:** Set `radio.station_callsign` and optionally `radio.response_radio_format_enabled: true` (and `response_radio_format_style`: `over` | `prosign` | `none`) so replies are wrapped as “STATION de CALLSIGN … Over.” See [Configuration](configuration.md#radio) for the Radio table and options. --- diff --git a/docs/reference/.env.example b/docs/reference/.env.example index a3bad2d..e823ecc 100644 --- a/docs/reference/.env.example +++ b/docs/reference/.env.example @@ -59,14 +59,19 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_LLM__ANTHROPIC_API_KEY= # RADIOSHAQ_LLM__CUSTOM_API_BASE= # RADIOSHAQ_LLM__CUSTOM_API_KEY= +# RADIOSHAQ_LLM__GEMINI_API_KEY= # For provider: gemini (Google AI Studio) +# RADIOSHAQ_LLM__HUGGINGFACE_API_KEY= # For provider: huggingface (Inference Providers) +# RADIOSHAQ_LLM__HUGGINGFACE_API_BASE= # Optional; default https://router.huggingface.co/v1 # RADIOSHAQ_LLM__TEMPERATURE=0.1 # RADIOSHAQ_LLM__MAX_TOKENS=4096 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0 # RADIOSHAQ_LLM__MAX_RETRIES=3 # RADIOSHAQ_LLM__RETRY_DELAY_SECONDS=1.0 -# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY directly +# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY / HF_TOKEN / GEMINI_API_KEY directly # MISTRAL_API_KEY= # OPENAI_API_KEY= +# HF_TOKEN= # Hugging Face token with "Inference Providers" permission (when provider is huggingface) +# GEMINI_API_KEY= # ----------------------------------------------------------------------------- 
# Memory (per-callsign memory, Hindsight, daily summaries) @@ -77,6 +82,7 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_MEMORY__RECENT_MESSAGES_LIMIT=40 # RADIOSHAQ_MEMORY__DAILY_SUMMARY_DAYS=7 # RADIOSHAQ_MEMORY__SUMMARY_TIMEZONE=America/New_York +# RADIOSHAQ_MEMORY__MEMORY_RETENTION_DAYS=0 # ----------------------------------------------------------------------------- # Radio (CAT, FLDIGI, packet, SDR TX, voice) @@ -114,6 +120,14 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_RADIO__AUDIO_INPUT_ENABLED=false # RADIOSHAQ_RADIO__AUDIO_OUTPUT_ENABLED=false # RADIOSHAQ_RADIO__AUDIO_MONITORING_ENABLED=false +# RADIOSHAQ_RADIO__STATION_CALLSIGN= +# RADIOSHAQ_RADIO__RESPONSE_RADIO_FORMAT_ENABLED=false +# RADIOSHAQ_RADIO__RESPONSE_RADIO_FORMAT_STYLE=over +# RADIOSHAQ_RADIO__BAND_LISTENER_STORE=true +# RADIOSHAQ_RADIO__BAND_LISTENER_STORE_MIN_LENGTH=0 +# RADIOSHAQ_RADIO__TRANSCRIPT_RETENTION_DAYS=0 +# RADIOSHAQ_RADIO__RELAY_STORE_ONLY_RELAYED=false +# voice_store_keywords: use config.yaml (list) # allowed_callsigns: use config.yaml (list) or JSON in env, e.g. RADIOSHAQ_RADIO__ALLOWED_CALLSIGNS='["K1ABC","W2XYZ"]' # ----------------------------------------------------------------------------- @@ -156,6 +170,9 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_AUDIO__PTT_COORDINATION_ENABLED=true # RADIOSHAQ_AUDIO__PTT_COOLDOWN_MS=500 # RADIOSHAQ_AUDIO__BREAK_IN_ENABLED=true +# RADIOSHAQ_AUDIO__ELEVEN_VOICE_ISOLATOR_ENABLED=false +# RADIOSHAQ_AUDIO__VOICE_PUBLISH_TO_BUS=true +# RADIOSHAQ_AUDIO__VOICE_SOURCE_CALLSIGN_DEFAULT= # trigger_phrases: use config.yaml (list) or JSON, e.g. 
RADIOSHAQ_AUDIO__TRIGGER_PHRASES='["radioshaq","field station"]' # ----------------------------------------------------------------------------- @@ -187,6 +204,14 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_HQ__AUTO_COORDINATION_ENABLED=true # RADIOSHAQ_HQ__COORDINATION_INTERVAL_SECONDS=30 +# ----------------------------------------------------------------------------- +# Emergency contact (emergency events, regions, approval; see response-compliance-and-monitoring.md) +# ----------------------------------------------------------------------------- +# RADIOSHAQ_EMERGENCY_CONTACT__ENABLED=false +# RADIOSHAQ_EMERGENCY_CONTACT__REGIONS_ALLOWED= # JSON array e.g. ["FCC","CA"] +# RADIOSHAQ_EMERGENCY_CONTACT__APPROVAL_REQUIRED=true +# RADIOSHAQ_EMERGENCY_CONTACT__ALLOWED_EVENT_TYPES= # JSON array e.g. ["emergency"] + # ----------------------------------------------------------------------------- # PM2 (process manager) # ----------------------------------------------------------------------------- @@ -214,5 +239,8 @@ POSTGRES_PASSWORD=radioshaq # CLI base URL and token for scripts # RADIOSHAQ_API=http://localhost:8000 # RADIOSHAQ_TOKEN= -# TTS (ElevenLabs) – used when voice_use_tts is true +# TTS: when tts.provider is elevenlabs, set ELEVENLABS_API_KEY. For kokoro (local), use uv sync --extra tts_kokoro. # ELEVENLABS_API_KEY= +# ASR: when audio.asr_model is scribe (ElevenLabs Scribe), ELEVENLABS_API_KEY is required. voxtral/whisper are local (uv sync --extra audio). 
+# RADIOSHAQ_TTS__PROVIDER=elevenlabs +# RADIOSHAQ_AUDIO__ASR_MODEL=voxtral diff --git a/docs/reference/config.example.yaml b/docs/reference/config.example.yaml index 7900c53..a253d29 100644 --- a/docs/reference/config.example.yaml +++ b/docs/reference/config.example.yaml @@ -44,13 +44,14 @@ jwt: # LLM (set API key in env or here; prefer env for secrets) # ----------------------------------------------------------------------------- llm: - provider: mistral # mistral | openai | anthropic | custom + provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini model: mistral-large-latest mistral_api_key: null openai_api_key: null anthropic_api_key: null custom_api_base: null custom_api_key: null + gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY temperature: 0.1 max_tokens: 4096 timeout_seconds: 60.0 @@ -67,6 +68,7 @@ memory: recent_messages_limit: 40 daily_summary_days: 7 summary_timezone: "America/New_York" + memory_retention_days: 0 # Delete memory_messages older than N days; 0 = no delete # ----------------------------------------------------------------------------- # Radio (CAT, FLDIGI, packet, SDR TX, voice) @@ -112,6 +114,11 @@ radio: station_callsign: null # Our callsign for reply; defaults to packet_callsign response_radio_format_enabled: false response_radio_format_style: over # over | prosign (K) | none + voice_store_keywords: null # Only store voice segments containing at least one keyword (list; case-insensitive) + band_listener_store: true # When storage set, store band-listener messages as transcripts + band_listener_store_min_length: 0 # Min message length to store from band listener + transcript_retention_days: 0 # If > 0, delete transcripts older than N days + relay_store_only_relayed: false # When true, relay stores only relayed transcript, not source # ----------------------------------------------------------------------------- # Audio (voice_rx pipeline: capture, VAD, ASR, triggers, response mode) @@ -157,6 
+164,9 @@ audio: ptt_coordination_enabled: true ptt_cooldown_ms: 500 break_in_enabled: true + eleven_voice_isolator_enabled: false # When true and asr_model is scribe, run ElevenLabs Voice Isolator (requires ELEVENLABS_API_KEY) + voice_publish_to_bus: true # Publish transcribed voice segments to MessageBus for orchestrator + voice_source_callsign_default: null # Default sender_id for voice when not parsed; null = 'UNKNOWN' # ----------------------------------------------------------------------------- # Field mode (when mode: field) @@ -189,6 +199,15 @@ hq: auto_coordination_enabled: true coordination_interval_seconds: 30 +# ----------------------------------------------------------------------------- +# Emergency contact (emergency events, regions, approval) +# ----------------------------------------------------------------------------- +emergency_contact: + enabled: false + regions_allowed: [] # e.g. [FCC, CA] + approval_required: true + allowed_event_types: [] # e.g. [emergency] + # ----------------------------------------------------------------------------- # PM2 (process manager) # ----------------------------------------------------------------------------- diff --git a/docs/response-compliance-and-monitoring.md b/docs/response-compliance-and-monitoring.md new file mode 100644 index 0000000..1a95bc1 --- /dev/null +++ b/docs/response-compliance-and-monitoring.md @@ -0,0 +1,273 @@ +# Response & compliance + +Operator response (emergency approval, relay, contact preferences), compliance (radio and messaging), and monitoring (metrics, health, WebSocket). + +--- + +## 1. Response + +### 1.1 Emergency message approval (operator confirmation) + +**Purpose:** Emergency outreach is for the **operator to receive messages and transmit them**. The operator receives each emergency request (message text and destination), then transmits by approving it — the system sends the message via SMS or WhatsApp to the specified contact. 
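The approval gate described above is a small state machine: an event starts `pending`, and an operator decision moves it to `approved` (message sent) or `rejected` (nothing sent); both outcomes are terminal. A sketch of those transitions (illustrative; status names from this section, not the backend's actual model):

```python
VALID_TRANSITIONS = {
    "pending": {"approved", "rejected"},  # operator decision required
    "approved": set(),                    # terminal: message dispatched
    "rejected": set(),                    # terminal: nothing sent
}

def transition(status, decision):
    """Apply an operator decision to an emergency event status."""
    if decision not in VALID_TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move {status!r} -> {decision!r}")
    return decision

status = transition("pending", "approved")
```

Making the approved/rejected states terminal means a decided event can never be re-sent or silently flipped.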
+ +When emergency SMS/WhatsApp is enabled and approval is required, outbound emergency messages are **queued until an operator approves or rejects them**. The operator can use the **web UI** (Emergency page) or the **API**. + +**Config:** Set `RADIOSHAQ_EMERGENCY_CONTACT__ENABLED=true` and `RADIOSHAQ_EMERGENCY_CONTACT__REGIONS_ALLOWED` (e.g. `["FCC","CA"]`). See `.env.example` in the repository; full details are in the project doc *Notify and emergency compliance plan* (radioshaq/docs/). + +**How the operator is informed (timely relay or reject):** + +1. **Audio** — When the pending count goes from 0 to greater than 0, the web UI plays a short alert sound (two beeps) so the operator is notified even if the tab is in the background. +2. **Automated polling** — The web UI polls `GET /emergency/pending-count` every 15s (and the Emergency page polls the event list every 12s). An optional **SSE stream** `GET /emergency/events/stream` sends the pending count every 10s for clients that prefer a long-lived connection. +3. **Push notification** — When new pending messages arrive (count 0 → N), the browser **Notification** API is used (if the user has clicked “Allow notifications” and permission is granted). This notifies the operator when the tab is in the background or the browser is minimised. + +**API endpoints (for scripts or custom UIs):** + +- **Pending count:** `GET /emergency/pending-count` — returns `{"count": N}`. +- **SSE stream:** `GET /emergency/events/stream` — Server-Sent Events; each event is `data: {"pending_count": N}` every ~10s. Requires Bearer token. +- **List pending:** `GET /emergency/events` (optional: `?status=pending`) — returns `events` and `count` with `id`, `initiator_callsign`, `target_callsign`, `notes`, `extra_data` (e.g. `emergency_contact_phone`, `emergency_contact_channel`, `message`). + +**Web UI flows:** Open the **Emergency** page in the RadioShaq web interface. 
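For scripts that consume the SSE stream, each event is a `data:` line carrying JSON. A minimal parser for that framing (a sketch assuming the `data: {"pending_count": N}` shape documented above; comment lines starting with `:` are keep-alives and are skipped):

```python
import json

def pending_counts(lines):
    """Yield pending_count values from SSE 'data:' lines."""
    for line in lines:
        if line.startswith("data:"):
            payload = json.loads(line[len("data:"):].strip())
            yield payload["pending_count"]

stream = [
    'data: {"pending_count": 0}',
    ': keep-alive',
    'data: {"pending_count": 3}',
]
counts = list(pending_counts(stream))
```

A watcher script would alert the operator whenever a yielded count rises above the previous one.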
The page lists all pending emergency events with contact phone, channel (SMS/WhatsApp), and message text. Use **Approve & send** to transmit the message to the contact, or **Reject** to decline (no message sent). Optional notes can be added before approving or rejecting. Click **Allow notifications** to enable browser push when new requests arrive. + +**How the operator transmits (confirms):** + +1. **Approve and send** + `POST /emergency/events/{event_id}/approve` + Optional body: `{"notes": "optional note"}`. Requires a valid **Bearer token**. Sends the message via SMS/WhatsApp. The backend: + + - Verifies the event is `emergency` and `pending` + - Sets `status=approved` and records `approved_at` and `approved_by` (from the JWT: `sub` or `callsign`) + - Publishes the message to the outbound bus (SMS or WhatsApp is sent by the dispatcher) + - Sets `sent_at` in `extra_data` + +**Example (curl):** + +```bash +# Check how many pending (use your JWT) +curl -s -H "Authorization: Bearer YOUR_JWT" "https://your-api/emergency/pending-count" + +# List pending (use your JWT) +curl -s -H "Authorization: Bearer YOUR_JWT" "https://your-api/emergency/events" + +# Approve event 42 (sends the SMS/WhatsApp) +curl -s -X POST -H "Authorization: Bearer YOUR_JWT" \ + -H "Content-Type: application/json" -d '{"notes": "Verified"}' \ + "https://your-api/emergency/events/42/approve" +``` + +2. **Reject (do not send)** + `POST /emergency/events/{event_id}/reject` + Optional body: `{"notes": "optional note"}`. Sets `status=rejected` and records `rejected_at`, `rejected_by`; no message is sent. + +**Creating emergency requests:** Use `POST /emergency/request` (body: `contact_phone`, `contact_channel` sms/whatsapp, optional `target_callsign`, `notes`) or relay with `emergency: true`. See [API Reference](api-reference.md) for endpoints. 
+ +--- + +### 1.2 Relay (radio, SMS, WhatsApp) + +- **Radio relay:** Message is stored for the destination callsign/band; optionally injected or transmitted on the target band (site config). +- **SMS/WhatsApp relay:** Set `target_channel=sms` or `whatsapp` and `destination_phone` (E.164). The relay is stored and delivered by the relay_delivery worker and outbound dispatcher. +- **Emergency relay:** Set `emergency=true` with SMS/WhatsApp target. If the region is allowed and approval is required, the message is **queued for approval** (coordination event); an operator must call `POST /emergency/events/{id}/approve` before it is sent. See §1.1. + +API: `POST /messages/relay`. See [API Reference](api-reference.md). + +--- + +### 1.3 Contact preferences and notify-on-relay + +Whitelisted callsigns can opt in to receive a **short SMS or WhatsApp notification** when a message is left for them on radio (notify-on-relay). + +- **Get/set preferences:** `GET /callsigns/registered/{callsign}/contact-preferences`, `PATCH /callsigns/registered/{callsign}/contact-preferences` (set `notify_sms_phone`, `notify_whatsapp_phone`, `notify_on_relay`, `consent_source`; in strict regions, `consent_confirmed=true` when enabling). +- **Opt-out:** When a recipient replies STOP, call `POST /internal/opt-out` with `phone` or `callsign` and `channel` (sms/whatsapp) so the system records opt-out and stops notifications. + +Details (consent, opt-out, region behaviour) are in the project doc *Notify and emergency compliance plan* (radioshaq/docs/). + +--- + +## 2. Compliance + +### 2.1 Radio (restricted bands and band plans) + +The compliance plugin enforces **restricted bands** and **band plans** by region (FCC, CEPT, CA, AU, ZA, etc.). Configure: + +- `RADIOSHAQ_RADIO__RESTRICTED_BANDS_REGION` — e.g. `FCC`, `CA`, `CEPT`, `FR`, `UK`, `AU`, `ZA`. Drives which bands are disallowed for transmission. +- `RADIOSHAQ_RADIO__BAND_PLAN_REGION` — optional override (e.g. 
`ITU_R1`, `ITU_R3`); leave blank to use the backend default for the region. + +**Operators are responsible for verifying national rules** (e.g. ANFR, Ofcom, ACMA, IFT). + +#### Backend overview + +| Backend | Region | Restricted bands source | Band plan | Official references | +|---------|--------|-------------------------|-----------|----------------------| +| **FCC** | US (and baseline for some R2) | 47 CFR §15.205 | ITU R2 (Americas) | [ecfr.gov §15.205](https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A/part-15/subpart-C/section-15.205), [law.cornell.edu](https://www.law.cornell.edu/cfr/text/47/15.205) | +| **CEPT** | EU harmonised | ECC/ETSI (see below) | IARU R1 | ERC/REC 70-03, EU 2006/771/EC, ETSI EN 300 220 | +| **FR** | France | Same as CEPT | IARU R1 | CEPT + national ANFR | +| **UK** | United Kingdom | Same as CEPT (Ofcom) | IARU R1 | CEPT; Ofcom UKFAT | +| **ES** | Spain | Same as CEPT | IARU R1 | CEPT + national authority | +| **BE** | Belgium | Same as CEPT | IARU R1 | CEPT TR 61-01/61-02; BIPT/IBPT | +| **CH** | Switzerland | Same as CEPT | IARU R1 | CEPT; BAKOM | +| **LU** | Luxembourg | Same as CEPT | IARU R1 | CEPT; ILNAS | +| **MC** | Monaco | Same as CEPT | IARU R1 | CEPT | +| **ITU_R1** | Band plan only | — | IARU R1 | [IARU R1 band plans](https://www.iaru-r1.org/on-the-air/band-plans/) | +| **ITU_R3** | Band plan only | — | IARU R3 (2m 144–148 MHz, 70cm 430–440 MHz) | IARU R3-004 (2019); [IARU R3](https://www.iaru.org/) | +| **CA** | Canada (ITU R2) | FCC §15.205 baseline; RSS-210 §7.1, Annexes A/B (ISED) | ITU R2 | ISED RSS-210 Issue 11; RBR-4; CEPT T/R 61-01 for reciprocal | +| **MX** | Mexico (ITU R2) | FCC §15.205 baseline (IFT CNAF, IFT-016-2024) | ITU R2 | IFT; FCC as baseline; verify IFT | +| **AR, CL, CO, PE, VE, EC, UY, PY, BO, CR, PA, GT, DO** | R2 Americas (see table) | FCC §15.205 baseline | ITU R2 | IARU R2; verify IFT, ENACOM, SUBTEL, CRC, etc. 
| +| **AU** | Australia (ITU R3) | ACMA Spectrum Plan / conservative set | IARU R3 | ACMA; WIA band plan | +| **ZA** | South Africa (ITU R1) | ICASA NRFP / RFSAPs (conservative set) | IARU R1 | ICASA; SARL | +| **NG, KE, EG, MA, TN, DZ, GH, TZ, ET, SN, CI, CM, BW, NA, ZW, MZ, UG, RW, GA, ML, BF, NE, TG, BJ, CD, MG** | R1 Africa (see table) | R1 conservative (CEPT-aligned); ZA uses dedicated list | IARU R1 | Verify national regulator (NCC, CA, NTRA, ANRT, BOCRA, etc.) | +| **NZ** | New Zealand (ITU R3) | RSM PIB 21 conservative set | IARU R3 | RSM; PIB 21 | +| **JP** | Japan (ITU R3) | Conservative set (MIC/JARL) | IARU R3 | MIC; JARL | +| **IN** | India (ITU R3) | Conservative set (WPC) | IARU R3 | WPC; ARSI | + +**Important:** Use **ITU_R1** and **ITU_R3** only as `band_plan_region`, not as `restricted_bands_region`. They provide band plans but no restricted-band list; setting them as restricted region would disable all restricted-band enforcement. Set `restricted_bands_region` to a country (e.g. CEPT, FR, AU) and `band_plan_region` to ITU_R1 or ITU_R3 if you need that plan. + +#### FCC (United States) + +- **Rule:** 47 CFR §15.205 — Restricted bands of operation. +- **Meaning:** Intentional radiators must not operate in the listed bands; only spurious emission limits (§15.209) apply. +- **Source:** Code of Federal Regulations, title 47, chapter I, subchapter A, part 15, subpart C, section 15.205. The list in code is maintained from the official eCFR/Cornell text. + +#### CEPT / EU (France, UK, Spain, etc.) + +CEPT does **not** publish a single “FCC 15.205 equivalent” list. EU harmonisation defines **allowed** SRD bands and conditions; “restricted” is inferred from: + +1. **ERC/REC 70-03** (CEPT Recommendation on Short Range Devices) + - [docdb.cept.org document 845](https://docdb.cept.org/document/845) — Annexes list allowed SRD applications and bands; Appendix 3 lists national restrictions. 
+   - [ECO Frequency Information System (EFIS)](https://efis.cept.org/) — National implementation status and restrictions.
+
+2. **EU Commission Decision 2006/771/EC** (as amended)
+   - Harmonised technical conditions for SRD; annex lists frequency bands and parameters.
+   - [EUR-Lex CELEX 32006D0771](https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32006D0771).
+
+3. **ETSI EN 300 220**
+   - Harmonised standard for SRD 25 MHz–1000 MHz. Defines permitted bands (e.g. 433.05–434.79 MHz, 863–876 MHz, 915–921 MHz).
+   - [ETSI EN 300 220-2](https://www.etsi.org/deliver/etsi_en/300200_300299/30022002/).
+
+The **CEPT restricted list in the code** is derived from bands that are commonly protected in the EU (aeronautical, radionavigation, COSPAS-SARSAT, marine, etc.). It explicitly **omits** FCC-only ranges (e.g. 240–285 MHz, 322–335.4 MHz, US GHz blocks). National administrations (e.g. ANFR in France, Ofcom in the UK) may add further restrictions; operators must check national rules.
+
+#### Band plans
+
+- **ITU Region 2 (Americas):** Default in `bands.py`; 2m 144–148 MHz, 70cm 420–450 MHz.
+  [IARU R2 band plans](https://www.iaru-r2.org/en/reference/band-plans/).
+
+- **ITU Region 1 (Europe, Africa, Middle East):** 2m 144–146 MHz, 70cm 430–440 MHz.
+  [IARU R1 band plans](https://www.iaru-r1.org/on-the-air/band-plans/).
+
+- **ITU Region 3 (Asia–Pacific):** IARU R3 band plan: 2m 144–148 MHz, 70cm 430–440 MHz (secondary in R3; 440–450 MHz only in Australia/Philippines per RR 5.270). Used by the **ITU_R3** and **AU** backends. [IARU R3-004 (2019)](https://www.iaru.org/).
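The split between restricted-band enforcement and band-plan membership can be sketched as two independent checks. This is illustrative only: the ranges below are excerpts, and the real `bands.py` API and data layout may differ.

```python
# Illustrative excerpts only, not the project's actual bands.py data.
RESTRICTED_HZ = {
    # Two FCC 47 CFR section 15.205 ranges (excerpt, not the full list)
    "FCC": [(240e6, 285e6), (322e6, 335.4e6)],
    # ITU_R1 / ITU_R3 backends carry a band plan but NO restricted list
    "ITU_R1": [],
}
BAND_PLAN_2M_HZ = {
    "ITU_R2": (144e6, 148e6),  # Americas
    "ITU_R1": (144e6, 146e6),  # Europe / Africa / Middle East
}

def in_restricted(freq_hz, region):
    """True when TX on freq_hz is forbidden by the region's restricted list."""
    return any(lo <= freq_hz <= hi for lo, hi in RESTRICTED_HZ.get(region, []))

def in_band_plan_2m(freq_hz, region):
    """True when freq_hz falls inside the region's 2m allocation."""
    lo, hi = BAND_PLAN_2M_HZ[region]
    return lo <= freq_hz <= hi

print(in_band_plan_2m(146.52e6, "ITU_R2"))  # True  (144-148 MHz plan)
print(in_band_plan_2m(146.52e6, "ITU_R1"))  # False (144-146 MHz plan)
print(in_restricted(250e6, "FCC"))          # True  (inside 240-285 MHz)
print(in_restricted(250e6, "ITU_R1"))       # False: empty list, nothing is enforced
```

The last line is the pitfall the warning above describes: with ITU_R1 as the restricted region, no frequency is ever blocked by the restricted-band check.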
+ +#### Country → backend mapping + +| Country / area | Recommended `restricted_bands_region` | Notes | +|----------------|----------------------------------------|--------| +| United States | `FCC` | R2 band plan default | +| Canada | `CA` or `FCC` | R2; ISED/RBR-4; CEPT reciprocal for EU visits | +| France | `FR` or `CEPT` | R1; ANFR | +| Belgium | `BE` or `CEPT` | R1; BIPT/IBPT | +| Switzerland | `CH` or `CEPT` | R1; BAKOM | +| Luxembourg | `LU` or `CEPT` | R1 | +| Monaco | `MC` or `CEPT` | R1 | +| United Kingdom | `UK` or `CEPT` | R1; Ofcom | +| Spain | `ES` or `CEPT` | R1 | +| Mexico | `MX` or `FCC` | R2; IFT | +| Argentina | `AR` or `MX` | R2; ENACOM | +| Chile | `CL` or `MX` | R2; SUBTEL | +| Colombia | `CO` or `MX` | R2; CRC | +| Peru | `PE` or `MX` | R2; MTC | +| Venezuela | `VE` or `MX` | R2; CONATEL | +| Ecuador | `EC` or `MX` | R2 | +| Uruguay | `UY` or `MX` | R2 | +| Paraguay | `PY` or `MX` | R2 | +| Bolivia | `BO` or `MX` | R2 | +| Costa Rica | `CR` or `MX` | R2 | +| Panama | `PA` or `MX` | R2 | +| Guatemala | `GT` or `MX` | R2 | +| Dominican Republic | `DO` or `MX` | R2 | +| Other Latin America / Caribbean (R2) | `MX` or `FCC` | R2; verify national regulator | +| Australia | `AU` or `ITU_R3` | IARU R3; restricted bands enforced (ACMA conservative set) — verify ACMA | +| New Zealand | `NZ` | R3; restricted bands enforced (RSM PIB 21 conservative); verify RSM | +| Japan | `JP` | R3; restricted bands enforced (conservative set); verify MIC/JARL | +| India | `IN` | R3; restricted bands enforced (conservative set); verify WPC/ARSI | +| Other R3 | `ITU_R3` | R3 band plan; verify national regulator | +| South Africa | `ZA` | R1; restricted bands enforced (ICASA NRFP); verify ICASA; SARL | +| Nigeria | `NG` | R1; restricted: R1 conservative; verify NCC | +| Kenya | `KE` | R1; restricted: R1 conservative; verify CA | +| Egypt | `EG` | R1; restricted: R1 conservative; verify NTRA | +| Morocco | `MA` | R1; restricted: R1 conservative; verify ANRT | +| Tunisia | 
`TN` | R1; restricted: R1 conservative; verify national authority | +| Algeria | `DZ` | R1; restricted: R1 conservative; verify national authority | +| Ghana | `GH` | R1; restricted: R1 conservative; verify NCA | +| Tanzania | `TZ` | R1; restricted: R1 conservative; verify TCRA | +| Ethiopia | `ET` | R1; restricted: R1 conservative; verify ETA | +| Senegal | `SN` | R1; restricted: R1 conservative; verify ARTP | +| Côte d'Ivoire | `CI` | R1; restricted: R1 conservative; verify ARTCI | +| Cameroon | `CM` | R1; restricted: R1 conservative; verify MINPOSTEL | +| Botswana | `BW` | R1; restricted: R1 conservative; verify BOCRA | +| Namibia | `NA` | R1; restricted: R1 conservative; verify CRAN | +| Zimbabwe | `ZW` | R1; restricted: R1 conservative; verify POTRAZ | +| Mozambique | `MZ` | R1; restricted: R1 conservative; verify INCM | +| Uganda | `UG` | R1; restricted: R1 conservative; verify UCC | +| Rwanda | `RW` | R1; restricted: R1 conservative; verify RURA | +| Gabon | `GA` | R1; restricted: R1 conservative; verify ARCEP | +| Mali, Burkina Faso, Niger, Togo, Benin | `ML`, `BF`, `NE`, `TG`, `BJ` | R1; restricted: R1 conservative; verify national regulator | +| DRC, Madagascar | `CD`, `MG` | R1; restricted: R1 conservative; verify national regulator | +| Other Africa (ITU R1) | `ZA` or country code or `ITU_R1` | R1 band plan; restricted: R1 conservative; verify national regulator | + +--- + +### 2.2 Messaging (SMS/WhatsApp: consent, opt-out, emergency) + +- **Notify-on-relay:** Consent is recorded when a user enables “notify when a message is left for me” (`notify_consent_at`, `notify_consent_source`). In strict regions (EU/UK/ZA), explicit `consent_confirmed` is required. Opt-out (STOP) is handled via `POST /internal/opt-out`. +- **Emergency:** Emergency SMS/WhatsApp is **region-gated** (`emergency_contact.regions_allowed`, e.g. FCC, CA) and **human-approved** when `approval_required=true`. 
Each approval and send is recorded (`approved_at`, `approved_by`, `sent_at` in event `extra_data`). + +Country/region rules (US TCPA, Canada CASL, EU/UK GDPR/PECR, Australia Spam Act, South Africa POPIA, WhatsApp) and the region→profile table are documented in the project’s *Notify and emergency compliance plan* (radioshaq/docs/). + +--- + +## 3. Monitoring + +### 3.1 Prometheus `/metrics` + +**Endpoint:** `GET /metrics` (no authentication). + +Returns Prometheus exposition format (text/plain) with: + +| Metric | Type | Description | +|--------|------|-------------| +| `radioshaq_uptime_seconds` | gauge | Process uptime in seconds | +| `radioshaq_callsigns_registered_total` | gauge | Number of registered (whitelisted) callsigns | +| `radioshaq_relay_deliveries_total` | counter | Incremented by relay_delivery worker | +| Listener/band gauges | gauge | Messages per band when band listener reports | +| `radioshaq_gpu_utilization_percent` | gauge | GPU utilization 0–100 (when `nvidia-smi` available) | +| `radioshaq_gpu_memory_used_mb` / `_total_mb` | gauge | GPU memory (when `nvidia-smi` available) | + +GPU metrics are populated only when **nvidia-smi** is on the PATH. For full Prometheus client support: `uv sync --extra metrics` (from the radioshaq directory). + +**Example scrape config (Prometheus):** + +```yaml +scrape_configs: + - job_name: radioshaq + static_configs: + - targets: ['localhost:8000'] + metrics_path: /metrics +``` + +--- + +### 3.2 Health checks + +Use **`GET /health`** for liveness and **`GET /health/ready`** for readiness (DB, orchestrator, audio agent). See [API Reference](api-reference.md). + +--- + +### 3.3 Audio metrics (WebSocket) + +When the voice_rx pipeline is running, real-time audio metrics (VAD, SNR, state) are available over a **WebSocket**. **Endpoint:** `WS /ws/audio/metrics/{session_id}`. By default the server sends a placeholder heartbeat every second (`vad_active: false`, `snr_db: null`, `state: "idle"`). 
When the voice_rx pipeline is wired, set `app.state.audio_metrics_latest` to a dict with `vad_active`, `snr_db`, `state`, and optional `type`; the handler sends it to connected clients once per second. The web UI can show “live” audio state; without a live signal, a placeholder or “waiting for pipeline” message may be shown. + +--- + +## See also + +- [Configuration](configuration.md) — env and config options +- [API Reference](api-reference.md) — endpoints and auth. Emergency events with location can be displayed on the web UI Map page and retrieved via **GET /gis/emergency-events** (see GIS subsection). +- Project docs in the repository (radioshaq/docs/): *Twilio SMS & WhatsApp*, *Notify and emergency compliance plan*, *SMS/WhatsApp implementation plan*. diff --git a/radioshaq/.env.example b/radioshaq/.env.example index a3bad2d..b31b6ec 100644 --- a/radioshaq/.env.example +++ b/radioshaq/.env.example @@ -27,7 +27,7 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_DATABASE__DYNAMODB_ENDPOINT= # RADIOSHAQ_DATABASE__DYNAMODB_REGION=us-east-1 # RADIOSHAQ_DATABASE__REDIS_URL=redis://localhost:6379/0 -# RADIOSHAQ_DATABASE__ALEMBIC_CONFIG=infrastructure/local/alembic.ini +# RADIOSHAQ_DATABASE__ALEMBIC_CONFIG=alembic.ini # RADIOSHAQ_DATABASE__AUTO_MIGRATE=false # ----------------------------------------------------------------------------- @@ -59,14 +59,20 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_LLM__ANTHROPIC_API_KEY= # RADIOSHAQ_LLM__CUSTOM_API_BASE= # RADIOSHAQ_LLM__CUSTOM_API_KEY= +# RADIOSHAQ_LLM__GEMINI_API_KEY= # RADIOSHAQ_LLM__TEMPERATURE=0.1 # RADIOSHAQ_LLM__MAX_TOKENS=4096 # RADIOSHAQ_LLM__TIMEOUT_SECONDS=60.0 # RADIOSHAQ_LLM__MAX_RETRIES=3 # RADIOSHAQ_LLM__RETRY_DELAY_SECONDS=1.0 -# Alternative: some code also reads MISTRAL_API_KEY / OPENAI_API_KEY directly +# Hugging Face (when RADIOSHAQ_LLM__PROVIDER=huggingface) +# RADIOSHAQ_LLM__HUGGINGFACE_API_KEY= +# RADIOSHAQ_LLM__HUGGINGFACE_API_BASE= +# Alternative: some code also reads provider API keys directly # 
MISTRAL_API_KEY= # OPENAI_API_KEY= +# HF_TOKEN= +# GEMINI_API_KEY= # ----------------------------------------------------------------------------- # Memory (per-callsign memory, Hindsight, daily summaries) @@ -77,6 +83,8 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_MEMORY__RECENT_MESSAGES_LIMIT=40 # RADIOSHAQ_MEMORY__DAILY_SUMMARY_DAYS=7 # RADIOSHAQ_MEMORY__SUMMARY_TIMEZONE=America/New_York +# RADIOSHAQ_MEMORY__MEMORY_RETENTION_DAYS=0 +# RADIOSHAQ_MEMORY__HINDSIGHT_EMBEDDING_MODEL= # ----------------------------------------------------------------------------- # Radio (CAT, FLDIGI, packet, SDR TX, voice) @@ -103,7 +111,9 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_RADIO__VOICE_USE_TTS=false # RADIOSHAQ_RADIO__TX_AUDIT_LOG_PATH= # RADIOSHAQ_RADIO__TX_ALLOWED_BANDS_ONLY=true -# RADIOSHAQ_RADIO__RESTRICTED_BANDS_REGION=FCC +# Compliance: region/country for restricted bands and band plan (see docs/compliance-regulatory.md) +# RADIOSHAQ_RADIO__RESTRICTED_BANDS_REGION=FCC # US=FCC, Canada=CA, EU/UK=CEPT|FR|UK|ES, Australia=AU, South Africa=ZA, NZ|JP|IN, or country code (e.g. AR, MX, NG) +# RADIOSHAQ_RADIO__BAND_PLAN_REGION= # Optional override: ITU_R1 (EU/Africa), ITU_R3 (Asia–Pacific), or leave blank to use backend default # RADIOSHAQ_RADIO__CALLSIGN_REGISTRY_REQUIRED=false # RADIOSHAQ_RADIO__SDR_TX_ENABLED=false # RADIOSHAQ_RADIO__SDR_TX_BACKEND=hackrf @@ -114,10 +124,19 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_RADIO__AUDIO_INPUT_ENABLED=false # RADIOSHAQ_RADIO__AUDIO_OUTPUT_ENABLED=false # RADIOSHAQ_RADIO__AUDIO_MONITORING_ENABLED=false +# Outbound: radio reply and relay (see docs/twilio-sms-whatsapp.md for SMS/WhatsApp) +# RADIOSHAQ_RADIO__RADIO_REPLY_TX_ENABLED=true +# RADIOSHAQ_RADIO__RADIO_REPLY_USE_TTS=true +# RADIOSHAQ_RADIO__RELAY_INJECT_TARGET_BAND=false +# RADIOSHAQ_RADIO__RELAY_TX_TARGET_BAND=false +# RADIOSHAQ_RADIO__RELAY_SCHEDULED_DELIVERY_ENABLED=false +# RADIOSHAQ_RADIO__STATION_CALLSIGN= # allowed_callsigns: use config.yaml (list) or JSON in env, e.g. 
RADIOSHAQ_RADIO__ALLOWED_CALLSIGNS='["K1ABC","W2XYZ"]' # ----------------------------------------------------------------------------- # Audio (voice_rx pipeline: VAD, ASR, triggers, response mode) +# Web UI shows VAD/metrics; when no live signal, a placeholder message is shown +# until the pipeline feeds metrics via WebSocket. # ----------------------------------------------------------------------------- # RADIOSHAQ_AUDIO__INPUT_DEVICE= # RADIOSHAQ_AUDIO__INPUT_SAMPLE_RATE=16000 @@ -129,6 +148,9 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_AUDIO__HIGHPASS_CUTOFF_HZ=80.0 # RADIOSHAQ_AUDIO__DENOISING_ENABLED=true # RADIOSHAQ_AUDIO__DENOISING_BACKEND=rnnoise +# When true and ASR model is 'scribe', run ElevenLabs Voice Isolator (audio-isolation) +# before Scribe STT. Requires ELEVENLABS_API_KEY. +# RADIOSHAQ_AUDIO__ELEVEN_VOICE_ISOLATOR_ENABLED=false # RADIOSHAQ_AUDIO__NOISE_CALIBRATION_SECONDS=3.0 # RADIOSHAQ_AUDIO__MIN_SNR_DB=3.0 # RADIOSHAQ_AUDIO__VAD_ENABLED=true @@ -140,7 +162,7 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_AUDIO__MAX_SPEECH_DURATION_MS=30000 # RADIOSHAQ_AUDIO__SILENCE_DURATION_MS=800 # RADIOSHAQ_AUDIO__ASR_MODEL=voxtral -# RADIOSHAQ_AUDIO__ASR_LANGUAGE=en +# RADIOSHAQ_AUDIO__ASR_LANGUAGE=en # en | fr | es | auto (auto = detect language) # RADIOSHAQ_AUDIO__ASR_MIN_CONFIDENCE=0.6 # RADIOSHAQ_AUDIO__RESPONSE_MODE=listen_only # RADIOSHAQ_AUDIO__RESPONSE_TIMEOUT_SECONDS=30.0 @@ -156,6 +178,9 @@ POSTGRES_PASSWORD=radioshaq # RADIOSHAQ_AUDIO__PTT_COORDINATION_ENABLED=true # RADIOSHAQ_AUDIO__PTT_COOLDOWN_MS=500 # RADIOSHAQ_AUDIO__BREAK_IN_ENABLED=true +# Message bus: publish voice segments to orchestrator +# RADIOSHAQ_AUDIO__VOICE_PUBLISH_TO_BUS=true +# RADIOSHAQ_AUDIO__VOICE_SOURCE_CALLSIGN_DEFAULT= # trigger_phrases: use config.yaml (list) or JSON, e.g. 
RADIOSHAQ_AUDIO__TRIGGER_PHRASES='["radioshaq","field station"]' # ----------------------------------------------------------------------------- @@ -214,5 +239,39 @@ POSTGRES_PASSWORD=radioshaq # CLI base URL and token for scripts # RADIOSHAQ_API=http://localhost:8000 # RADIOSHAQ_TOKEN= -# TTS (ElevenLabs) – used when voice_use_tts is true +# ----------------------------------------------------------------------------- +# Twilio / SMS & WhatsApp (same Twilio account; WhatsApp requires opt-in) +# ----------------------------------------------------------------------------- +# RADIOSHAQ_TWILIO__ACCOUNT_SID= +# RADIOSHAQ_TWILIO__AUTH_TOKEN= +# RADIOSHAQ_TWILIO__FROM_NUMBER= # E.164 SMS sender +# RADIOSHAQ_TWILIO__WHATSAPP_FROM= # E.164 WhatsApp-enabled sender (optional) + +# ----------------------------------------------------------------------------- +# Emergency contact (Section 9: human-validated emergency SMS/WhatsApp; see docs/notify-and-emergency-compliance-plan.md) +# ----------------------------------------------------------------------------- +# RADIOSHAQ_EMERGENCY_CONTACT__ENABLED=false +# RADIOSHAQ_EMERGENCY_CONTACT__REGIONS_ALLOWED= # JSON array e.g. ["FCC","CA"] +# RADIOSHAQ_EMERGENCY_CONTACT__APPROVAL_REQUIRED=true +# RADIOSHAQ_EMERGENCY_CONTACT__ALLOWED_EVENT_TYPES= # JSON array e.g. 
["emergency"] + +# ----------------------------------------------------------------------------- +# TTS (used when radio_reply_use_tts or voice_use_tts is true; see docs/twilio-sms-whatsapp.md) +# ----------------------------------------------------------------------------- +# RADIOSHAQ_TTS__PROVIDER=elevenlabs +# RADIOSHAQ_TTS__ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM +# RADIOSHAQ_TTS__ELEVENLABS_MODEL_ID=eleven_multilingual_v2 +# RADIOSHAQ_TTS__ELEVENLABS_OUTPUT_FORMAT=mp3_44100_128 +# RADIOSHAQ_TTS__KOKORO_VOICE=af_heart +# RADIOSHAQ_TTS__KOKORO_LANG_CODE=a +# RADIOSHAQ_TTS__KOKORO_SPEED=1.0 +# ElevenLabs API key (required when provider=elevenlabs) # ELEVENLABS_API_KEY= + +# ----------------------------------------------------------------------------- +# Web UI (Vite) – used when running npm run dev or serving built assets +# ----------------------------------------------------------------------------- +# Set in web-interface/.env or project root .env when developing the React UI. +# VITE_RADIOSHAQ_API=http://localhost:8000 +# VITE_RADIOSHAQ_TOKEN= +# VITE_GOOGLE_MAPS_API_KEY= # Optional. Enables Map page, Radio field map, Transcripts "View on map". Restrict key by HTTP referrer in Google Cloud Console. diff --git a/radioshaq/README.md b/radioshaq/README.md index 32becbd..cc50afa 100644 --- a/radioshaq/README.md +++ b/radioshaq/README.md @@ -7,7 +7,7 @@ A specialized AI-powered orchestrator for ham radio operations, emergency commun **Documentation:** [Quick Start](https://radioshaq.readthedocs.io/quick-start/), [Configuration](https://radioshaq.readthedocs.io/configuration/), [API Reference](https://radioshaq.readthedocs.io/api-reference/) (Read the Docs). In-repo source: [../docs/](../docs/) (MkDocs Material). 
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/) -[![License: GPL--2.0--only](https://img.shields.io/badge/License-GPL--2.0--only-blue.svg)](LICENSE.md) +[![License: GPL--2.0--only](https://img.shields.io/badge/License-GPL--2.0--only-blue.svg)](../LICENSE.md) ## Install (get everything correctly) @@ -22,6 +22,8 @@ From the **radioshaq** directory: uv sync --extra dev --extra test ``` +**Optional voice/audio:** For TTS, use **ElevenLabs** (set `ELEVENLABS_API_KEY`) or **Kokoro** (local: `uv sync --extra tts_kokoro`). For ASR, use **Voxtral/Whisper** (local: `uv sync --extra audio`) or **Scribe** (ElevenLabs API). See [Configuration](https://radioshaq.readthedocs.io/configuration/) for `tts.*` and `audio.asr_model`. + **Recommended first-time setup (cross-platform):** run interactive setup from the `radioshaq/` directory to create `.env` and `config.yaml`, optionally start Docker Postgres and run migrations: ```bash @@ -43,8 +45,8 @@ radioshaq launch docker # Or manually: cd infrastructure/local && docker compose up -d postgres && cd ../.. -# 2. Run migrations (use the runner script to avoid path issues on Windows) -python infrastructure/local/run_alembic.py upgrade head +# 2. Run migrations (use uv run so project deps are available; runner avoids path issues on Windows) +uv run python infrastructure/local/run_alembic.py upgrade head # 3. Start API uv run python -m radioshaq.api.server @@ -59,6 +61,16 @@ See [Configuration](https://radioshaq.readthedocs.io/configuration/) or [docs/da **Memory (per-callsign):** Run the memory migration (`uv run alembic upgrade head`) to create memory tables. Optional [Hindsight](https://hindsight.vectorize.io/) for semantic memory: set `RADIOSHAQ_MEMORY__HINDSIGHT_BASE_URL` and install `hindsight-client` if needed; or set `RADIOSHAQ_MEMORY__HINDSIGHT_ENABLED=false` for PostgreSQL-only memory. 
See [../MEMORY_SYSTEM.md](../MEMORY_SYSTEM.md) and [../MEMORY_IMPLEMENTATION_PLAN.md](../MEMORY_IMPLEMENTATION_PLAN.md). +### Runtime topology (API process) + +When you run `python -m radioshaq.api.server` (or `radioshaq run-api`), a single process runs: + +- **API** (FastAPI), **orchestrator** (REACT), and optional **MessageBus consumer** (when `RADIOSHAQ_BUS_CONSUMER_ENABLED=1`). +- **Outbound handler:** One dispatcher consumes outbound messages and routes by channel: `radio_rx` → radio TX, `sms` → Twilio SMS, `whatsapp` → Twilio WhatsApp. So SMS and WhatsApp outbound are handled inside the API process when the bus consumer is enabled; no separate Node bridge is required. +- **Optional Node bridge:** PM2 can start a `radioshaq-bridge` app only if `bridge/dist/index.js` exists; if the bridge directory is absent, that app is skipped and the API runs without it. + +SMS/WhatsApp configuration: see [docs/twilio-sms-whatsapp.md](docs/twilio-sms-whatsapp.md) and `.env.example` (`RADIOSHAQ_TWILIO__*`). + ## Authentication Most endpoints require a **Bearer JWT**. Get a token (no auth required) then send it on each request. @@ -90,9 +102,13 @@ uv run python scripts/demo/run_demo.py The script gets its own token from `POST /auth/token` (subject `demo-op1`, role `field`) and then injects on 40m, relays to 2m, and polls `/transcripts`. No manual auth needed. To poll **your messages** on a band (messages where you are the destination), use `GET /transcripts?callsign=&destination_only=true&band=`; omit `band` to get messages across all bands. See [scripts/demo/README.md](scripts/demo/README.md) and [docs/demo-two-local-one-remote.md](docs/demo-two-local-one-remote.md). 
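The destination-filtered poll described above can be sketched as plain URL construction. This is a hypothetical helper: the endpoint paths and query parameters follow the documentation, and the token payload fields mirror the demo description (`demo-op1`, role `field`), but exact field names in your deployment may differ.

```python
import urllib.parse

BASE = "http://localhost:8000"  # typically RADIOSHAQ_API

def token_request():
    """POST /auth/token requires no auth; returns (url, json_payload)."""
    return f"{BASE}/auth/token", {"subject": "demo-op1", "role": "field"}

def my_messages_url(callsign, band=None):
    """GET /transcripts filtered to messages where `callsign` is the destination.

    Omit `band` to search across all bands.
    """
    params = {"callsign": callsign, "destination_only": "true"}
    if band:
        params["band"] = band
    return f"{BASE}/transcripts?" + urllib.parse.urlencode(params)

print(my_messages_url("K1ABC", band="2m"))
# http://localhost:8000/transcripts?callsign=K1ABC&destination_only=true&band=2m
```

Send the returned token as `Authorization: Bearer <token>` on each subsequent request.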
-## Monitoring +## Response, compliance, and monitoring + +**Response:** Operator approval of emergency SMS/WhatsApp: poll `GET /emergency/pending-count` or `GET /emergency/events` to see pending requests, then `POST /emergency/events/{id}/approve` to send (see [Response & compliance](docs/response-compliance-and-monitoring.md)). Relay (radio/SMS/WhatsApp) and contact preferences (notify-on-relay, opt-out) are documented there as well. -**Prometheus:** `GET /metrics` (no auth) exposes uptime, callsign count, and optional GPU gauges (when `nvidia-smi` is available). Optional: `uv sync --extra metrics` for full prometheus-client support. See [Monitoring](https://radioshaq.readthedocs.io/monitoring/) in the docs. +**Compliance:** Radio restricted bands and band plans by region (FCC, CEPT, CA, etc.); messaging consent, opt-out, and emergency region allowlist. See [docs/response-compliance-and-monitoring.md](docs/response-compliance-and-monitoring.md) (includes regulatory references, backend table, country mapping) and [notify-and-emergency-compliance-plan.md](docs/notify-and-emergency-compliance-plan.md). + +**Monitoring:** `GET /metrics` (no auth) exposes uptime, callsign count, relay delivery count, and optional GPU gauges (when `nvidia-smi` is available). Optional: `uv sync --extra metrics` for full prometheus-client support. See [Response & compliance](docs/response-compliance-and-monitoring.md). ## Installing from PyPI @@ -105,8 +121,37 @@ Then open **http://localhost:8000/** for the web UI and **http://localhost:8000/ **Remote receiver (SDR):** For listen-only stations (e.g. Raspberry Pi + RTL-SDR) that stream to HQ, run `radioshaq run-receiver` after the same install. Set `JWT_SECRET`, `STATION_ID`, `HQ_URL`; optionally `pip install radioshaq[sdr]` or `radioshaq[hackrf]` for hardware support. HQ accepts uploads at `POST /receiver/upload`. +### HackRF on Windows + +The `python-hackrf` package needs the **HackRF SDK** (headers and DLLs) at build time. 
By default it looks for `C:\Program Files\HackRF\include\hackrf.h` and `C:\Program Files\HackRF\lib\`. + +1. **Install the HackRF SDK for Windows** (pick one): + - **Prebuilt (easiest):** Download a Windows build from [greatscottgadgets/hackrf Actions](https://github.com/greatscottgadgets/hackrf/actions) (log in, pick a successful run, download the Windows artifact). Or check [python_hackrf Releases](https://github.com/GvozdevLeonid/python_hackrf/releases) for a ZIP that contains `include/` and `lib/`. + - **Extract** so you have: + - `C:\Program Files\HackRF\include\hackrf.h` + - `C:\Program Files\HackRF\lib\` with `hackrf.dll`, `hackrf.lib` (MSVC), and dependencies (e.g. `libusb-1.0.dll`, `pthreadVC2.dll`). + - **Or build from source:** See [HackRF docs – Windows: Building From Source](https://hackrf.readthedocs.io/en/latest/installing_hackrf_software.html) (Visual Studio, CMake, vcpkg). Then copy the built `include/` and `lib/` (or `.dll`/`.lib`) into `C:\Program Files\HackRF\` or set the env vars below. + +2. **Custom install path:** If you put HackRF elsewhere, set before building: + ```powershell + $env:PYTHON_HACKRF_INCLUDE_PATH = "C:\path\to\hackrf\include" + $env:PYTHON_HACKRF_LIB_PATH = "C:\path\to\hackrf\lib" + $env:HACKRF_LIB_DIR = "C:\path\to\hackrf\lib" + ``` + +3. **Install the hackrf extra:** In this repo the `hackrf` extra is skipped on Windows by default so `uv sync --all-extras` doesn’t fail. After the SDK is in place, install the binding explicitly: + ```powershell + uv pip install python-hackrf + # Or add hackrf to the project and remove the Windows-only marker in pyproject.toml, then: + # uv sync --extra hackrf + ``` + +**Driver:** For the device itself, use [Zadig](https://zadig.akeo.ie/) to install the WinUSB driver for HackRF One. Alternatively, [RadioConda](https://github.com/ryanvolz/radioconda) provides HackRF binaries and a Conda environment on Windows (use that Python/conda if you prefer). 
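Before running `uv pip install python-hackrf`, a quick check can confirm the SDK layout matches what the build expects. This is a sketch: it only tests the default paths and the override variables from step 2, not the build itself.

```python
import os
from pathlib import Path

def missing_hackrf_sdk_files():
    """Return SDK files the python-hackrf build would fail to find.

    An empty list means the headers and libraries are where the
    build (or the PYTHON_HACKRF_* overrides) expects them.
    """
    include = Path(os.environ.get("PYTHON_HACKRF_INCLUDE_PATH",
                                  r"C:\Program Files\HackRF\include"))
    lib = Path(os.environ.get("PYTHON_HACKRF_LIB_PATH",
                              r"C:\Program Files\HackRF\lib"))
    wanted = [include / "hackrf.h", lib / "hackrf.dll", lib / "hackrf.lib"]
    return [str(p) for p in wanted if not p.exists()]

missing = missing_hackrf_sdk_files()
print("HackRF SDK looks complete" if not missing else f"Missing: {missing}")
```

If anything is reported missing, revisit step 1 (extract the SDK) or step 2 (point the env vars at your install location) before installing the binding.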
+ ## Development +Install dependencies first (from the **radioshaq** directory): `uv sync --extra dev --extra test`. Then use `uv run` for all commands below so the correct environment (with geoalchemy2, loguru, etc.) is used. + ```bash # Run tests (memory tests use HINDSIGHT_ENABLED=false; test_manager may skip without migrated DB) uv run pytest tests/unit tests/integration -v @@ -118,6 +163,28 @@ uv run mypy radioshaq uv run ruff check . && uv run ruff format . ``` +### Serving the web UI from the API (local) + +To test the API serving the same UI as the built bundle (e.g. before packaging or in CI): + +```bash +cd web-interface && npm run build +mkdir -p ../radioshaq/web_ui && cp -r dist/. ../radioshaq/web_ui/ +cd .. && uv run python -m radioshaq.api.server +``` + +Then open http://localhost:8000/. CI (test-ci, publish-pypi, publish-nightly) builds the web UI and copies it to `radioshaq/radioshaq/web_ui` so the served artifact matches the source for the same commit. + +### Troubleshooting: ModuleNotFoundError (geoalchemy2, loguru) + +If you see `ModuleNotFoundError: No module named 'geoalchemy2'` or `...'loguru'` when running migrations, tests, or the API, the command is using a Python that doesn't have the project's dependencies. Fix: + +1. From the **radioshaq** directory run: `uv sync --extra dev --extra test` +2. Run commands via **uv run** so the project venv is used, e.g. 
+ `uv run python infrastructure/local/run_alembic.py upgrade head`, + `uv run pytest tests/unit -v`, + `uv run python -m radioshaq.api.server` + ## License GPL-2.0-only diff --git a/radioshaq/alembic.ini b/radioshaq/alembic.ini index ba11285..206cfc5 100644 --- a/radioshaq/alembic.ini +++ b/radioshaq/alembic.ini @@ -1,4 +1,4 @@ -# SHAKODS Alembic Configuration +# RadioShaq Alembic Configuration # For database migration management with PostgreSQL/PostGIS [alembic] diff --git a/radioshaq/alembic/README b/radioshaq/alembic/README index 5655a71..65f7e8c 100644 --- a/radioshaq/alembic/README +++ b/radioshaq/alembic/README @@ -1,7 +1,7 @@ -SHAKODS Database Migrations -============================ +RadioShaq Database Migrations +============================= -This directory contains Alembic database migrations for SHAKODS. +This directory contains Alembic database migrations for RadioShaq. Prerequisites ------------- @@ -14,8 +14,8 @@ Quick Start 1. Initialize database (first time only): ```bash - createdb shakods - psql shakods -c "CREATE EXTENSION IF NOT EXISTS postgis;" + createdb radioshaq + psql radioshaq -c "CREATE EXTENSION IF NOT EXISTS postgis;" ``` 2. Run migrations: @@ -43,14 +43,14 @@ Environment Variables --------------------- - `DATABASE_URL`: Full PostgreSQL URL - Example: `postgresql://user:pass@localhost:5432/shakods` + Example: `postgresql://user:pass@localhost:5432/radioshaq` - `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`: Individual components With PM2 -------- ```bash -pm2 start ecosystem.config.js --only shakods-migrate +pm2 start ecosystem.config.js --only radioshaq-migrate ``` Troubleshooting diff --git a/radioshaq/alembic/env.py b/radioshaq/alembic/env.py index 449c722..ea4457e 100644 --- a/radioshaq/alembic/env.py +++ b/radioshaq/alembic/env.py @@ -1,6 +1,6 @@ """Alembic environment configuration. 
-This module configures Alembic to work with SHAKODS database models, +This module configures Alembic to work with RadioShaq database models, supporting both synchronous (for migrations) and asynchronous (for runtime) PostgreSQL connections with PostGIS extension. """ @@ -20,7 +20,7 @@ from sqlalchemy import engine_from_config, pool from sqlalchemy.engine import Connection -# Import SHAKODS models for autogenerate support +# Import RadioShaq models for autogenerate support from radioshaq.database.models import Base # Alembic Config object @@ -36,27 +36,44 @@ def get_database_url() -> str: """Get database URL from environment or config. - + Returns: - PostgreSQL connection URL for migrations (sync) + PostgreSQL connection URL for migrations (sync). + Uses psycopg2, adds connect_timeout and optional sslmode=disable + so migrations do not hang (e.g. on SSL handshake or slow network). """ + # Query params: avoid hang on connect (timeout) and optional no-SSL (WSL/Docker) + connect_timeout = os.getenv("ALEMBIC_CONNECT_TIMEOUT", "10") + extra_params = f"connect_timeout={connect_timeout}" + if os.getenv("ALEMBIC_SSLMODE_DISABLE", "").lower() in ("1", "true", "yes"): + extra_params = f"sslmode=disable&{extra_params}" + # Priority: DATABASE_URL > individual vars > default if database_url := os.getenv("DATABASE_URL"): # Convert async URL to sync URL if needed if "+asyncpg" in database_url: - return database_url.replace("+asyncpg", "") + database_url = database_url.replace("+asyncpg", "") if "+aiosqlite" in database_url: - return database_url.replace("+aiosqlite", "") + database_url = database_url.replace("+aiosqlite", "") + # Ensure sync driver for migrations (psycopg2) + if "postgresql://" in database_url and "+" not in database_url.split("//")[0]: + database_url = database_url.replace("postgresql://", "postgresql+psycopg2://", 1) + # Append timeout (and optional sslmode) if not already present + base, _, query = database_url.partition("?") + if "connect_timeout" not in 
query: + query = f"{query}&{extra_params}" if query else extra_params + database_url = f"{base}?{query.lstrip('&')}" return database_url - - # Build from individual components + + # Build from individual components (default port 5434 to match local Docker Postgres) host = os.getenv("POSTGRES_HOST", "localhost") - port = os.getenv("POSTGRES_PORT", "5432") + port = os.getenv("POSTGRES_PORT", "5434") database = os.getenv("POSTGRES_DB", "radioshaq") user = os.getenv("POSTGRES_USER", "radioshaq") password = os.getenv("POSTGRES_PASSWORD", "radioshaq") - - return f"postgresql://{user}:{password}@{host}:{port}/{database}" + + url = f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{database}?{extra_params}" + return url def run_migrations_offline() -> None: diff --git a/radioshaq/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py b/radioshaq/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py index ed43278..e0e1897 100644 --- a/radioshaq/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py +++ b/radioshaq/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py @@ -20,7 +20,7 @@ def upgrade() -> None: - """Create initial SHAKODS database schema with PostGIS support.""" + """Create initial RadioShaq database schema with PostGIS support.""" # Create PostGIS extension op.execute("CREATE EXTENSION IF NOT EXISTS postgis") @@ -210,7 +210,7 @@ def upgrade() -> None: def downgrade() -> None: - """Drop all SHAKODS tables.""" + """Drop all RadioShaq tables.""" # Drop tables in reverse order of dependencies op.drop_table("session_states") diff --git a/radioshaq/alembic/versions/2026_03_04_1100-registered_callsigns_bands.py b/radioshaq/alembic/versions/2026_03_04_1100-registered_callsigns_bands.py new file mode 100644 index 0000000..80d31f0 --- /dev/null +++ b/radioshaq/alembic/versions/2026_03_04_1100-registered_callsigns_bands.py @@ -0,0 +1,33 @@ +"""registered_callsigns preferred_bands and last_band + +Revision ID: c3d4e5f6a7b8 
+Revises: b2c3d4e5f6a7 +Create Date: 2026-03-04 11:00:00.000000 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + +revision: str = "c3d4e5f6a7b8" +down_revision: Union[str, None] = "b2c3d4e5f6a7" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + op.add_column( + "registered_callsigns", + sa.Column("preferred_bands", sa.JSON(), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("last_band", sa.String(length=20), nullable=True), + ) + + +def downgrade() -> None: + op.drop_column("registered_callsigns", "last_band") + op.drop_column("registered_callsigns", "preferred_bands") + diff --git a/radioshaq/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py b/radioshaq/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py new file mode 100644 index 0000000..1d250ca --- /dev/null +++ b/radioshaq/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py @@ -0,0 +1,53 @@ +"""registered_callsigns contact preferences (notify-on-relay Section 8.1) + +Revision ID: d4e5f6a7b8c9 +Revises: c3d4e5f6a7b8 +Create Date: 2026-03-07 00:00:00.000000 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + +revision: str = "d4e5f6a7b8c9" +down_revision: Union[str, None] = "c3d4e5f6a7b8" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + op.add_column( + "registered_callsigns", + sa.Column("notify_sms_phone", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_whatsapp_phone", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_on_relay", sa.Boolean(), nullable=False, server_default=sa.false()), + ) + op.add_column( + "registered_callsigns", + 
sa.Column("notify_consent_at", sa.DateTime(timezone=True), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_consent_source", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at", sa.DateTime(timezone=True), nullable=True), + ) + + +def downgrade() -> None: + op.drop_column("registered_callsigns", "notify_opt_out_at") + op.drop_column("registered_callsigns", "notify_consent_source") + op.drop_column("registered_callsigns", "notify_consent_at") + op.drop_column("registered_callsigns", "notify_on_relay") + op.drop_column("registered_callsigns", "notify_whatsapp_phone") + op.drop_column("registered_callsigns", "notify_sms_phone") + diff --git a/radioshaq/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py b/radioshaq/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py new file mode 100644 index 0000000..1a8a1ec --- /dev/null +++ b/radioshaq/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py @@ -0,0 +1,33 @@ +"""registered_callsigns per-channel opt-out (notify_opt_out_at_sms, notify_opt_out_at_whatsapp) + +Revision ID: e5f6a7b8c9d0 +Revises: d4e5f6a7b8c9 +Create Date: 2026-03-07 10:00:00.000000 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + +revision: str = "e5f6a7b8c9d0" +down_revision: Union[str, None] = "d4e5f6a7b8c9" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at_sms", sa.DateTime(timezone=True), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at_whatsapp", sa.DateTime(timezone=True), nullable=True), + ) + + +def downgrade() -> None: + op.drop_column("registered_callsigns", "notify_opt_out_at_whatsapp") + op.drop_column("registered_callsigns", 
"notify_opt_out_at_sms") + diff --git a/radioshaq/config.example.yaml b/radioshaq/config.example.yaml index ee4294f..65cfe4a 100644 --- a/radioshaq/config.example.yaml +++ b/radioshaq/config.example.yaml @@ -26,7 +26,7 @@ database: dynamodb_endpoint: null # e.g. http://localhost:4566 for localstack dynamodb_region: us-east-1 redis_url: "redis://localhost:6379/0" - alembic_config: "infrastructure/local/alembic.ini" + alembic_config: "alembic.ini" auto_migrate: false # ----------------------------------------------------------------------------- @@ -44,13 +44,16 @@ jwt: # LLM (set API key in env or here; prefer env for secrets) # ----------------------------------------------------------------------------- llm: - provider: mistral # mistral | openai | anthropic | custom + provider: mistral # mistral | openai | anthropic | custom | huggingface | gemini model: mistral-large-latest mistral_api_key: null openai_api_key: null anthropic_api_key: null custom_api_base: null custom_api_key: null + gemini_api_key: null # For provider: gemini; or set GEMINI_API_KEY + huggingface_api_key: null # For provider: huggingface; or set HF_TOKEN + huggingface_api_base: null # Optional; default https://router.huggingface.co/v1 temperature: 0.1 max_tokens: 4096 timeout_seconds: 60.0 @@ -70,6 +73,7 @@ memory: recent_messages_limit: 40 daily_summary_days: 7 summary_timezone: "America/New_York" + memory_retention_days: 0 # Delete memory_messages older than N days; 0 = no delete # ----------------------------------------------------------------------------- # Per-role / per-subagent overrides (optional). Missing fields fall back to llm / memory above. 
@@ -117,7 +121,9 @@ radio: radio_reply_use_tts: true tx_audit_log_path: null tx_allowed_bands_only: true - restricted_bands_region: FCC # FCC | CEPT + # Compliance: set region/country so TX uses correct restricted bands and band plan (see docs/compliance-regulatory.md) + restricted_bands_region: FCC # Country/region for restricted bands: FCC, CA, CEPT|FR|UK|ES (EU), AU, ZA, NZ|JP|IN, or country code (AR, MX, NG, …). Do not use ITU_R1/ITU_R3 here (band-plan-only; no enforcement). + band_plan_region: null # Optional band plan override: ITU_R1 (Europe/Africa 2m 144–146), ITU_R3 (Asia–Pacific). null = use backend default. Use this for ITU_R1/ITU_R3; keep restricted_bands_region as a country (e.g. CEPT, AU). allowed_callsigns: null # e.g. [K1ABC, W2XYZ]; merged with DB registry callsign_registry_required: false sdr_tx_enabled: false @@ -133,9 +139,15 @@ radio: station_callsign: null # Our callsign for reply; defaults to packet_callsign response_radio_format_enabled: false response_radio_format_style: over # over | prosign (K) | none + voice_store_keywords: null # Only store voice segments containing at least one keyword (list; case-insensitive) + band_listener_store: true + band_listener_store_min_length: 0 + transcript_retention_days: 0 # If > 0, delete transcripts older than N days + relay_store_only_relayed: false # ----------------------------------------------------------------------------- # Audio (voice_rx pipeline: capture, VAD, ASR, triggers, response mode) +# Web UI VAD/metrics panel shows a placeholder until the pipeline feeds live metrics. # ----------------------------------------------------------------------------- audio: input_device: null @@ -150,6 +162,7 @@ audio: denoising_backend: rnnoise # rnnoise | spectral | none noise_calibration_seconds: 3.0 min_snr_db: 3.0 + eleven_voice_isolator_enabled: false # When true and asr_model is 'scribe', run ElevenLabs Voice Isolator before Scribe STT (requires ELEVENLABS_API_KEY). 
vad_enabled: true vad_threshold: 0.02 vad_mode: aggressive # normal | low | aggressive | very_aggressive @@ -159,7 +172,7 @@ audio: max_speech_duration_ms: 30000 silence_duration_ms: 800 asr_model: voxtral - asr_language: en + asr_language: en # en | fr | es | auto (auto = detect language; UI shows "Auto (detect)") asr_min_confidence: 0.6 response_mode: listen_only # listen_only | confirm_first | auto_respond | confirm_timeout response_timeout_seconds: 30.0 @@ -182,6 +195,18 @@ audio: voice_publish_to_bus: true # Publish transcribed voice segments to MessageBus voice_source_callsign_default: null # Sender for voice when not parsed (null = UNKNOWN) +# ----------------------------------------------------------------------------- +# TTS (text-to-speech when voice_use_tts or use_tts is true) +# ----------------------------------------------------------------------------- +tts: + provider: elevenlabs # elevenlabs (API, set ELEVENLABS_API_KEY) | kokoro (local, uv sync --extra tts_kokoro) + elevenlabs_voice_id: "21m00Tcm4TlvDq8ikWAM" + elevenlabs_model_id: eleven_multilingual_v2 + elevenlabs_output_format: mp3_44100_128 + kokoro_voice: af_heart + kokoro_lang_code: a # a (US en), b (UK en), e (es), f (fr), etc. + kokoro_speed: 1.0 + # ----------------------------------------------------------------------------- # Field mode (when mode: field) # ----------------------------------------------------------------------------- @@ -213,6 +238,15 @@ hq: auto_coordination_enabled: true coordination_interval_seconds: 30 +# ----------------------------------------------------------------------------- +# Emergency contact (emergency events, regions, approval; see response-compliance-and-monitoring.md) +# ----------------------------------------------------------------------------- +emergency_contact: + enabled: false + regions_allowed: [] # e.g. [FCC, CA] + approval_required: true + allowed_event_types: [] # e.g. 
[emergency] + # ----------------------------------------------------------------------------- # PM2 (process manager) # ----------------------------------------------------------------------------- diff --git a/radioshaq/ecosystem.config.js b/radioshaq/ecosystem.config.js index 94d5319..2691071 100644 --- a/radioshaq/ecosystem.config.js +++ b/radioshaq/ecosystem.config.js @@ -117,9 +117,10 @@ module.exports = { }, // ===================================================== - // WhatsApp Bridge (Node.js) + // WhatsApp Bridge (Node.js) – optional; only started if bridge/dist exists // ===================================================== - { + ...(require('fs').existsSync(path.join(__dirname, 'bridge', 'dist', 'index.js')) + ? [{ name: 'radioshaq-bridge', script: './bridge/dist/index.js', cwd: __dirname, @@ -166,7 +167,8 @@ module.exports = { min_uptime: '10s', max_restarts: 5, restart_delay: 5000, - }, + }] + : []), // ===================================================== // REACT Orchestrator Worker diff --git a/radioshaq/examples/README.md b/radioshaq/examples/README.md index 7a77f3d..b3b3256 100644 --- a/radioshaq/examples/README.md +++ b/radioshaq/examples/README.md @@ -1,4 +1,4 @@ -# SHAKODS Examples +# RadioShaq Examples ## Config sample diff --git a/radioshaq/examples/config_sample.yaml b/radioshaq/examples/config_sample.yaml index 3181404..4547b5c 100644 --- a/radioshaq/examples/config_sample.yaml +++ b/radioshaq/examples/config_sample.yaml @@ -20,6 +20,25 @@ radio: listener_concurrent_bands: true # false = single receiver, round-robin receiver_upload_store: false receiver_upload_inject: false + # SDR TX (HackRF) coordination + # When enabled, RadioShaq can transmit via HackRF either directly from HQ (local) + # or via a remote receiver service (broker). + sdr_tx_enabled: false + sdr_tx_backend: hackrf + # sdr_tx_mode: local # HQ owns HackRF directly (pyhackrf2); do not also run run-receiver with HackRF. 
+ # sdr_tx_mode: remote # Remote receiver owns HackRF; HQ calls /tx endpoints on the receiver service. + # sdr_tx_service_base_url: "http://localhost:8765" # Required when sdr_tx_mode=remote + # sdr_tx_service_token: "" # Required when remote receiver enforces JWT on /tx/* endpoints. + +twilio: + # Twilio configuration for SMS/WhatsApp relay. + # In development you can set allow_unsigned_webhooks=true to accept unsigned webhooks, + # but in production you must configure auth_token and rely on signature validation. + account_sid: null + auth_token: null + from_number: null + whatsapp_from: null + allow_unsigned_webhooks: false cat_enabled: false audio_input_enabled: false audio_output_enabled: false diff --git a/radioshaq/infrastructure/aws/cloudformation/api_gateway.yaml b/radioshaq/infrastructure/aws/cloudformation/api_gateway.yaml index 12bc772..2ce4b63 100644 --- a/radioshaq/infrastructure/aws/cloudformation/api_gateway.yaml +++ b/radioshaq/infrastructure/aws/cloudformation/api_gateway.yaml @@ -1,6 +1,6 @@ -# SHAKODS API Gateway REST API - routes to Lambda +# RadioShaq API Gateway REST API - routes to Lambda AWSTemplateFormatVersion: "2010-09-09" -Description: SHAKODS API Gateway +Description: RadioShaq API Gateway Parameters: Environment: @@ -15,7 +15,7 @@ Resources: Type: AWS::ApiGateway::RestApi Properties: Name: !Sub "radioshaq-${Environment}-api" - Description: SHAKODS REST API + Description: RadioShaq REST API EndpointConfiguration: Types: [REGIONAL] diff --git a/radioshaq/infrastructure/aws/cloudformation/base.yaml b/radioshaq/infrastructure/aws/cloudformation/base.yaml index fa15a7c..47d105a 100644 --- a/radioshaq/infrastructure/aws/cloudformation/base.yaml +++ b/radioshaq/infrastructure/aws/cloudformation/base.yaml @@ -1,7 +1,7 @@ -# SHAKODS base infrastructure: VPC (optional), security groups +# RadioShaq base infrastructure: VPC (optional), security groups # Use default VPC for minimal serverless; this stack can export subnet/security group for Lambda 
if needed. AWSTemplateFormatVersion: "2010-09-09" -Description: SHAKODS base - shared resources (optional VPC, security groups) +Description: RadioShaq base - shared resources (optional VPC, security groups) Parameters: Environment: diff --git a/radioshaq/infrastructure/aws/cloudformation/database.yaml b/radioshaq/infrastructure/aws/cloudformation/database.yaml index f3bcc6d..01a6783 100644 --- a/radioshaq/infrastructure/aws/cloudformation/database.yaml +++ b/radioshaq/infrastructure/aws/cloudformation/database.yaml @@ -1,6 +1,6 @@ -# SHAKODS database layer: DynamoDB (sessions/state). RDS PostGIS optional. +# RadioShaq database layer: DynamoDB (sessions/state). RDS PostGIS optional. AWSTemplateFormatVersion: "2010-09-09" -Description: SHAKODS database - DynamoDB state/sessions +Description: RadioShaq database - DynamoDB state/sessions Parameters: Environment: diff --git a/radioshaq/infrastructure/aws/cloudformation/lambda.yaml b/radioshaq/infrastructure/aws/cloudformation/lambda.yaml index 9197d05..11cb870 100644 --- a/radioshaq/infrastructure/aws/cloudformation/lambda.yaml +++ b/radioshaq/infrastructure/aws/cloudformation/lambda.yaml @@ -1,6 +1,6 @@ -# SHAKODS Lambda functions: API handler, message handler +# RadioShaq Lambda functions: API handler, message handler AWSTemplateFormatVersion: "2010-09-09" -Description: SHAKODS Lambda - API and message handlers +Description: RadioShaq Lambda - API and message handlers Parameters: Environment: diff --git a/radioshaq/infrastructure/aws/lambda/api_handler.py b/radioshaq/infrastructure/aws/lambda/api_handler.py index 9b344b0..c5e9db2 100644 --- a/radioshaq/infrastructure/aws/lambda/api_handler.py +++ b/radioshaq/infrastructure/aws/lambda/api_handler.py @@ -1,5 +1,5 @@ """ -Lambda handler for SHAKODS API (API Gateway REST). +Lambda handler for RadioShaq API (API Gateway REST). Handles /orchestrate and health; JWT auth; optionally starts Step Functions. 
""" diff --git a/radioshaq/infrastructure/aws/lambda/message_handler.py b/radioshaq/infrastructure/aws/lambda/message_handler.py index 7e7cae7..5859448 100644 --- a/radioshaq/infrastructure/aws/lambda/message_handler.py +++ b/radioshaq/infrastructure/aws/lambda/message_handler.py @@ -1,5 +1,5 @@ """ -Lambda handler for SHAKODS message ingestion (e.g. SQS, API Gateway webhook). +Lambda handler for RadioShaq message ingestion (e.g. SQS, API Gateway webhook). Processes incoming messages (WhatsApp, SMS, etc.) and forwards to HQ API when RADIOSHAQ_HQ_URL is set. Expects body with InboundMessage-compatible fields: channel, sender_id, chat_id, content; optional media, metadata. @@ -38,7 +38,7 @@ def _forward_to_hq(payload: dict[str, Any]) -> bool: with urllib.request.urlopen(req, timeout=10) as resp: return 200 <= resp.status < 300 except Exception as e: - logger.warning("HQ forward failed: %s", e) + logger.warning("HQ forward failed: {}", e) return False diff --git a/radioshaq/infrastructure/aws/scripts/deploy_lambda.sh b/radioshaq/infrastructure/aws/scripts/deploy_lambda.sh index c647140..ee95b01 100644 --- a/radioshaq/infrastructure/aws/scripts/deploy_lambda.sh +++ b/radioshaq/infrastructure/aws/scripts/deploy_lambda.sh @@ -25,7 +25,10 @@ STAGING="${AWS_DIR}/build/staging" rm -rf "${STAGING}" mkdir -p "${STAGING}" -# Install radioshaq and dependencies into staging (use pip with target) +# Install radioshaq and dependencies into staging (use pip with target). +# Default install does not include optional extras; for local ASR/TTS use: +# pip install --target "${STAGING}" ".[audio,tts_kokoro]" -q (Voxtral/Kokoro; ensure Lambda has enough memory). +# API-only (ElevenLabs TTS + Scribe ASR) works with default install when ELEVENLABS_API_KEY is set. pip install --target "${STAGING}" . -q 2>/dev/null || uv pip install --target "${STAGING}" . 
-q 2>/dev/null || true # Copy Lambda handler modules into package diff --git a/radioshaq/infrastructure/aws/stepfunctions/react_orchestrator.asl.json b/radioshaq/infrastructure/aws/stepfunctions/react_orchestrator.asl.json index c212c49..e6b1abc 100644 --- a/radioshaq/infrastructure/aws/stepfunctions/react_orchestrator.asl.json +++ b/radioshaq/infrastructure/aws/stepfunctions/react_orchestrator.asl.json @@ -1,5 +1,5 @@ { - "Comment": "SHAKODS REACT orchestrator - single Lambda invocation; expand to full Reason/Act/Evaluate loop when judge/agent Lambdas exist", + "Comment": "RadioShaq REACT orchestrator - single Lambda invocation; expand to full Reason/Act/Evaluate loop when judge/agent Lambdas exist", "StartAt": "InvokeOrchestrator", "States": { "InvokeOrchestrator": { diff --git a/radioshaq/infrastructure/local/alembic.ini b/radioshaq/infrastructure/local/alembic.ini index f93c816..e13f58f 100644 --- a/radioshaq/infrastructure/local/alembic.ini +++ b/radioshaq/infrastructure/local/alembic.ini @@ -1,9 +1,9 @@ -# Alembic configuration for SHAKODS +# Alembic configuration for RadioShaq # Database migration management [alembic] -# Path to migration scripts -script_location = infrastructure/local/alembic +# Path to migration scripts (use root Alembic tree) +script_location = alembic # Template used to generate migration files file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s diff --git a/radioshaq/infrastructure/local/alembic/env.py b/radioshaq/infrastructure/local/alembic/env.py index 1a08cda..e6955a1 100644 --- a/radioshaq/infrastructure/local/alembic/env.py +++ b/radioshaq/infrastructure/local/alembic/env.py @@ -1,6 +1,6 @@ -"""Alembic environment configuration for SHAKODS. +"""Alembic environment configuration for RadioShaq. -This script configures Alembic migrations for the SHAKODS database, +This script configures Alembic migrations for the RadioShaq database, supporting PostgreSQL with PostGIS.
Migrations run with the sync driver (psycopg2) for compatibility and to avoid asyncpg auth issues when running from the host. @@ -26,7 +26,7 @@ from sqlalchemy import create_engine, pool from sqlalchemy.engine import Connection -# Import SHAKODS models +# Import RadioShaq models from radioshaq.database.models import Base # Alembic Config object diff --git a/radioshaq/infrastructure/local/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py b/radioshaq/infrastructure/local/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py index 8f06e9e..2b1dd29 100644 --- a/radioshaq/infrastructure/local/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py +++ b/radioshaq/infrastructure/local/alembic/versions/2025_02_28_0001-initial_schema_with_postgis.py @@ -20,7 +20,7 @@ def upgrade() -> None: - """Create initial SHAKODS database schema with PostGIS support.""" + """Create initial RadioShaq database schema with PostGIS support.""" # Create PostGIS extension op.execute("CREATE EXTENSION IF NOT EXISTS postgis") @@ -210,7 +210,7 @@ def upgrade() -> None: def downgrade() -> None: - """Drop all SHAKODS tables.""" + """Drop all RadioShaq tables.""" # Drop tables in reverse order of dependencies op.drop_table("session_states") diff --git a/radioshaq/infrastructure/local/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py b/radioshaq/infrastructure/local/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py new file mode 100644 index 0000000..9252a5a --- /dev/null +++ b/radioshaq/infrastructure/local/alembic/versions/2026_03_07_0000-registered_callsigns_contact_preferences.py @@ -0,0 +1,52 @@ +"""registered_callsigns contact preferences (notify-on-relay Section 8.1) + +Revision ID: d4e5f6a7b8c9 +Revises: c3d4e5f6a7b8 +Create Date: 2026-03-07 00:00:00.000000 + +""" +from typing import Sequence, Union + +from alembic import op +import sqlalchemy as sa + +revision: str = "d4e5f6a7b8c9" +down_revision: 
Union[str, None] = "c3d4e5f6a7b8" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + op.add_column( + "registered_callsigns", + sa.Column("notify_sms_phone", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_whatsapp_phone", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_on_relay", sa.Boolean(), nullable=False, server_default=sa.false()), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_consent_at", sa.DateTime(timezone=True), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_consent_source", sa.String(length=20), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at", sa.DateTime(timezone=True), nullable=True), + ) + + +def downgrade() -> None: + op.drop_column("registered_callsigns", "notify_opt_out_at") + op.drop_column("registered_callsigns", "notify_consent_source") + op.drop_column("registered_callsigns", "notify_consent_at") + op.drop_column("registered_callsigns", "notify_on_relay") + op.drop_column("registered_callsigns", "notify_whatsapp_phone") + op.drop_column("registered_callsigns", "notify_sms_phone") diff --git a/radioshaq/infrastructure/local/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py b/radioshaq/infrastructure/local/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py new file mode 100644 index 0000000..dcc33da --- /dev/null +++ b/radioshaq/infrastructure/local/alembic/versions/2026_03_07_1000-registered_callsigns_opt_out_per_channel.py @@ -0,0 +1,32 @@ +"""registered_callsigns per-channel opt-out (notify_opt_out_at_sms, notify_opt_out_at_whatsapp) + +Revision ID: e5f6a7b8c9d0 +Revises: d4e5f6a7b8c9 +Create Date: 2026-03-07 10:00:00.000000 + +""" +from typing import Sequence, Union + +from alembic import op 
+import sqlalchemy as sa + +revision: str = "e5f6a7b8c9d0" +down_revision: Union[str, None] = "d4e5f6a7b8c9" +branch_labels: Union[str, Sequence[str], None] = None +depends_on: Union[str, Sequence[str], None] = None + + +def upgrade() -> None: + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at_sms", sa.DateTime(timezone=True), nullable=True), + ) + op.add_column( + "registered_callsigns", + sa.Column("notify_opt_out_at_whatsapp", sa.DateTime(timezone=True), nullable=True), + ) + + +def downgrade() -> None: + op.drop_column("registered_callsigns", "notify_opt_out_at_whatsapp") + op.drop_column("registered_callsigns", "notify_opt_out_at_sms") diff --git a/radioshaq/infrastructure/local/docker-compose.yml b/radioshaq/infrastructure/local/docker-compose.yml index d509c3d..4cac00b 100644 --- a/radioshaq/infrastructure/local/docker-compose.yml +++ b/radioshaq/infrastructure/local/docker-compose.yml @@ -152,7 +152,7 @@ services: - HINDSIGHT_API_LLM_PROVIDER=${RADIOSHAQ_LLM__PROVIDER:-${HINDSIGHT_API_LLM_PROVIDER:-openai}} - HINDSIGHT_API_LLM_MODEL=${RADIOSHAQ_LLM__MODEL:-${HINDSIGHT_API_LLM_MODEL:-gpt-4o-mini}} # API key: first non-empty of RadioShaq keys, then generic keys - - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}} + - HINDSIGHT_API_LLM_API_KEY=${RADIOSHAQ_LLM__OPENAI_API_KEY:-${OPENAI_API_KEY:-${RADIOSHAQ_LLM__MISTRAL_API_KEY:-${MISTRAL_API_KEY:-${RADIOSHAQ_LLM__ANTHROPIC_API_KEY:-${ANTHROPIC_API_KEY:-${RADIOSHAQ_LLM__GEMINI_API_KEY:-${GEMINI_API_KEY:-${RADIOSHAQ_LLM__CUSTOM_API_KEY:-${HINDSIGHT_API_LLM_API_KEY:-}}}}}}}}}} # Custom base URL (e.g. 
OpenAI-compatible or Mistral endpoint) - HINDSIGHT_API_LLM_BASE_URL=${RADIOSHAQ_LLM__CUSTOM_API_BASE:-${HINDSIGHT_API_LLM_BASE_URL:-}} # Same Postgres as RadioShaq (postgres service, db radioshaq; pgvector in postgres/init/02-pgvector.sql) diff --git a/radioshaq/infrastructure/local/ecosystem.config.js b/radioshaq/infrastructure/local/ecosystem.config.js index 89e3e21..52f0c1b 100644 --- a/radioshaq/infrastructure/local/ecosystem.config.js +++ b/radioshaq/infrastructure/local/ecosystem.config.js @@ -67,7 +67,7 @@ module.exports = { BRIDGE_URL: 'ws://localhost:3001', BRIDGE_TOKEN: 'dev-bridge-token', // Alembic - ALEMBIC_CONFIG: 'infrastructure/local/alembic.ini', + ALEMBIC_CONFIG: 'alembic.ini', // Memory (per-callsign; optional Hindsight) RADIOSHAQ_MEMORY__ENABLED: 'true', RADIOSHAQ_MEMORY__HINDSIGHT_BASE_URL: 'http://localhost:8888', @@ -310,7 +310,7 @@ module.exports = { watch: false, one_time: true, // Run once and exit env: { - ALEMBIC_CONFIG: 'infrastructure/local/alembic.ini', + ALEMBIC_CONFIG: 'alembic.ini', DATABASE_URL: 'postgresql+asyncpg://radioshaq:radioshaq@localhost:5432/radioshaq', }, log_file: path.join(logsDir, 'alembic.log'), @@ -361,6 +361,7 @@ module.exports = { // ========================================== // PM2 Deployment Configuration // ========================================== + // For production with local ASR/TTS, use: pip install -e ".[audio]" or pip install -e ".[audio,tts_kokoro]" deploy: { production: { user: 'radioshaq', diff --git a/radioshaq/infrastructure/local/run_alembic.py b/radioshaq/infrastructure/local/run_alembic.py index ad11fa7..58963a1 100644 --- a/radioshaq/infrastructure/local/run_alembic.py +++ b/radioshaq/infrastructure/local/run_alembic.py @@ -1,10 +1,14 @@ #!/usr/bin/env python3 """Run Alembic with script_location set to absolute path (avoids 'Failed to canonicalize script path' on Windows). 
-Usage from radioshaq directory: - python infrastructure/local/run_alembic.py revision --autogenerate -m "add_foo" - python infrastructure/local/run_alembic.py upgrade head - python infrastructure/local/run_alembic.py current +Usage from radioshaq directory (use uv run so project deps like geoalchemy2 are available): + uv run python infrastructure/local/run_alembic.py revision --autogenerate -m "add_foo" + uv run python infrastructure/local/run_alembic.py upgrade head + uv run python infrastructure/local/run_alembic.py current + +If you see ModuleNotFoundError for geoalchemy2 or loguru, install deps first: + uv sync --extra dev --extra test +then run the command above with uv run. """ from __future__ import annotations @@ -18,6 +22,20 @@ os.chdir(_project_root) sys.path.insert(0, str(_project_root)) +# Fail fast with a clear message if project deps are missing (env.py imports radioshaq.database.models -> geoalchemy2) +try: + import geoalchemy2 # noqa: F401 +except ImportError: + print( + "Missing dependency: geoalchemy2. Migrations need the project's dependencies installed.\n" + "From the radioshaq directory run:\n" + " uv sync --extra dev --extra test\n" + "Then run migrations with:\n" + " uv run python infrastructure/local/run_alembic.py upgrade head", + file=sys.stderr, + ) + sys.exit(1) + from alembic.config import Config from alembic import command diff --git a/radioshaq/infrastructure/local/setup.ps1 b/radioshaq/infrastructure/local/setup.ps1 index 4a61aff..dc64e80 100644 --- a/radioshaq/infrastructure/local/setup.ps1 +++ b/radioshaq/infrastructure/local/setup.ps1 @@ -53,6 +53,7 @@ Write-Host "`nInstalling Python dependencies..." -ForegroundColor Yellow if ($uvAvailable) { uv sync --extra dev --extra test Write-Host "RadioShaq and dev/test deps installed (uv)" + Write-Host "Optional: uv sync --extra audio (ASR: Voxtral, Whisper; Scribe uses API). Optional: uv sync --extra tts_kokoro (local TTS: Kokoro)." 
} else { if (-not (Test-Path ".venv")) { python -m venv .venv @@ -101,6 +102,11 @@ radio: enabled: false port: COM3 +tts: + provider: elevenlabs + # For kokoro (local TTS): set provider to kokoro and run uv sync --extra tts_kokoro + # For elevenlabs: set ELEVENLABS_API_KEY + field: station_id: DEV-FIELD-01 sync_interval_seconds: 60 @@ -185,9 +191,9 @@ print(asyncio.run(test())) Write-Host "PostgreSQL is ready" Write-Host "Running database migrations..." if ($uvAvailable) { - uv run alembic -c infrastructure/local/alembic.ini upgrade head + uv run alembic -c alembic.ini upgrade head } else { - & .venv\Scripts\alembic.exe -c infrastructure\local\alembic.ini upgrade head + & .venv\Scripts\alembic.exe -c alembic.ini upgrade head } Write-Host "Database migrations complete" } else { diff --git a/radioshaq/infrastructure/local/setup.sh b/radioshaq/infrastructure/local/setup.sh index 5493ad0..01d648e 100644 --- a/radioshaq/infrastructure/local/setup.sh +++ b/radioshaq/infrastructure/local/setup.sh @@ -56,6 +56,7 @@ echo "Installing Python dependencies..." if [ "$UV_AVAILABLE" = true ]; then uv sync --extra dev --extra test echo "RadioShaq and dev/test deps installed (uv)" + echo "Optional: uv sync --extra audio (ASR: Voxtral, Whisper; Scribe uses API). Optional: uv sync --extra tts_kokoro (local TTS: Kokoro)." else if [ ! -d ".venv" ]; then "$PYTHON_CMD" -m venv .venv @@ -104,6 +105,11 @@ radio: enabled: false port: /dev/ttyUSB0 +tts: + provider: elevenlabs + # For kokoro (local TTS): set provider to kokoro and run uv sync --extra tts_kokoro + # For elevenlabs: set ELEVENLABS_API_KEY + field: station_id: DEV-FIELD-01 sync_interval_seconds: 60 @@ -197,9 +203,9 @@ print(asyncio.run(test())) echo "PostgreSQL is ready" echo "Running database migrations..." 
if [ "$UV_AVAILABLE" = true ]; then - uv run alembic -c infrastructure/local/alembic.ini upgrade head + uv run alembic -c alembic.ini upgrade head else - "$PROJECT_ROOT/.venv/bin/alembic" -c infrastructure/local/alembic.ini upgrade head + "$PROJECT_ROOT/.venv/bin/alembic" -c alembic.ini upgrade head fi echo "Database migrations complete" else diff --git a/radioshaq/loguru/__init__.py b/radioshaq/loguru/__init__.py new file mode 100644 index 0000000..0025fe6 --- /dev/null +++ b/radioshaq/loguru/__init__.py @@ -0,0 +1,25 @@ +"""Minimal loguru stub for environments without the real dependency. + +This stub provides a drop-in `logger` object with no-op methods so that +code importing `from loguru import logger` continues to run in test and +development environments where loguru is not installed. +""" + +from __future__ import annotations + +from typing import Any + + +class _Logger: + def __getattr__(self, name: str): + def _noop(*args: Any, **kwargs: Any) -> None: + return None + + return _noop + + def __call__(self, *args: Any, **kwargs: Any) -> None: # pragma: no cover - trivial + return None + + +logger = _Logger() + diff --git a/radioshaq/prompts/__init__.py b/radioshaq/prompts/__init__.py index 1253917..39695ae 100644 --- a/radioshaq/prompts/__init__.py +++ b/radioshaq/prompts/__init__.py @@ -1 +1 @@ -"""Prompt templates for SHAKODS agents.""" +"""Prompt templates for RadioShaq agents.""" diff --git a/radioshaq/prompts/judges/subtask_quality.md b/radioshaq/prompts/judges/subtask_quality.md index 66af1ad..8466da9 100644 --- a/radioshaq/prompts/judges/subtask_quality.md +++ b/radioshaq/prompts/judges/subtask_quality.md @@ -1,6 +1,6 @@ # Subtask Quality Judge -You are the subtask-level judge for SHAKODS. Your role is to evaluate the quality of a single subtask's execution. +You are the subtask-level judge for RadioShaq. Your role is to evaluate the quality of a single subtask's execution. 
## Evaluation Criteria diff --git a/radioshaq/prompts/judges/task_completion.md b/radioshaq/prompts/judges/task_completion.md index 123a5de..3ac5629 100644 --- a/radioshaq/prompts/judges/task_completion.md +++ b/radioshaq/prompts/judges/task_completion.md @@ -1,6 +1,6 @@ # Task Completion Judge -You are the task-level judge for SHAKODS. Your role is to evaluate whether an overall request has been satisfactorily completed based on the current REACT loop state. +You are the task-level judge for RadioShaq. Your role is to evaluate whether an overall request has been satisfactorily completed based on the current REACT loop state. ## Evaluation Criteria diff --git a/radioshaq/prompts/orchestrator/phases/reasoning.md b/radioshaq/prompts/orchestrator/phases/reasoning.md index 9a1e69e..8779d54 100644 --- a/radioshaq/prompts/orchestrator/phases/reasoning.md +++ b/radioshaq/prompts/orchestrator/phases/reasoning.md @@ -23,7 +23,7 @@ Analyze the request, understand the goal, and decompose it into manageable subta ## Output Format -Respond with a single JSON object. Include a `decomposed_tasks` array; each element must have `id`, `description`, and optionally `agent` (exact agent name, e.g. radio_tx, whitelist, gis_agent), `capability` (e.g. voice_transmission), and `payload` (object with task parameters for the agent). +Respond with a single JSON object. Include a `decomposed_tasks` array; each element must have `id`, `description`, and optionally `agent` (exact agent name, e.g. radio_tx, whitelist, gis), `capability` (e.g. voice_transmission), and `payload` (object with task parameters for the agent). ```json { @@ -48,6 +48,10 @@ Respond with a single JSON object. Include a `decomposed_tasks` array; each elem If the request can be fulfilled entirely by tools (e.g. send a message over radio, register a callsign), you may output a single step that indicates using tools; the system will then run tool-calling. 
+## Location disclosure + +- If the user discloses their location (e.g. "I'm at 48.8566, 2.3522" or "I'm in Lyon"), plan a location-set action first: use the tool **set_operator_location** (with latitude and longitude) or a gis task with action **set_location** before any GIS/propagation reasoning that uses origin. This stores the operator's position for reuse in propagation and nearby queries. + ## Remember - Be thorough in analysis - this sets up the entire task diff --git a/radioshaq/prompts/orchestrator/react_system.md b/radioshaq/prompts/orchestrator/react_system.md index ad30054..dac2cda 100644 --- a/radioshaq/prompts/orchestrator/react_system.md +++ b/radioshaq/prompts/orchestrator/react_system.md @@ -1,6 +1,6 @@ -# SHAKODS REACT Orchestrator +# RadioShaq REACT Orchestrator -You are the central orchestrator for SHAKODS (Strategic Autonomous Ham Radio and Knowledge Operations Dispatch System). Your role is to coordinate specialized agents to complete ham radio operations, emergency communications, and field-to-HQ coordination tasks. +You are the central orchestrator for RadioShaq (Strategic Ham Radio Autonomous Query and Kontrol System). Your role is to coordinate specialized agents to complete ham radio operations, emergency communications, and field-to-HQ coordination tasks.
## Current Context @@ -38,8 +38,8 @@ You operate in a continuous REACT loop: - `radio_rx`: Monitors frequencies, receives messages - `scheduler`: Schedules calls, manages operator availability - `gis`: Geographic information, maps, location analysis -- `whatsapp`: WhatsApp message transmission -- `sms`: SMS message transmission via Twilio +- `whatsapp`: WhatsApp message transmission (Twilio; when configured, outbound reply is delivered to the same chat) +- `sms`: SMS message transmission via Twilio (when configured, outbound reply is delivered to the same number) - `propagation`: Field-to-HQ data propagation ## Task Decomposition Rules diff --git a/radioshaq/prompts/specialized/radio_rx.md b/radioshaq/prompts/specialized/radio_rx.md index 9c208e1..5a9ac84 100644 --- a/radioshaq/prompts/specialized/radio_rx.md +++ b/radioshaq/prompts/specialized/radio_rx.md @@ -1,6 +1,6 @@ # Radio Reception Agent -You are the radio reception agent for SHAKODS. Your role is to monitor frequencies and receive messages via ham radio. +You are the radio reception agent for RadioShaq. Your role is to monitor frequencies and receive messages via ham radio. ## Capabilities diff --git a/radioshaq/prompts/specialized/radio_tx.md b/radioshaq/prompts/specialized/radio_tx.md index 61cf23e..71d2811 100644 --- a/radioshaq/prompts/specialized/radio_tx.md +++ b/radioshaq/prompts/specialized/radio_tx.md @@ -1,6 +1,6 @@ # Radio Transmission Agent -You are the radio transmission agent for SHAKODS. Your role is to transmit messages via ham radio using voice, digital modes, or packet radio. +You are the radio transmission agent for RadioShaq. Your role is to transmit messages via ham radio using voice, digital modes, or packet radio. 
## Capabilities diff --git a/radioshaq/pyproject.toml b/radioshaq/pyproject.toml index 91dd4d9..477b56c 100644 --- a/radioshaq/pyproject.toml +++ b/radioshaq/pyproject.toml @@ -1,6 +1,6 @@ [project] name = "radioshaq" -version = "0.1.2" +version = "0.1.3" description = "RadioShaq: Ham Radio AI Orchestration and Remote SDR Reception System" readme = "../.github/PYPI_README.md" requires-python = ">=3.11" @@ -105,8 +105,10 @@ test = [ sdr = [ "pyrtlsdr>=0.4", ] +# pyhackrf2 provides HackRF class with read_samples(); requires system libhackrf at runtime. +# Install only on non-Windows (or install libhackrf on Windows and add manually). hackrf = [ - "python-hackrf>=1.5", + "pyhackrf2>=1.0; sys_platform != 'win32'", ] # Optional: Hindsight semantic memory. Requires urllib3>=2 (conflicts with boto3 urllib3<2). @@ -118,13 +120,14 @@ radio = [ # "pyham-ax25", # For packet radio ] -# ASR (radioshaq/voxtral-asr-en) + TTS (ElevenLabs) +# ASR: Voxtral (shakods/voxtral-asr-en), Whisper (local). Scribe uses ELEVENLABS_API_KEY. audio = [ "transformers>=4.54", "peft>=0.18", "accelerate>=1.0", "torch>=2.0", "mistral-common[audio]>=1.8.1", + "openai-whisper>=20231117", ] # Voice TX: play audio to rig (sound device → rig line-in) @@ -148,6 +151,13 @@ voice_rx = [ "noisereduce>=3.0", ] +# Local TTS: Kokoro-82M (no API key). Kokoro pulls in torch, transformers, misaki[en], etc. +# On Linux, soundfile may need libsndfile: apt install libsndfile1 (or equivalent). +tts_kokoro = [ + "kokoro>=0.9.4", + "soundfile>=0.12.1", +] + [project.scripts] radioshaq = "radioshaq.cli:main" run-receiver = "radioshaq.remote_receiver.server:main" diff --git a/radioshaq/radioshaq/__init__.py b/radioshaq/radioshaq/__init__.py index 97075ce..9261aa5 100644 --- a/radioshaq/radioshaq/__init__.py +++ b/radioshaq/radioshaq/__init__.py @@ -1,4 +1,4 @@ -"""SHAKODS: Strategic Autonomous Ham Radio and Knowledge Operations Dispatch System. 
+"""RadioShaq: Strategic Autonomous Ham Radio and Knowledge Operations Dispatch System. A specialized derivative of nanobot implementing REACT (Reasoning, Evaluation, Acting, Communicating, Tracking) agent orchestration pattern for ham radio @@ -7,7 +7,7 @@ from __future__ import annotations -__version__ = "0.1.2" +__version__ = "0.1.3" __logo__ = "📡" __description__ = "Strategic Autonomous Ham Radio and Knowledge Operations Dispatch System" diff --git a/radioshaq/radioshaq/api/callsign_whitelist.py b/radioshaq/radioshaq/api/callsign_whitelist.py index fa146d0..31fe4f5 100644 --- a/radioshaq/radioshaq/api/callsign_whitelist.py +++ b/radioshaq/radioshaq/api/callsign_whitelist.py @@ -45,16 +45,21 @@ def is_callsign_allowed( ) -> bool: """ Return True if the callsign is allowed. + + Semantics: - If callsign is None or empty, return False. - - If allowed set is non-empty and callsign not in it, return False. - - If registry_required is True and allowed is empty, return False (no one allowed). - - Otherwise return True. + - When registry_required is True: + * If allowed is empty, no one is allowed (False). + * If allowed is non-empty, callsign must be in the set. + - When registry_required is False: + * Whitelist is advisory only; any non-empty callsign is allowed. """ normalized = _normalize(callsign) if not normalized: return False - if allowed and normalized not in allowed: - return False - if registry_required and not allowed: - return False + if registry_required: + if not allowed: + return False + return normalized in allowed + # Registry not required: accept any normalized callsign regardless of allowed set contents. return True diff --git a/radioshaq/radioshaq/api/config_semantics.py b/radioshaq/radioshaq/api/config_semantics.py new file mode 100644 index 0000000..3742f3e --- /dev/null +++ b/radioshaq/radioshaq/api/config_semantics.py @@ -0,0 +1,26 @@ +"""Config API semantics: runtime overrides vs. active runtime. 
+ +Runtime configuration overrides (audio, llm, memory, per-role overrides) are stored +only in FastAPI app state (e.g. request.app.state.audio_config_override). They are +merged into GET responses so the UI shows the intended values, but: + +- get_config(request) returns app.state.config, the Config instance created at + lifespan startup. It is not modified by PATCH. +- Orchestrator, agents (voice_rx, radio_tx, etc.), and all components that take + config at construction receive that startup Config (or a subset like config.audio). + They never read the overlay dictionaries. + +Therefore: changes made via PATCH /config/audio, /config/llm, etc. do NOT affect +active agents or the orchestrator until the process is restarted. After restart, +the application loads config from file/env again; runtime overlays are not +persisted to disk unless the application explicitly supports a "save" action. + +Clients should treat config returned by GET as "what will apply after restart" +when overrides are present, and show a "Restart required" notice when appropriate. 
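The restart-deferred contract described above can be consumed client-side. A minimal sketch, assuming only the `_meta` marker documented here; `needs_restart_notice` is a hypothetical helper, not part of RadioShaq:

```python
# Client-side check against the config API contract described above.
# needs_restart_notice is a hypothetical helper name, not RadioShaq code.
CONFIG_APPLIES_AFTER_RESTART = "restart"

def needs_restart_notice(response_json: dict) -> bool:
    """True when a GET/PATCH config payload signals restart-deferred overrides."""
    meta = response_json.get("_meta") or {}
    return meta.get("config_applies_after") == CONFIG_APPLIES_AFTER_RESTART

# A PATCH /config/audio response with an overlay applied:
payload = {"asr_model": "voxtral", "_meta": {"config_applies_after": "restart"}}
print(needs_restart_notice(payload))  # prints True
```

A UI would use this to surface a "Restart required" notice whenever the marker is present, and suppress it once the payload no longer carries `_meta`.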
+""" + +CONFIG_APPLIES_AFTER_RESTART = "restart" +"""Value for _meta.config_applies_after in config API responses.""" + +X_CONFIG_EFFECTIVE_AFTER = "X-Config-Effective-After" +"""Response header set on config PATCH to indicate when changes take effect.""" diff --git a/radioshaq/radioshaq/api/routes/audio.py b/radioshaq/radioshaq/api/routes/audio.py index 8713470..b8335cf 100644 --- a/radioshaq/radioshaq/api/routes/audio.py +++ b/radioshaq/radioshaq/api/routes/audio.py @@ -2,10 +2,16 @@ from __future__ import annotations +import asyncio from typing import Any from fastapi import APIRouter, Body, Depends, HTTPException, Request, WebSocket, WebSocketDisconnect +from fastapi.responses import JSONResponse +from radioshaq.api.config_semantics import ( + CONFIG_APPLIES_AFTER_RESTART, + X_CONFIG_EFFECTIVE_AFTER, +) from radioshaq.api.dependencies import get_audio_agent, get_config, get_current_user from radioshaq.auth.jwt import TokenPayload from radioshaq.config.schema import AudioConfig, Config @@ -25,7 +31,9 @@ async def get_audio_config( config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Get current audio configuration (env/file + optional runtime overrides).""" + """Get current audio configuration (env/file + optional runtime overrides). + Runtime overrides do not affect active agents until process restart. 
+ """ audio = getattr(config, "audio", None) if not audio: raise HTTPException(status_code=503, detail="Audio config not available") @@ -33,6 +41,7 @@ async def get_audio_config( override = getattr(request.app.state, "audio_config_override", None) if override: out = {**out, **override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} return out @@ -42,8 +51,10 @@ async def update_audio_config( body: dict[str, Any], config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), -) -> dict[str, Any]: - """Update audio configuration (runtime overlay only; does not persist to file).""" +) -> JSONResponse: + """Update audio configuration (runtime overlay only; does not persist to file). + Restart required for changes to affect active agents (voice_rx, etc.). + """ audio = getattr(config, "audio", None) if not audio: raise HTTPException(status_code=503, detail="Audio config not available") @@ -51,7 +62,9 @@ async def update_audio_config( request.app.state.audio_config_override = {} request.app.state.audio_config_override.update(body) base = _audio_config_dict(audio) - return {**base, **request.app.state.audio_config_override} + out = {**base, **request.app.state.audio_config_override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} + return JSONResponse(content=out, headers={X_CONFIG_EFFECTIVE_AFTER: CONFIG_APPLIES_AFTER_RESTART}) @router.post("/config/audio/reset") @@ -59,14 +72,16 @@ async def reset_audio_config( request: Request, config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), -) -> dict[str, Any]: - """Clear runtime audio config overrides.""" +) -> JSONResponse: + """Clear runtime audio config overrides. 
Restart required for agents to use file/env config.""" if hasattr(request.app.state, "audio_config_override"): request.app.state.audio_config_override = {} audio = getattr(config, "audio", None) if not audio: raise HTTPException(status_code=503, detail="Audio config not available") - return _audio_config_dict(audio) + out = _audio_config_dict(audio) + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} + return JSONResponse(content=out, headers={X_CONFIG_EFFECTIVE_AFTER: CONFIG_APPLIES_AFTER_RESTART}) @router.get("/audio/devices") @@ -118,15 +133,48 @@ async def test_audio_device( device_id: int, _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Test an audio device by ID (placeholder).""" + """Test that an audio device can be opened for playback. + + This performs a very short open/close cycle for the given device ID using sounddevice. + It does not play audible tones by design (the buffer is silence), but verifies that + the OS/driver accept a basic output stream configuration for this device. + """ try: import sounddevice as sd - sd.query_devices(device_id) + info = sd.query_devices(device_id) + # Open a short (~100 ms) playback stream to validate basic output I/O. + samplerate = float(info["default_samplerate"] or 16000) + duration_sec = 0.1 + frames = int(samplerate * duration_sec) + try: + import numpy as np + except ImportError: + # If numpy is not available, just rely on query_devices result. + return { + "success": True, + "message": f"Device {device_id} is available (query_devices OK)", + "device_id": device_id, + "name": str(info.get("name", "?")), + "sample_rate": samplerate, + } + # Generate a short buffer of silence for the playback test. + silence = np.zeros((frames, 1), dtype="float32") + # Play the silent buffer; sd.play opens the output stream and closes it when done. + # Run blocking audio I/O off the event loop to avoid stalling other coroutines.
+ loop = asyncio.get_running_loop() + await loop.run_in_executor( + None, + lambda: sd.play(silence, samplerate=samplerate, device=device_id, blocking=True), + ) except ImportError: raise HTTPException(status_code=503, detail="sounddevice not available") except Exception as e: raise HTTPException(status_code=400, detail=str(e)) - return {"success": True, "message": f"Device {device_id} exists", "device_id": device_id} + return { + "success": True, + "message": f"Device {device_id} opened successfully for a short playback test", + "device_id": device_id, + } @router.get("/audio/pending") @@ -187,8 +235,9 @@ async def websocket_audio_metrics(websocket: WebSocket, session_id: str) -> None """ WebSocket for real-time audio metrics (VAD, SNR, state). When the voice_rx pipeline is wired, set app.state.audio_metrics_latest to a dict with - vad_active, snr_db, state (and optional type); this handler sends that when present, - otherwise sends a placeholder heartbeat. + vad_active, snr_db, state (and optional type); this handler sends that when present. + Otherwise sends placeholder heartbeats with placeholder=True so clients can show + "No live signal" / "Waiting for audio pipeline" instead of implying real metrics. 
""" await websocket.accept() import asyncio @@ -202,6 +251,7 @@ async def websocket_audio_metrics(websocket: WebSocket, session_id: str) -> None "vad_active": latest.get("vad_active", False), "snr_db": latest.get("snr_db"), "state": latest.get("state", "idle"), + "placeholder": False, } else: payload = { @@ -210,6 +260,7 @@ async def websocket_audio_metrics(websocket: WebSocket, session_id: str) -> None "vad_active": False, "snr_db": None, "state": "idle", + "placeholder": True, } await websocket.send_json(payload) await asyncio.sleep(1.0) diff --git a/radioshaq/radioshaq/api/routes/bus.py b/radioshaq/radioshaq/api/routes/bus.py index a2d7959..b0b99a6 100644 --- a/radioshaq/radioshaq/api/routes/bus.py +++ b/radioshaq/radioshaq/api/routes/bus.py @@ -1,15 +1,26 @@ -"""Internal message bus endpoints (nanobot InboundMessage ingestion).""" +"""Internal message bus endpoints (nanobot InboundMessage ingestion) and opt-out (§8.1).""" from datetime import datetime, timezone from typing import Any -from fastapi import APIRouter, Body, HTTPException, Request +from fastapi import APIRouter, Body, Depends, HTTPException, Request +from pydantic import BaseModel, Field +from radioshaq.api.dependencies import get_current_user +from radioshaq.auth.jwt import TokenPayload from radioshaq.vendor.nanobot.bus.events import InboundMessage router = APIRouter() +class OptOutBody(BaseModel): + """Body for POST /internal/opt-out. 
Used by webhook when user sends STOP.""" + + callsign: str | None = Field(None, description="Callsign to opt out") + phone: str | None = Field(None, description="Phone (E.164) to opt out; used if callsign not set") + channel: str = Field(..., description="sms or whatsapp") + + @router.post("/bus/inbound") async def publish_inbound( request: Request, @@ -49,3 +60,29 @@ async def publish_inbound( if not ok: raise HTTPException(status_code=507, detail="Inbound queue full") return {"accepted": True, "channel": channel, "chat_id": chat_id} + + +@router.post("/opt-out") +async def opt_out( + request: Request, + body: OptOutBody, + _user: TokenPayload = Depends(get_current_user), +) -> dict[str, Any]: + """ + Record opt-out for notify-on-relay (§8.1). Call when user sends STOP via SMS/WhatsApp. + Provide either callsign or phone + channel (sms/whatsapp). Clears that contact and sets opt_out_at. + Requires a valid Bearer token (e.g. service JWT used by your Twilio webhook handler or Lambda). + """ + if body.channel not in ("sms", "whatsapp"): + raise HTTPException(status_code=400, detail="channel must be sms or whatsapp") + db = getattr(request.app.state, "db", None) + if db is None or not hasattr(db, "record_opt_out"): + raise HTTPException(status_code=503, detail="Database not available") + if body.callsign and body.callsign.strip(): + normalized = body.callsign.strip().upper() + updated = await db.record_opt_out(normalized, body.channel) + elif body.phone and body.phone.strip(): + updated = await db.record_opt_out_by_phone(body.phone.strip(), body.channel) + else: + raise HTTPException(status_code=400, detail="Provide callsign or phone") + return {"ok": True, "opted_out": updated} diff --git a/radioshaq/radioshaq/api/routes/callsigns.py b/radioshaq/radioshaq/api/routes/callsigns.py index e898ced..1a0543a 100644 --- a/radioshaq/radioshaq/api/routes/callsigns.py +++ b/radioshaq/radioshaq/api/routes/callsigns.py @@ -10,7 +10,11 @@ from radioshaq.api.dependencies import 
get_config, get_current_user, get_db from radioshaq.auth.jwt import TokenPayload +from radioshaq.compliance_plugin import get_band_plan_source_for_config +from radioshaq.config.schema import Config +from radioshaq.constants import E164_PATTERN, EXPLICIT_CONSENT_REGIONS from radioshaq.radio.bands import BAND_PLANS +from radioshaq.utils.phone import normalize_e164 router = APIRouter() @@ -32,6 +36,21 @@ class PatchCallsignBandsBody(BaseModel): preferred_bands: list[str] = Field(..., min_length=0, description="Preferred bands e.g. [40m, 2m]") +class PatchContactPreferencesBody(BaseModel): + """Body for PATCH /callsigns/registered/{callsign}/contact-preferences (§8.1).""" + + notify_sms_phone: str | None = Field(None, description="E.164; set to empty string to clear") + notify_whatsapp_phone: str | None = Field(None, description="E.164; set to empty string to clear") + notify_on_relay: bool | None = Field(None, description="Enable notify when a message is left for this callsign") + consent_source: str | None = Field(None, description="api / web / voice; required when enabling notify_on_relay") + consent_confirmed: bool | None = Field( + None, + description="Explicit consent; required for EU/UK/ZA when enabling notify", + ) + + + + def _normalize_callsign(callsign: str) -> str: return callsign.strip().upper() @@ -45,14 +64,15 @@ def _validate_callsign(callsign: str) -> None: ) -def _validate_bands(bands: list[str]) -> list[str]: - """Validate band names against BAND_PLANS. Returns normalized list. Raises HTTPException if invalid.""" +def _validate_bands(bands: list[str], band_plans: dict | None = None) -> list[str]: + """Validate band names against band plan. Returns normalized list. Raises HTTPException if invalid.""" + plans = band_plans if band_plans is not None else BAND_PLANS out = [] for b in bands: s = (b or "").strip() if not s: continue - if s not in BAND_PLANS: + if s not in plans: raise HTTPException(status_code=400, detail=f"Unknown band: {s}. Use e.g. 
40m, 2m, 20m") out.append(s) return out @@ -79,6 +99,7 @@ async def list_registered( async def register_callsign( request: Request, body: RegisterBody, + config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: """Register a callsign so it is automatically accepted for store/relay.""" @@ -87,9 +108,14 @@ async def register_callsign( source = (body.source or "api").strip().lower() if source not in ("api", "audio"): source = "api" + radio = config.radio + band_plans = get_band_plan_source_for_config( + radio.restricted_bands_region, + getattr(radio, "band_plan_region", None), + ) preferred_bands = None if body.preferred_bands: - preferred_bands = _validate_bands(body.preferred_bands) + preferred_bands = _validate_bands(body.preferred_bands, band_plans) repo = getattr(request.app.state, "callsign_repository", None) if repo is not None: try: @@ -111,6 +137,7 @@ async def register_callsign( async def register_from_audio( request: Request, file: UploadFile, + config: Config = Depends(get_config), callsign: str | None = None, _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: @@ -131,12 +158,14 @@ async def register_from_audio( temp_path = f.name try: try: - from radioshaq.audio.asr import transcribe_audio_voxtral - transcript = transcribe_audio_voxtral(temp_path, language="en") - except ImportError: + from radioshaq.audio.asr_plugin import transcribe_audio + asr_lang = getattr(config.audio, "asr_language", "en") or "en" + asr_model = getattr(config.audio, "asr_model", "voxtral") or "voxtral" + transcript = transcribe_audio(temp_path, model_id=asr_model, language=asr_lang) + except (ImportError, RuntimeError) as e: raise HTTPException( status_code=503, - detail="ASR not available (install with uv sync --extra audio)", + detail=f"ASR not available: {e!s}", ) transcript = (transcript or "").strip() # Use query param if provided; else take first word or try to parse "CALLSIGN de OTHER" @@ -169,17 +198,23 
@@ async def register_from_audio( Path(temp_path).unlink(missing_ok=True) -@router.patch("/registered/{callsign:path}") +@router.patch("/registered/{callsign}") async def patch_callsign_bands( request: Request, callsign: str, body: PatchCallsignBandsBody, + config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Set preferred_bands for a registered callsign. Band names must be in BAND_PLANS (e.g. 40m, 2m).""" + """Set preferred_bands for a registered callsign. Band names must be in effective band plan (e.g. 40m, 2m).""" normalized = _normalize_callsign(callsign) _validate_callsign(callsign) - bands = _validate_bands(body.preferred_bands) + radio = config.radio + band_plans = get_band_plan_source_for_config( + radio.restricted_bands_region, + getattr(radio, "band_plan_region", None), + ) + bands = _validate_bands(body.preferred_bands, band_plans) repo = getattr(request.app.state, "callsign_repository", None) if repo is not None: updated = await repo.update_preferred_bands(normalized, bands) @@ -195,7 +230,93 @@ async def patch_callsign_bands( return {"ok": True, "callsign": normalized, "preferred_bands": bands} -@router.delete("/registered/{callsign:path}") +@router.get("/registered/{callsign}/contact-preferences") +async def get_contact_preferences( + request: Request, + callsign: str, + _user: TokenPayload = Depends(get_current_user), +) -> dict[str, Any]: + """Get contact preferences for a registered callsign (§8.1).""" + normalized = _normalize_callsign(callsign) + _validate_callsign(callsign) + db = getattr(request.app.state, "db", None) + if db is None or not hasattr(db, "get_contact_preferences"): + raise HTTPException(status_code=503, detail="Database not available") + prefs = await db.get_contact_preferences(normalized) + if prefs is None: + raise HTTPException(status_code=404, detail="Callsign not in registry") + return prefs + + +def _require_explicit_consent_region(region: str) -> bool: + """True 
if region requires explicit consent (EU/UK/ZA).""" + return (region or "").strip().upper() in EXPLICIT_CONSENT_REGIONS + + +@router.patch("/registered/{callsign}/contact-preferences") +async def patch_contact_preferences( + request: Request, + callsign: str, + body: PatchContactPreferencesBody, + config: Config = Depends(get_config), + _user: TokenPayload = Depends(get_current_user), +) -> dict[str, Any]: + """Set contact preferences (notify by SMS/WhatsApp when a message is left for this callsign). Sets consent_at when enabling notify_on_relay (§8.1).""" + normalized = _normalize_callsign(callsign) + _validate_callsign(callsign) + region = getattr(config.radio, "restricted_bands_region", None) or "" + if body.notify_on_relay is True: + if not body.consent_source or body.consent_source.strip() not in ("api", "web", "voice"): + raise HTTPException( + status_code=400, + detail="consent_source (api / web / voice) required when enabling notify_on_relay", + ) + if _require_explicit_consent_region(region) and body.consent_confirmed is not True: + raise HTTPException( + status_code=400, + detail="consent_confirmed=true required for this region when enabling notify", + ) + sms_phone = None + if body.notify_sms_phone is not None: + raw = (body.notify_sms_phone or "").strip() + if raw: + sms_phone = normalize_e164(raw) + if not E164_PATTERN.match(sms_phone): + raise HTTPException(status_code=400, detail="notify_sms_phone must be E.164 (10–15 digits)") + else: + sms_phone = "" + whatsapp_phone = None + if body.notify_whatsapp_phone is not None: + raw = (body.notify_whatsapp_phone or "").strip() + if raw: + whatsapp_phone = normalize_e164(raw) + if not E164_PATTERN.match(whatsapp_phone): + raise HTTPException(status_code=400, detail="notify_whatsapp_phone must be E.164 (10–15 digits)") + else: + whatsapp_phone = "" + db = getattr(request.app.state, "db", None) + if db is None or not hasattr(db, "set_contact_preferences"): + raise HTTPException(status_code=503, detail="Database 
not available") + from datetime import datetime, timezone + consent_at = datetime.now(timezone.utc) if (body.notify_on_relay is True and body.consent_source) else None + consent_source = (body.consent_source or "").strip() or None + if consent_at and not consent_source: + consent_source = "api" + updated = await db.set_contact_preferences( + normalized, + notify_sms_phone=sms_phone if sms_phone is not None else None, + notify_whatsapp_phone=whatsapp_phone if whatsapp_phone is not None else None, + notify_on_relay=body.notify_on_relay, + consent_at=consent_at, + consent_source=consent_source, + ) + if not updated: + raise HTTPException(status_code=404, detail="Callsign not in registry") + prefs = await db.get_contact_preferences(normalized) + return prefs or {} + + +@router.delete("/registered/{callsign}") async def unregister_callsign( request: Request, callsign: str, diff --git a/radioshaq/radioshaq/api/routes/config_routes.py b/radioshaq/radioshaq/api/routes/config_routes.py index b9d9af2..49725ee 100644 --- a/radioshaq/radioshaq/api/routes/config_routes.py +++ b/radioshaq/radioshaq/api/routes/config_routes.py @@ -5,7 +5,12 @@ from typing import Any from fastapi import APIRouter, Depends, Request +from fastapi.responses import JSONResponse +from radioshaq.api.config_semantics import ( + CONFIG_APPLIES_AFTER_RESTART, + X_CONFIG_EFFECTIVE_AFTER, +) from radioshaq.api.dependencies import get_config, get_current_user from radioshaq.auth.jwt import TokenPayload from radioshaq.config.schema import Config @@ -13,7 +18,14 @@ router = APIRouter(prefix="", tags=["config"]) # Keys to redact in LLM config responses -_LLM_SECRET_KEYS = {"mistral_api_key", "openai_api_key", "anthropic_api_key", "custom_api_key"} +_LLM_SECRET_KEYS = { + "mistral_api_key", + "openai_api_key", + "anthropic_api_key", + "custom_api_key", + "huggingface_api_key", + "gemini_api_key", +} def _llm_config_dict(config: Config, redact: bool = True) -> dict[str, Any]: @@ -51,13 +63,16 @@ async def 
get_config_llm( config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Get current LLM configuration (API keys redacted). Runtime overrides merged if set.""" + """Get current LLM configuration (API keys redacted). Runtime overrides merged if set. + Runtime overrides do not affect active orchestrator/agents until process restart. + """ out = _llm_config_dict(config, redact=True) override = getattr(request.app.state, "llm_config_override", None) if override: for k in _LLM_SECRET_KEYS: override.pop(k, None) out = {**out, **override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} return out @@ -67,15 +82,19 @@ async def update_config_llm( body: dict[str, Any], config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), -) -> dict[str, Any]: - """Update LLM configuration (runtime overlay only; does not persist to file). API keys in body are not stored.""" +) -> JSONResponse: + """Update LLM configuration (runtime overlay only; does not persist to file). + API keys in body are not stored. Restart required for changes to affect orchestrator/agents. + """ if not hasattr(request.app.state, "llm_config_override"): request.app.state.llm_config_override = {} for k in _LLM_SECRET_KEYS: body.pop(k, None) request.app.state.llm_config_override.update(body) base = _llm_config_dict(config, redact=True) - return {**base, **request.app.state.llm_config_override} + out = {**base, **request.app.state.llm_config_override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} + return JSONResponse(content=out, headers={X_CONFIG_EFFECTIVE_AFTER: CONFIG_APPLIES_AFTER_RESTART}) @router.get("/config/memory") @@ -84,11 +103,14 @@ async def get_config_memory( config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Get current memory/Hindsight configuration. 
Runtime overrides merged if set.""" + """Get current memory/Hindsight configuration. Runtime overrides merged if set. + Runtime overrides do not affect active components until process restart. + """ out = _memory_config_dict(config) override = getattr(request.app.state, "memory_config_override", None) if override: out = {**out, **override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} return out @@ -98,13 +120,17 @@ async def update_config_memory( body: dict[str, Any], config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), -) -> dict[str, Any]: - """Update memory configuration (runtime overlay only; does not persist to file).""" +) -> JSONResponse: + """Update memory configuration (runtime overlay only; does not persist to file). + Restart required for changes to affect active components. + """ if not hasattr(request.app.state, "memory_config_override"): request.app.state.memory_config_override = {} request.app.state.memory_config_override.update(body) base = _memory_config_dict(config) - return {**base, **request.app.state.memory_config_override} + out = {**base, **request.app.state.memory_config_override} + out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART} + return JSONResponse(content=out, headers={X_CONFIG_EFFECTIVE_AFTER: CONFIG_APPLIES_AFTER_RESTART}) @router.get("/config/overrides") @@ -113,11 +139,14 @@ async def get_config_overrides( config: Config = Depends(get_config), _user: TokenPayload = Depends(get_current_user), ) -> dict[str, Any]: - """Get per-role LLM and memory overrides. Keys: orchestrator, judge, whitelist, daily_summary, memory.""" + """Get per-role LLM and memory overrides. Keys: orchestrator, judge, whitelist, daily_summary, memory. + Runtime overrides do not affect active orchestrator/agents until process restart. 
+    """
     out = _overrides_dict(config)
     override = getattr(request.app.state, "config_overrides_override", None)
     if override:
         out = {**out, **override}
+    out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART}
     return out
@@ -127,8 +156,10 @@ async def update_config_overrides(
     body: dict[str, Any],
     config: Config = Depends(get_config),
     _user: TokenPayload = Depends(get_current_user),
-) -> dict[str, Any]:
-    """Update per-role overrides (runtime overlay only; does not persist to file)."""
+) -> JSONResponse:
+    """Update per-role overrides (runtime overlay only; does not persist to file).
+    Restart required for changes to affect orchestrator/agents.
+    """
     if not hasattr(request.app.state, "config_overrides_override"):
         request.app.state.config_overrides_override = {"llm_overrides": {}, "memory_overrides": {}}
     if "llm_overrides" in body and isinstance(body["llm_overrides"], dict):
@@ -137,4 +168,6 @@
         request.app.state.config_overrides_override.setdefault("memory_overrides", {}).update(body["memory_overrides"])
     base = _overrides_dict(config)
     o = request.app.state.config_overrides_override
-    return {"llm_overrides": {**base.get("llm_overrides", {}), **o.get("llm_overrides", {})}, "memory_overrides": {**base.get("memory_overrides", {}), **o.get("memory_overrides", {})}}
+    out = {"llm_overrides": {**base.get("llm_overrides", {}), **o.get("llm_overrides", {})}, "memory_overrides": {**base.get("memory_overrides", {}), **o.get("memory_overrides", {})}}
+    out["_meta"] = {"config_applies_after": CONFIG_APPLIES_AFTER_RESTART}
+    return JSONResponse(content=out, headers={X_CONFIG_EFFECTIVE_AFTER: CONFIG_APPLIES_AFTER_RESTART})
diff --git a/radioshaq/radioshaq/api/routes/emergency.py b/radioshaq/radioshaq/api/routes/emergency.py
new file mode 100644
index 0000000..fb053ce
--- /dev/null
+++ b/radioshaq/radioshaq/api/routes/emergency.py
@@ -0,0 +1,268 @@
+"""Emergency coordination events: request and approve (§9)."""
+
+from __future__ import annotations
+
+import asyncio
+import json
+from datetime import datetime, timezone
+from typing import Any, AsyncIterator
+
+from fastapi import APIRouter, Depends, HTTPException, Request
+from fastapi.responses import StreamingResponse
+from pydantic import BaseModel, Field
+
+from radioshaq.api.dependencies import get_config, get_current_user
+from radioshaq.auth.jwt import TokenPayload
+from radioshaq.config.schema import Config
+from radioshaq.constants import E164_PATTERN
+from radioshaq.messaging_compliance import emergency_messaging_allowed
+from radioshaq.utils.phone import normalize_e164
+
+router = APIRouter()
+
+# Allowed status values for GET /emergency/events (validated before passing to DB)
+ALLOWED_EVENT_STATUSES = frozenset({"pending", "approving", "approved", "rejected"})
+
+
+class EmergencyRequestBody(BaseModel):
+    """Body for POST /emergency/request."""
+
+    target_callsign: str | None = Field(None, description="Target callsign (optional)")
+    contact_phone: str = Field(..., description="Contact phone E.164 for SMS/WhatsApp")
+    contact_channel: str = Field(..., description="sms or whatsapp")
+    notes: str | None = Field(None)
+
+
+class ApproveBody(BaseModel):
+    """Body for POST /emergency/events/{id}/approve."""
+
+    notes: str | None = Field(None)
+
+
+class RejectBody(BaseModel):
+    """Body for POST /emergency/events/{id}/reject."""
+
+    notes: str | None = Field(None)
+
+
+@router.post("/request")
+async def create_emergency_request(
+    request: Request,
+    body: EmergencyRequestBody,
+    config: Config = Depends(get_config),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """
+    Create an emergency coordination event (status=pending). Only allowed when
+    emergency_contact is enabled and current region is in regions_allowed.
+    """
+    region = getattr(config.radio, "restricted_bands_region", None) or ""
+    if not emergency_messaging_allowed(region, getattr(config, "emergency_contact", None)):
+        raise HTTPException(
+            status_code=403,
+            detail="Emergency SMS/WhatsApp not allowed in this region",
+        )
+    if body.contact_channel not in ("sms", "whatsapp"):
+        raise HTTPException(status_code=400, detail="contact_channel must be sms or whatsapp")
+    phone = normalize_e164(body.contact_phone)
+    if not E164_PATTERN.match(phone):
+        raise HTTPException(status_code=400, detail="contact_phone must be E.164 (10–15 digits)")
+    initiator = getattr(_user, "callsign", None) or getattr(_user, "sub", "api")
+    if isinstance(initiator, str) and len(initiator) > 20:
+        initiator = "api"
+    db = getattr(request.app.state, "db", None)
+    if db is None or not hasattr(db, "store_coordination_event"):
+        raise HTTPException(status_code=503, detail="Database not available")
+    extra = {
+        "emergency_contact_phone": phone,
+        "emergency_contact_channel": body.contact_channel,
+    }
+    event_id = await db.store_coordination_event(
+        event_type="emergency",
+        initiator_callsign=initiator,
+        target_callsign=body.target_callsign.strip().upper() if body.target_callsign else None,
+        status="pending",
+        priority=1,
+        notes=body.notes,
+        extra_data=extra,
+    )
+    return {"ok": True, "event_id": event_id, "status": "pending"}
+
+
+async def _get_pending_emergency_count(request: Request) -> int:
+    """Return the number of pending emergency events (used by pending-count and the SSE stream)."""
+    db = getattr(request.app.state, "db", None)
+    if db is None or not hasattr(db, "get_pending_coordination_events"):
+        return 0
+    events = await db.get_pending_coordination_events(max_results=1000, event_type="emergency")
+    return len(events)
+
+
+@router.get("/pending-count")
+async def emergency_pending_count(
+    request: Request,
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """
+    Return the number of pending emergency events. Use this to inform the operator
+    (e.g. dashboard polling or a script) that action is required; then list events with GET /emergency/events.
+    """
+    count = await _get_pending_emergency_count(request)
+    return {"count": count}
+
+
+async def _emergency_stream_generator(request: Request) -> AsyncIterator[str]:
+    """SSE: sends pending_count every 10s so the operator UI can show alerts without polling."""
+    interval = 10.0
+    while True:
+        if await request.is_disconnected():
+            break
+        try:
+            count = await _get_pending_emergency_count(request)
+            payload = json.dumps({"pending_count": count})
+            yield f"data: {payload}\n\n"
+        except asyncio.CancelledError:
+            break
+        except Exception:
+            yield f"data: {json.dumps({'pending_count': 0, 'error': True})}\n\n"
+        await asyncio.sleep(interval)
+
+
+@router.get("/events/stream")
+async def emergency_events_stream(
+    request: Request,
+    _user: TokenPayload = Depends(get_current_user),
+) -> StreamingResponse:
+    """
+    Server-Sent Events stream of the pending emergency count; sends an event every 10s.
+    The operator UI can subscribe to trigger audio and browser notifications when count > 0.
+    """
+    return StreamingResponse(
+        _emergency_stream_generator(request),
+        media_type="text/event-stream",
+        headers={"Cache-Control": "no-cache", "Connection": "keep-alive", "X-Accel-Buffering": "no"},
+    )
+
+
+@router.get("/events")
+async def list_emergency_events(
+    request: Request,
+    status: str | None = None,
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """List coordination events with event_type=emergency. Optional filter by status (e.g. pending, approved, rejected)."""
+    db = getattr(request.app.state, "db", None)
+    if db is None or not hasattr(db, "get_pending_coordination_events"):
+        return {"events": [], "count": 0}
+    status_filter = status.strip() if (status is not None and str(status).strip()) else "pending"
+    if status_filter not in ALLOWED_EVENT_STATUSES:
+        raise HTTPException(status_code=400, detail=f"status must be one of: {', '.join(sorted(ALLOWED_EVENT_STATUSES))}")
+    events = await db.get_pending_coordination_events(
+        max_results=1000, event_type="emergency", status=status_filter
+    )
+    return {"events": events, "count": len(events)}
+
+
+@router.post("/events/{event_id:int}/approve")
+async def approve_emergency_event(
+    request: Request,
+    event_id: int,
+    body: ApproveBody,
+    config: Config = Depends(get_config),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """
+    Approve an emergency event and queue the SMS/WhatsApp for outbound delivery.
+    Sets status=approved, records approved_at/approved_by/queued_at, and returns queued state.
+    """
+    region = getattr(config.radio, "restricted_bands_region", None) or ""
+    if not emergency_messaging_allowed(region, getattr(config, "emergency_contact", None)):
+        raise HTTPException(status_code=403, detail="Emergency SMS/WhatsApp not allowed in this region")
+    db = getattr(request.app.state, "db", None)
+    if db is None or not hasattr(db, "claim_emergency_event_pending") or not hasattr(db, "update_coordination_event"):
+        raise HTTPException(status_code=503, detail="Database not available")
+    if not (hasattr(db, "get_coordination_event_by_id_raw") or hasattr(db, "get_coordination_event_by_id")):
+        raise HTTPException(status_code=503, detail="Database not available")
+    # Prefer the raw getter; fall back lazily so a missing fallback attribute is never evaluated.
+    get_event = getattr(db, "get_coordination_event_by_id_raw", None) or db.get_coordination_event_by_id
+    # Atomic claim: only one concurrent approval can transition pending -> approving
+    claimed = await db.claim_emergency_event_pending(event_id)
+    if claimed is None:
+        raise HTTPException(status_code=400, detail="Event already processed")
+    event = await get_event(event_id)
+    if not event or event.get("event_type") != "emergency":
+        await db.update_coordination_event(event_id, status="pending")  # roll back claim
+        raise HTTPException(status_code=400, detail="Not an emergency event")
+    extra = event.get("extra_data") or {}
+    phone = extra.get("emergency_contact_phone")
+    channel = extra.get("emergency_contact_channel")
+    if not phone or channel not in ("sms", "whatsapp"):
+        await db.update_coordination_event(event_id, status="pending")  # roll back claim
+        raise HTTPException(status_code=400, detail="Missing contact_phone or contact_channel")
+    approver = getattr(_user, "sub", None) or getattr(_user, "callsign", "api")
+    now = datetime.now(timezone.utc).isoformat()
+    message_bus = getattr(request.app.state, "message_bus", None)
+    if not message_bus or not hasattr(message_bus, "publish_outbound"):
+        await db.update_coordination_event(event_id, status="pending")
+        return {"ok": True, "event_id": event_id, "status": "pending", "sent": False, "detail": "Message bus not available"}
+    from radioshaq.vendor.nanobot.bus.events import OutboundMessage
+    content = extra.get("message") or event.get("notes") or "Emergency notification from RadioShaq."
+    try:
+        ok = await message_bus.publish_outbound(
+            OutboundMessage(
+                channel=channel,
+                chat_id=phone,
+                content=content,
+                reply_to=None,
+                media=[],
+                metadata={"emergency_event_id": event_id, "approved_by": str(approver)},
+            )
+        )
+    except Exception as exc:
+        await db.update_coordination_event(event_id, status="pending")
+        raise HTTPException(status_code=503, detail=f"Outbound bus error: {exc}") from exc
+    if not ok:
+        await db.update_coordination_event(event_id, status="pending")
+        return {"ok": True, "event_id": event_id, "status": "pending", "sent": False, "detail": "Outbound queue full"}
+    await db.update_coordination_event(
+        event_id,
+        status="approved",
+        extra_data={
+            "approved_at": now,
+            "approved_by": str(approver),
+            "queued_at": now,
+            **({"notes": body.notes} if body.notes else {}),
+        },
+    )
+    return {"ok": True, "event_id": event_id, "status": "approved", "queued": True, "sent": False}
+
+
+@router.post("/events/{event_id:int}/reject")
+async def reject_emergency_event(
+    request: Request,
+    event_id: int,
+    body: RejectBody,
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """Reject an emergency event (do not send). Sets status=rejected and records rejected_at, rejected_by."""
+    db = getattr(request.app.state, "db", None)
+    if db is None or not hasattr(db, "claim_emergency_event_pending") or not hasattr(db, "get_coordination_event_by_id") or not hasattr(db, "update_coordination_event"):
+        raise HTTPException(status_code=503, detail="Database not available")
+    claimed = await db.claim_emergency_event_pending(event_id)
+    if claimed is None:
+        raise HTTPException(status_code=400, detail="Event already processed")
+    event = await db.get_coordination_event_by_id(event_id)
+    if not event or event.get("event_type") != "emergency":
+        await db.update_coordination_event(event_id, status="pending")
+        raise HTTPException(status_code=400, detail="Not an emergency event")
+    rejector = getattr(_user, "sub", None) or getattr(_user, "callsign", "api")
+    now = datetime.now(timezone.utc).isoformat()
+    await db.update_coordination_event(
+        event_id,
+        status="rejected",
+        extra_data={
+            "rejected_at": now,
+            "rejected_by": str(rejector),
+            **({"notes": body.notes} if body.notes else {}),
+        },
+    )
+    return {"ok": True, "event_id": event_id, "status": "rejected"}
diff --git a/radioshaq/radioshaq/api/routes/gis.py b/radioshaq/radioshaq/api/routes/gis.py
new file mode 100644
index 0000000..9b0e59f
--- /dev/null
+++ b/radioshaq/radioshaq/api/routes/gis.py
@@ -0,0 +1,195 @@
+"""GIS location and operators-nearby API.
+
+POST/GET /gis/location for operator location CRUD.
+GET /gis/operators-nearby for spatial query.
+GET /gis/emergency-events for emergency events with locations (for map overlays).
+"""
+
+from __future__ import annotations
+
+from datetime import datetime
+from typing import Any
+
+from fastapi import APIRouter, Depends, HTTPException, Query
+from pydantic import BaseModel, Field
+
+from radioshaq.api.dependencies import get_current_user, get_db
+from radioshaq.auth.jwt import TokenPayload
+from radioshaq.api.routes.callsigns import CALLSIGN_PATTERN
+
+router = APIRouter()
+
+# Bounds for WGS 84
+LAT_MIN, LAT_MAX = -90.0, 90.0
+LON_MIN, LON_MAX = -180.0, 180.0
+
+
+class PostLocationBody(BaseModel):
+    """Body for POST /gis/location. Provide either (latitude, longitude) or location_text (v1: text alone returns 400)."""
+
+    callsign: str = Field(..., min_length=1, description="Operator callsign")
+    latitude: float | None = Field(None, ge=LAT_MIN, le=LAT_MAX, description="Latitude (WGS 84)")
+    longitude: float | None = Field(None, ge=LON_MIN, le=LON_MAX, description="Longitude (WGS 84)")
+    location_text: str | None = Field(None, description="Free-text place (v1 strict: not used for storage alone)")
+    accuracy_meters: float | None = Field(None, ge=0)
+    altitude_meters: float | None = Field(None)
+
+
+class LocationResponse(BaseModel):
+    """Response for stored or retrieved location (explicit lat/lon, no raw geometry)."""
+
+    id: int
+    callsign: str
+    latitude: float
+    longitude: float
+    source: str
+    timestamp: str | None
+    confidence: float | None = None
+
+
+@router.post("/location", response_model=LocationResponse)
+async def post_location(
+    body: PostLocationBody,
+    db: Any = Depends(get_db),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """
+    Store operator location. v1 strict: requires latitude and longitude.
+    If only location_text is provided, returns 400 with clarification.
+    """
+    if db is None:
+        raise HTTPException(status_code=503, detail="Database not available")
+
+    callsign = body.callsign.strip().upper()
+    if not callsign:
+        raise HTTPException(status_code=400, detail="callsign is required")
+    if not CALLSIGN_PATTERN.match(callsign):
+        raise HTTPException(
+            status_code=400,
+            detail="callsign must be 3–7 alphanumeric chars, optional -digit (e.g. K5ABC or W1XYZ-1)",
+        )
+
+    lat, lon = body.latitude, body.longitude
+    if lat is not None and lon is not None:
+        # Explicit coords: store and return
+        loc = await db.store_operator_location(
+            callsign=callsign,
+            latitude=lat,
+            longitude=lon,
+            altitude_meters=body.altitude_meters,
+            accuracy_meters=body.accuracy_meters,
+            source="user_disclosed",
+        )
+        return {
+            "id": loc["id"],
+            "callsign": loc["callsign"],
+            "latitude": loc["latitude"],
+            "longitude": loc["longitude"],
+            "source": loc["source"],
+            "timestamp": loc["timestamp"],
+            "confidence": 1.0,
+        }
+    # v1 strict: only location_text → 400
+    if body.location_text and (lat is None and lon is None):
+        raise HTTPException(
+            status_code=400,
+            detail={
+                "error": "ambiguous_location",
+                "message": "Provide latitude and longitude for v1. Location text alone is not stored.",
+            },
+        )
+    raise HTTPException(
+        status_code=400,
+        detail="Provide both latitude and longitude to store location.",
+    )
+
+
+@router.get("/location/{callsign}", response_model=LocationResponse)
+async def get_location(
+    callsign: str,
+    db: Any = Depends(get_db),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """Return latest stored location for callsign (explicit lat/lon)."""
+    if db is None:
+        raise HTTPException(status_code=503, detail="Database not available")
+
+    normalized = callsign.strip().upper()
+    if not normalized:
+        raise HTTPException(status_code=400, detail="callsign is required")
+    if not CALLSIGN_PATTERN.match(normalized):
+        raise HTTPException(
+            status_code=400,
+            detail="callsign must be 3–7 alphanumeric chars, optional -digit (e.g. K5ABC or W1XYZ-1)",
+        )
+
+    loc = await db.get_latest_location_decoded(normalized)
+    if loc is None:
+        raise HTTPException(status_code=404, detail="No location found for this callsign")
+
+    return {
+        "id": loc["id"],
+        "callsign": loc["callsign"],
+        "latitude": loc["latitude"],
+        "longitude": loc["longitude"],
+        "source": loc["source"],
+        "timestamp": loc["timestamp"],
+        "confidence": None,
+    }
+
+
+@router.get("/operators-nearby")
+async def get_operators_nearby(
+    latitude: float = Query(..., ge=LAT_MIN, le=LAT_MAX),
+    longitude: float = Query(..., ge=LON_MIN, le=LON_MAX),
+    radius_meters: float = Query(50000, ge=0),
+    recent_hours: int = Query(24, ge=0),
+    max_results: int = Query(100, ge=1, le=500),
+    db: Any = Depends(get_db),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """Find operators within radius of a point (from persisted operator_locations)."""
+    if db is None:
+        raise HTTPException(status_code=503, detail="Database not available")
+
+    operators = await db.find_operators_nearby(
+        latitude=latitude,
+        longitude=longitude,
+        radius_meters=radius_meters,
+        max_results=max_results,
+        recent_only=recent_hours > 0,
+        recent_hours=recent_hours,
+    )
+    # Ensure each operator has last_seen_at for mapping clients (alias of timestamp)
+    operators_for_response = [
+        {**op, "last_seen_at": op.get("last_seen_at") or op.get("timestamp")}
+        for op in operators
+    ]
+    return {
+        "latitude": latitude,
+        "longitude": longitude,
+        "radius_meters": radius_meters,
+        "operators": operators_for_response,
+        "count": len(operators_for_response),
+    }
+
+
+@router.get("/emergency-events")
+async def get_emergency_events_with_locations(
+    since: datetime | None = Query(None, description="ISO timestamp; only events created_at >= since"),
+    status: str | None = Query(None, description="Filter by status (e.g. pending, approved)"),
+    limit: int = Query(100, ge=1, le=500),
+    db: Any = Depends(get_db),
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """Return emergency coordination events that have a location, with lat/lon for map overlays."""
+    if db is None:
+        raise HTTPException(status_code=503, detail="Database not available")
+    if not hasattr(db, "get_emergency_events_with_locations"):
+        return {"events": [], "count": 0}
+    events = await db.get_emergency_events_with_locations(
+        since=since.isoformat() if since is not None else None,
+        status=status,
+        limit=limit,
+    )
+    return {"events": events, "count": len(events)}
diff --git a/radioshaq/radioshaq/api/routes/messages.py b/radioshaq/radioshaq/api/routes/messages.py
index cca8748..211813c 100644
--- a/radioshaq/radioshaq/api/routes/messages.py
+++ b/radioshaq/radioshaq/api/routes/messages.py
@@ -1,9 +1,12 @@
 """Message and orchestrator request endpoints.
 
-Request body may include InboundMessage-compatible fields for outbound routing:
+When the MessageBus consumer is enabled (RADIOSHAQ_BUS_CONSUMER_ENABLED=1), the orchestrator's
+reply is published as an OutboundMessage. The request body can include:
 - message or text: required, content to process
-- channel: optional (e.g. whatsapp, sms, api), for future OutboundMessage routing
-- chat_id: optional, for future OutboundMessage routing
+- channel: optional (e.g. whatsapp, sms, api, radio_rx). Reply is delivered to this channel
+  via the outbound dispatcher (radio_rx -> radio_tx; sms/whatsapp -> Twilio).
+- chat_id: optional. For sms/whatsapp this should be the destination phone (E.164).
+  Preserved on the outbound message so the dispatcher sends to the correct recipient.
 - sender_id: optional, for logging/context
 """
@@ -17,6 +20,7 @@
 from fastapi import APIRouter, Depends, File, Form, HTTPException, Request, UploadFile
 from pydantic import BaseModel, Field as PydanticField
+from loguru import logger
 
 from radioshaq.api.callsign_whitelist import get_effective_allowed_callsigns, is_callsign_allowed
 from radioshaq.api.dependencies import (
@@ -27,6 +31,8 @@
     get_transcript_storage,
 )
 from radioshaq.auth.jwt import TokenPayload
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
+from radioshaq.config.schema import Config
 from radioshaq.radio.bands import BAND_PLANS
 from radioshaq.radio.injection import get_injection_queue
@@ -75,6 +81,7 @@ async def process_message(
 @router.post("/whitelist-request")
 async def whitelist_request(
     request: Request,
+    config: Config = Depends(get_config),
     user: TokenPayload = Depends(get_current_user),
     orchestrator: Any = Depends(get_orchestrator),
     radio_tx_agent: Any = Depends(get_radio_tx_agent),
@@ -152,10 +159,14 @@
                 f.write(content)
                 temp_path = f.name
             try:
-                from radioshaq.audio.asr import transcribe_audio_voxtral
-                request_text = await asyncio.to_thread(transcribe_audio_voxtral, temp_path, language="en")
-            except ImportError:
-                raise HTTPException(status_code=503, detail="ASR not available")
+                from radioshaq.audio.asr_plugin import transcribe_audio
+                asr_lang = getattr(config.audio, "asr_language", "en") or "en"
+                asr_model = getattr(config.audio, "asr_model", "voxtral") or "voxtral"
+                request_text = await asyncio.to_thread(
+                    transcribe_audio, temp_path, asr_model, language=asr_lang
+                )
+            except (ImportError, RuntimeError) as e:
+                raise HTTPException(status_code=503, detail=f"ASR not available: {e!s}")
             finally:
                 Path(temp_path).unlink(missing_ok=True)
             request_text = (request_text or "").strip()
@@ -176,8 +187,12 @@
         orchestrator_input += f" Stated callsign: {callsign}."
     result = await orchestrator.process_request(request=orchestrator_input, callsign=callsign)
 
-    if response_frequency_hz is None and response_band and response_band in BAND_PLANS:
-        plan = BAND_PLANS[response_band]
+    band_plans = get_band_plan_source_for_config(
+        config.radio.restricted_bands_region,
+        getattr(config.radio, "band_plan_region", None),
+    )
+    if response_frequency_hz is None and response_band and response_band in band_plans:
+        plan = band_plans[response_band]
         response_frequency_hz = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2
         if not response_mode:
             response_mode = (plan.modes or ["FM"])[0]
@@ -253,14 +268,16 @@
             f.write(content)
             temp_path = f.name
         try:
-            from radioshaq.audio.asr import transcribe_audio_voxtral
+            from radioshaq.audio.asr_plugin import transcribe_audio
+            asr_lang = getattr(config.audio, "asr_language", "en") or "en"
+            asr_model = getattr(config.audio, "asr_model", "voxtral") or "voxtral"
             transcript_text = await asyncio.to_thread(
-                transcribe_audio_voxtral, temp_path, language="en"
+                transcribe_audio, temp_path, asr_model, language=asr_lang
             )
-        except ImportError:
+        except (ImportError, RuntimeError) as e:
             raise HTTPException(
                 status_code=503,
-                detail="ASR not available (uv sync --extra audio)",
+                detail=f"ASR not available: {e!s}",
             )
         finally:
             Path(temp_path).unlink(missing_ok=True)
@@ -269,25 +286,36 @@
     if not transcript_text:
         raise HTTPException(status_code=400, detail="No speech detected in audio")
 
+    band_plans = get_band_plan_source_for_config(
+        config.radio.restricted_bands_region,
+        getattr(config.radio, "band_plan_region", None),
+    )
     storage = get_transcript_storage(request)
     db = getattr(request.app.state, "db", None)
    transcript_id = 0
     if storage and db:
         sid = session_id or f"from-audio-{uuid.uuid4().hex[:12]}"
         freq = frequency_hz
-        if freq <= 0 and band and band in BAND_PLANS:
-            plan = BAND_PLANS[band]
+        if freq <= 0 and band and band in band_plans:
+            plan = band_plans[band]
             freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2
-        mode_val = (BAND_PLANS[band].modes[0]) if band and band in BAND_PLANS else mode
-        transcript_id = await storage.store(
-            session_id=sid,
-            source_callsign=src,
-            frequency_hz=freq,
-            mode=mode_val,
-            transcript_text=transcript_text,
-            destination_callsign=dest,
-            metadata={"band": band, "source": "from_audio"},
-        )
+        mode_val = (band_plans[band].modes[0]) if band and band in band_plans else mode
+        try:
+            transcript_id = await storage.store(
+                session_id=sid,
+                source_callsign=src,
+                frequency_hz=freq,
+                mode=mode_val,
+                transcript_text=transcript_text,
+                destination_callsign=dest,
+                metadata={"band": band, "source": "from_audio"},
+            )
+        except Exception as e:
+            logger.exception("Failed to store transcript from /messages/from-audio: {}", e)
+            raise HTTPException(
+                status_code=503,
+                detail="Database not ready for transcripts; run alembic upgrade head and restart the API.",
+            ) from e
     if inject:
         queue = get_injection_queue()
         queue.inject_message(
@@ -349,7 +377,7 @@
     )
     transcript_id = None
     storage = get_transcript_storage(request)
-    if storage and getattr(storage, "_db", None):
+    if storage and getattr(storage, "db", None):
         transcript_id = await storage.store(
             session_id=f"inject-store-{uuid.uuid4().hex[:12]}",
             source_callsign=src,
diff --git a/radioshaq/radioshaq/api/routes/radio.py b/radioshaq/radioshaq/api/routes/radio.py
index 45bf287..1aadeb4 100644
--- a/radioshaq/radioshaq/api/routes/radio.py
+++ b/radioshaq/radioshaq/api/routes/radio.py
@@ -1,16 +1,32 @@
 """Radio and propagation endpoints."""
 
+from __future__ import annotations
+
+import asyncio
+import tempfile
+from pathlib import Path
 from typing import Any
 
-from fastapi import APIRouter, Depends, HTTPException, Query, Request
+from fastapi import APIRouter, Depends, File, HTTPException, Query, Request, UploadFile
 from pydantic import BaseModel, Field
 
-from radioshaq.api.dependencies import get_current_user, get_radio_tx_agent
+from radioshaq.api.dependencies import get_config, get_current_user, get_radio_tx_agent
 from radioshaq.auth.jwt import TokenPayload
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
+from radioshaq.config.schema import Config
 from radioshaq.database.gis import propagation_prediction
 
 router = APIRouter()
 
+# Max size for /send-audio uploads to avoid unbounded memory use (DoS).
+MAX_AUDIO_UPLOAD_BYTES = 100 * 1024 * 1024  # 100 MB
+
+
+def _write_temp_audio(tmp: Any, content: bytes) -> None:
+    """Write content to a temp file and close it.
+    Used from run_in_executor to avoid blocking."""
+    tmp.write(content)
+    tmp.close()
+
+
 class SendTTSBody(BaseModel):
     """Body for POST /radio/send-tts."""
@@ -20,6 +36,13 @@ class SendTTSBody(BaseModel):
     mode: str | None = None
 
 
+class SendAudioBody(BaseModel):
+    """Body for POST /radio/send-audio (multipart with file)."""
+
+    frequency_hz: float | None = None
+    mode: str | None = None
+
+
 @router.get("/propagation")
 async def propagation(
     lat_origin: float = Query(..., description="Origin latitude"),
@@ -34,11 +57,16 @@
 @router.get("/bands")
 async def bands(
+    config: Config = Depends(get_config),
     _user: TokenPayload = Depends(get_current_user),
 ) -> dict[str, list[str]]:
-    """List supported bands (from band plan)."""
-    from radioshaq.radio.bands import BAND_PLANS
-    return {"bands": list(BAND_PLANS.keys())}
+    """List supported bands (from effective band plan for config region)."""
+    radio = config.radio
+    plans = get_band_plan_source_for_config(
+        radio.restricted_bands_region,
+        getattr(radio, "band_plan_region", None),
+    )
+    return {"bands": list(plans.keys())}
@@ -47,26 +75,43 @@
 async def radio_status(
     _user: TokenPayload = Depends(get_current_user),
 ) -> dict[str, Any]:
     """
-    Report whether a radio (CAT rig) is connected. When connected, optionally include
-    current frequency and mode from the rig.
+    Report whether a radio (CAT rig) is connected and/or SDR TX (HackRF) is configured.
+    When CAT is connected, include current frequency and mode. For live demos, check
+    sdr_tx_available to ensure the HackRF TX path is enabled (real hardware when a device is attached).
     """
     radio_tx = get_radio_tx_agent(request)
     if not radio_tx:
-        return {"connected": False, "reason": "radio_tx_agent_not_available"}
+        return {
+            "connected": False,
+            "reason": "radio_tx_agent_not_available",
+            "sdr_tx_available": False,
+            "sdr_tx_reason": "radio_tx_agent_not_available",
+        }
+    out: dict[str, Any] = {}
     rig_manager = getattr(radio_tx, "rig_manager", None)
-    if not rig_manager or not hasattr(rig_manager, "is_connected"):
-        return {"connected": False, "reason": "rig_not_configured"}
-    connected = rig_manager.is_connected()
-    out: dict[str, Any] = {"connected": connected}
-    if connected:
-        try:
-            state = await rig_manager.get_state()
-            if state:
-                out["frequency_hz"] = state.frequency
-                out["mode"] = getattr(state.mode, "value", str(state.mode))
-                out["ptt"] = state.ptt
-        except Exception:
-            pass
+    if rig_manager and hasattr(rig_manager, "is_connected"):
+        connected = rig_manager.is_connected()
+        out["connected"] = connected
+        if connected:
+            try:
+                state = await rig_manager.get_state()
+                if state:
+                    out["frequency_hz"] = state.frequency
+                    out["mode"] = getattr(state.mode, "value", str(state.mode))
+                    out["ptt"] = state.ptt
+            except Exception:
+                pass
+    else:
+        out["connected"] = False
+        out["reason"] = "rig_not_configured"
+
+    sdr_transmitter = getattr(radio_tx, "sdr_transmitter", None)
+    if sdr_transmitter is not None:
+        out["sdr_tx_available"] = True
+        out["sdr_tx_reason"] = "configured"
+    else:
+        out["sdr_tx_available"] = False
+        out["sdr_tx_reason"] = "sdr_tx_disabled_or_unavailable"
     return out
@@ -91,5 +136,78 @@
         task["mode"] = body.mode
     result = await radio_tx.execute(task)
     if not result.get("success", False):
-        raise HTTPException(status_code=500, detail=result.get("error", "TX failed"))
+        detail = result.get("error") or result.get("notes") or "TX failed"
+        status = 500
+        # Misconfiguration (no rig, SDR TX disabled, etc.) should be surfaced as a service-unavailable error.
+        if "Rig manager not configured" in detail or "SDR TX" in detail:
+            status = 503
+        # HackRF/libusb errors from SDR TX should also be treated as transient service-unavailable conditions.
+        if "HackRF libusb error" in detail or "libusb" in detail.lower():
+            status = 503
+        raise HTTPException(status_code=status, detail=detail)
     return {"ok": True}
+
+
+@router.post("/send-audio")
+async def send_audio(
+    request: Request,
+    file: UploadFile = File(...),
+    frequency_hz: float | None = None,
+    mode: str | None = None,
+    _user: TokenPayload = Depends(get_current_user),
+) -> dict[str, Any]:
+    """Transmit an uploaded audio file over radio (CAT or SDR via radio_tx agent).
+
+    This is primarily for live demos where the client cannot reference server-local paths.
+    """
+    if not file.content_type or not (
+        file.content_type.startswith("audio/") or file.content_type == "application/octet-stream"
+    ):
+        raise HTTPException(status_code=400, detail="Expected audio file")
+    content = await file.read(MAX_AUDIO_UPLOAD_BYTES + 1)
+    if len(content) > MAX_AUDIO_UPLOAD_BYTES:
+        raise HTTPException(
+            status_code=413,
+            detail=f"Audio file too large (max {MAX_AUDIO_UPLOAD_BYTES // (1024 * 1024)} MB)",
+        )
+    if not content:
+        raise HTTPException(status_code=400, detail="Empty file")
+    radio_tx = get_radio_tx_agent(request)
+    if not radio_tx:
+        raise HTTPException(status_code=503, detail="Radio TX agent not available")
+
+    suffix = Path(file.filename or "audio.wav").suffix or ".wav"
+    tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
+    temp_path = tmp.name
+    try:
+        # Offload large file write off the event loop to avoid blocking other requests.
+ loop = asyncio.get_running_loop() + await loop.run_in_executor(None, _write_temp_audio, tmp, content) + except Exception: + tmp.close() + Path(temp_path).unlink(missing_ok=True) + raise + try: + task: dict[str, Any] = { + "transmission_type": "voice", + "message": "", + "audio_path": temp_path, + "use_tts": False, + } + if frequency_hz is not None: + task["frequency"] = frequency_hz + if mode: + task["mode"] = mode + result = await radio_tx.execute(task) + if not result.get("success", False): + detail = result.get("error") or result.get("notes") or "TX failed" + status = 500 + if "Rig manager not configured" in detail or "SDR TX" in detail: + status = 503 + # HackRF/libusb errors from SDR TX should also be treated as transient service-unavailable conditions. + if "HackRF libusb error" in detail or "libusb" in detail.lower(): + status = 503 + raise HTTPException(status_code=status, detail=detail) + return {"ok": True, "notes": result.get("notes")} + finally: + Path(temp_path).unlink(missing_ok=True) diff --git a/radioshaq/radioshaq/api/routes/relay.py b/radioshaq/radioshaq/api/routes/relay.py index af64e3a..732cd81 100644 --- a/radioshaq/radioshaq/api/routes/relay.py +++ b/radioshaq/radioshaq/api/routes/relay.py @@ -11,7 +11,9 @@ from radioshaq.api.callsign_whitelist import get_effective_allowed_callsigns, is_callsign_allowed from radioshaq.api.dependencies import get_config, get_current_user, get_radio_tx_agent, get_transcript_storage from radioshaq.auth.jwt import TokenPayload -from radioshaq.radio.bands import BAND_PLANS +from radioshaq.constants import E164_PATTERN +from radioshaq.utils.phone import normalize_e164 +from radioshaq.compliance_plugin import get_band_plan_source_for_config from radioshaq.radio.injection import get_injection_queue from radioshaq.relay.service import relay_message_between_bands_service @@ -19,19 +21,22 @@ class RelayBody(BaseModel): - """Body for POST /messages/relay (band translation).""" + """Body for POST /messages/relay (band 
translation or SMS/WhatsApp).""" message: str = Field(..., min_length=1) source_band: str = Field(...) - target_band: str = Field(...) + target_band: str | None = Field(None, description="Target band (e.g. 2m) when target_channel=radio; ignored when target_channel is sms/whatsapp") source_frequency_hz: float | None = Field(None) target_frequency_hz: float | None = Field(None) source_callsign: str = Field("UNKNOWN") destination_callsign: str | None = Field(None) session_id: str | None = Field(None) - deliver_at: str | None = Field(None, description="ISO datetime when message should be delivered on target band (optional)") + deliver_at: str | None = Field(None, description="ISO datetime when message should be delivered (optional)") source_audio_path: str | None = Field(None) target_audio_path: str | None = Field(None) + target_channel: str = Field("radio", description="Delivery channel: radio, sms, or whatsapp") + destination_phone: str | None = Field(None, description="E.164 phone for SMS/WhatsApp when target_channel is sms or whatsapp") + emergency: bool = Field(False, description="If true and target_channel is sms/whatsapp, queue for human approval (Section 9)") @router.post("/relay") @@ -61,18 +66,41 @@ async def relay_message_between_bands( msg = body.message source_band = body.source_band target_band = body.target_band - if source_band not in BAND_PLANS or target_band not in BAND_PLANS: - raise HTTPException(status_code=400, detail="Unknown band; use e.g. 
40m, 2m, 20m")
+    config = get_config(request)
+    radio = config.radio
+    band_plans = get_band_plan_source_for_config(
+        radio.restricted_bands_region,
+        getattr(radio, "band_plan_region", None),
+    )
+    target_channel = (body.target_channel or "radio").strip().lower()
+    if target_channel not in ("radio", "sms", "whatsapp"):
+        raise HTTPException(status_code=400, detail="target_channel must be radio, sms, or whatsapp")
+    destination_phone_e164: str | None = None
+    if target_channel in ("sms", "whatsapp"):
+        if not (body.destination_phone and str(body.destination_phone).strip()):
+            raise HTTPException(status_code=400, detail="destination_phone required when target_channel is sms or whatsapp")
+        destination_phone_e164 = normalize_e164(body.destination_phone or "")
+        # Guard against a falsy normalization result before regex matching.
+        if not destination_phone_e164 or not E164_PATTERN.match(destination_phone_e164):
+            raise HTTPException(status_code=400, detail="destination_phone must be E.164 (10–15 digits)")
+    if source_band not in band_plans:
+        raise HTTPException(status_code=400, detail="Unknown source_band; use e.g. 40m, 2m, 20m")
+    if target_channel == "radio":
+        if not (target_band and str(target_band).strip()):
+            raise HTTPException(status_code=400, detail="target_band required when target_channel is radio")
+        if target_band not in band_plans:
+            raise HTTPException(status_code=400, detail="Unknown target_band; use e.g. 
40m, 2m, 20m") - source_plan = BAND_PLANS[source_band] - target_plan = BAND_PLANS[target_band] + source_plan = band_plans[source_band] source_freq = body.source_frequency_hz or (source_plan.freq_start_hz + (source_plan.freq_end_hz - source_plan.freq_start_hz) / 2) - target_freq = body.target_frequency_hz or (target_plan.freq_start_hz + (target_plan.freq_end_hz - target_plan.freq_start_hz) / 2) + if target_channel == "radio": + target_plan = band_plans[target_band] + target_freq = body.target_frequency_hz or (target_plan.freq_start_hz + (target_plan.freq_end_hz - target_plan.freq_start_hz) / 2) + else: + target_freq = body.target_frequency_hz or 0.0 source_callsign = (body.source_callsign or "UNKNOWN").upper() destination_callsign = (body.destination_callsign or "").upper() or None session_id = body.session_id or f"relay-{uuid.uuid4().hex[:12]}" - config = get_config(request) allowed = await get_effective_allowed_callsigns(getattr(request.app.state, "db", None), config.radio) if not is_callsign_allowed(source_callsign, allowed, config.radio.callsign_registry_required): raise HTTPException(status_code=403, detail="Source callsign not allowed") @@ -82,10 +110,12 @@ async def relay_message_between_bands( storage = get_transcript_storage(request) queue = get_injection_queue() radio_tx = get_radio_tx_agent(request) + message_bus = getattr(request.app.state, "message_bus", None) + target_band_val = (target_band if target_channel == "radio" else target_channel) or target_channel result = await relay_message_between_bands_service( message=msg, source_band=source_band, - target_band=target_band, + target_band=target_band_val, source_frequency_hz=source_freq, target_frequency_hz=target_freq, source_callsign=source_callsign, @@ -95,13 +125,24 @@ async def relay_message_between_bands( storage=storage, injection_queue=queue, radio_tx_agent=radio_tx, - config=config.radio, + config=config, source_audio_path=body.source_audio_path, target_audio_path=body.target_audio_path, 
store_only_relayed=getattr(config.radio, "relay_store_only_relayed", False), + target_channel=target_channel, + destination_phone=destination_phone_e164 if target_channel in ("sms", "whatsapp") else (body.destination_phone or "").strip() or None, + emergency=body.emergency, + message_bus=message_bus, ) if not result.get("ok"): raise HTTPException(status_code=400, detail=result.get("error", "Relay failed")) + if result.get("queued_for_approval"): + return { + "ok": result["ok"], + "queued_for_approval": True, + "event_id": result.get("event_id"), + "target_channel": result.get("target_channel", "radio"), + } if result.get("relay") == "no_storage": return result return { @@ -114,4 +155,5 @@ async def relay_message_between_bands( "target_frequency_hz": result["target_frequency_hz"], "session_id": result["session_id"], "deliver_at": result.get("deliver_at"), + "target_channel": result.get("target_channel", "radio"), } diff --git a/radioshaq/radioshaq/api/routes/transcripts.py b/radioshaq/radioshaq/api/routes/transcripts.py index 2802577..79378e6 100644 --- a/radioshaq/radioshaq/api/routes/transcripts.py +++ b/radioshaq/radioshaq/api/routes/transcripts.py @@ -56,7 +56,11 @@ async def search_transcripts( out = list(results) config = get_config(request) allowed = await get_effective_allowed_callsigns(db, config.radio) + # When whitelist is applied, include the authenticated user's callsign so they can see their own transcripts if allowed: + user_callsign = (getattr(user, "station_id", None) or getattr(user, "sub", None) or "").strip().upper() + if user_callsign: + allowed = set(allowed) | {user_callsign} out = [ t for t in out @@ -101,13 +105,26 @@ async def play_transcript_over_radio( if not text: raise HTTPException(status_code=400, detail="Transcript has no text") try: - from radioshaq.audio.tts import text_to_speech_elevenlabs + from radioshaq.audio.tts_plugin import synthesize_speech except ImportError: - raise HTTPException(status_code=503, detail="TTS not available 
(ElevenLabs)") - with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f: + raise HTTPException(status_code=503, detail="TTS plugin not available") + config = get_config(request) + tts_cfg = getattr(config, "tts", None) + provider = getattr(tts_cfg, "provider", "elevenlabs") if tts_cfg else "elevenlabs" + suffix = ".wav" if provider == "kokoro" else ".mp3" + with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as f: temp_path = f.name try: - text_to_speech_elevenlabs(text, output_path=temp_path) + kwargs: dict[str, Any] = {} + if tts_cfg and provider == "elevenlabs": + kwargs["voice"] = getattr(tts_cfg, "elevenlabs_voice_id", None) + kwargs["model_id"] = getattr(tts_cfg, "elevenlabs_model_id", None) + kwargs["output_format"] = getattr(tts_cfg, "elevenlabs_output_format", None) + elif tts_cfg and provider == "kokoro": + kwargs["voice"] = getattr(tts_cfg, "kokoro_voice", None) + kwargs["lang_code"] = getattr(tts_cfg, "kokoro_lang_code", None) + kwargs["speed"] = getattr(tts_cfg, "kokoro_speed", None) + synthesize_speech(text, provider, output_path=temp_path, **kwargs) task = { "transmission_type": "voice", "message": text, @@ -117,6 +134,11 @@ async def play_transcript_over_radio( result = await radio_tx.execute(task) if not result.get("success", False): raise HTTPException(status_code=500, detail=result.get("error", "TX failed")) + except RuntimeError as e: + raise HTTPException( + status_code=503, + detail=f"TTS unavailable or synthesis failed: {e}", + ) from e finally: Path(temp_path).unlink(missing_ok=True) return {"ok": True, "transcript_id": transcript_id} diff --git a/radioshaq/radioshaq/api/routes/twilio.py b/radioshaq/radioshaq/api/routes/twilio.py new file mode 100644 index 0000000..646cf04 --- /dev/null +++ b/radioshaq/radioshaq/api/routes/twilio.py @@ -0,0 +1,185 @@ +"""Twilio webhook handlers (SMS + WhatsApp). + +Twilio delivers inbound messages as application/x-www-form-urlencoded. 
We validate the webhook +signature when `config.twilio.auth_token` is configured, then publish an InboundMessage to the +MessageBus so the orchestrator can handle it. +""" + +from __future__ import annotations + +from datetime import datetime, timezone +from typing import Any + +from fastapi import APIRouter, HTTPException, Request, Response +from loguru import logger + +from radioshaq.utils.phone import normalize_e164 +from radioshaq.vendor.nanobot.bus.events import InboundMessage + +router = APIRouter() + +_OPTOUT_KEYWORDS = {"STOP", "STOPALL", "UNSUBSCRIBE", "CANCEL", "END", "QUIT"} + + +def _normalized_channel_from_form(explicit_channel: str, form: dict[str, Any]) -> str: + frm = str(form.get("From", "") or "") + if frm.startswith("whatsapp:"): + return "whatsapp" + return explicit_channel + + +def _normalize_from_phone(form: dict[str, Any]) -> str: + frm = str(form.get("From", "") or "") + if frm.startswith("whatsapp:"): + frm = frm[len("whatsapp:") :] + return normalize_e164(frm) or frm.strip() + + +def _collect_media_urls(form: dict[str, Any]) -> list[str]: + urls: list[str] = [] + try: + n = int(str(form.get("NumMedia", "0") or "0")) + except ValueError: + n = 0 + for i in range(max(0, n)): + u = form.get(f"MediaUrl{i}") + if u: + urls.append(str(u)) + return urls + + +def _twilio_request_url_for_signature(request: Request) -> str: + # Twilio signs the full URL it requested. When behind a proxy, starlette's request.url + # may reflect internal scheme/host, so we honor common X-Forwarded-* headers. 
+    proto = (request.headers.get("x-forwarded-proto") or "").split(",")[0].strip()
+    host = (request.headers.get("x-forwarded-host") or "").split(",")[0].strip()
+    if not proto and request.headers.get("x-forwarded-ssl", "").lower() == "on":
+        proto = "https"
+    if not proto:
+        proto = request.url.scheme
+    if not host:
+        host = request.url.netloc
+    url = f"{proto}://{host}{request.url.path}"
+    # Twilio signs the full URL including any query string, so append it when present.
+    if request.url.query:
+        url = f"{url}?{request.url.query}"
+    return url
+
+
+def _validate_twilio_signature_if_configured(
+    request: Request, form: dict[str, Any], channel: str
+) -> bool:
+    cfg = getattr(request.app.state, "config", None)
+    twilio_cfg = getattr(cfg, "twilio", None) if cfg else None
+    auth_token = getattr(twilio_cfg, "auth_token", None) if twilio_cfg else None
+    allow_unsigned = getattr(twilio_cfg, "allow_unsigned_webhooks", False) if twilio_cfg else False
+    signature = request.headers.get("x-twilio-signature")
+
+    if not auth_token:
+        if not allow_unsigned:
+            logger.error(
+                "Twilio webhook misconfigured: twilio.auth_token is required (channel={})",
+                channel,
+            )
+            raise HTTPException(
+                status_code=503,
+                detail="Twilio webhook misconfigured: twilio.auth_token is required",
+            )
+        logger.warning(
+            "Twilio webhook accepted without signature validation (allow_unsigned_webhooks=True, channel={})",
+            channel,
+        )
+        return False
+    if not signature:
+        raise HTTPException(status_code=403, detail="Missing X-Twilio-Signature")
+
+    try:
+        from twilio.request_validator import RequestValidator
+    except Exception as e:  # pragma: no cover
+        # If auth_token is configured, we MUST validate — missing SDK is a hard error.
+        raise HTTPException(
+            status_code=503,
+            detail="Twilio SDK not installed; cannot validate webhook signature",
+        ) from e
+
+    url = _twilio_request_url_for_signature(request)
+    # Twilio validator expects a plain dict of str->str values. 
+ params = {k: str(v) for k, v in form.items()} + ok = RequestValidator(str(auth_token)).validate(url, params, signature) + if not ok: + raise HTTPException(status_code=403, detail="Invalid Twilio signature") + return True + + +def _twiml_response(message: str | None = None) -> Response: + try: + from twilio.twiml.messaging_response import MessagingResponse + except Exception: + # Fallback: Twilio accepts empty 200s; but TwiML is preferred. + return Response(content="", media_type="text/plain") + + r = MessagingResponse() + if message: + r.message(message) + return Response(content=str(r), media_type="application/xml") + + +async def _handle_inbound(request: Request, explicit_channel: str) -> Response: + bus = getattr(request.app.state, "message_bus", None) + if not bus: + raise HTTPException(status_code=503, detail="Message bus not available") + + form_obj = await request.form() + form: dict[str, Any] = dict(form_obj) + + channel = _normalized_channel_from_form(explicit_channel, form) + signature_validated = _validate_twilio_signature_if_configured(request, form, channel) + + from_phone = _normalize_from_phone(form) + body = str(form.get("Body", "") or "").strip() + msg_sid = str(form.get("MessageSid", "") or "").strip() or None + to_val = str(form.get("To", "") or "").strip() or None + + media_urls = _collect_media_urls(form) + metadata: dict[str, Any] = { + "provider": "twilio", + "twilio_message_sid": msg_sid, + "to": to_val, + "from_raw": str(form.get("From", "") or ""), + "signature_validated": signature_validated, + } + + # Opt-out handling (STOP): record directly if DB available, then acknowledge. 
+ if body.upper() in _OPTOUT_KEYWORDS: + db = getattr(request.app.state, "db", None) + if db is not None and hasattr(db, "record_opt_out_by_phone"): + try: + await db.record_opt_out_by_phone(from_phone, channel) + except Exception as e: + logger.warning("Opt-out record failed (channel={} phone={}): {}", channel, from_phone, e) + return _twiml_response("You have been opted out. Reply START to re-subscribe.") + + inbound = InboundMessage( + channel=channel, + sender_id=from_phone, + chat_id=from_phone, + content=body, + timestamp=datetime.now(timezone.utc), + media=media_urls, + metadata=metadata, + ) + ok = await bus.publish_inbound(inbound) + if not ok: + raise HTTPException(status_code=507, detail="Inbound queue full") + + # Let the orchestrator respond via outbound dispatcher (no immediate auto-reply). + return _twiml_response(None) + + +@router.post("/sms") +async def twilio_sms_webhook(request: Request) -> Response: + """Inbound SMS webhook from Twilio.""" + return await _handle_inbound(request, explicit_channel="sms") + + +@router.post("/whatsapp") +async def twilio_whatsapp_webhook(request: Request) -> Response: + """Inbound WhatsApp webhook from Twilio.""" + return await _handle_inbound(request, explicit_channel="whatsapp") + diff --git a/radioshaq/radioshaq/api/server.py b/radioshaq/radioshaq/api/server.py index 82a6ef2..2480806 100644 --- a/radioshaq/radioshaq/api/server.py +++ b/radioshaq/radioshaq/api/server.py @@ -44,7 +44,7 @@ async def lifespan(app: FastAPI): _cron_task = None try: - if config.database.postgres_url and "localhost" in config.database.postgres_url: + if config.database.postgres_url: try: from radioshaq.database.postgres_gis import PostGISManager app.state.db = PostGISManager(config.database.postgres_url) @@ -72,7 +72,7 @@ async def lifespan(app: FastAPI): ) except Exception as e: from loguru import logger - logger.warning("Memory manager or cron not started: %s", e) + logger.warning("Memory manager or cron not started: {}", e) from 
radioshaq.orchestrator.factory import create_orchestrator, create_tool_registry from radioshaq.vendor.nanobot.bus.queue import MessageBus @@ -83,7 +83,7 @@ async def lifespan(app: FastAPI): try: app.state.tool_registry = create_tool_registry(config, db=getattr(app.state, "db", None), app=app) except Exception as e: - logger.warning("Tool registry not created: %s", e) + logger.warning("Tool registry not created: {}", e) try: app.state.orchestrator = create_orchestrator( config, @@ -94,17 +94,20 @@ async def lifespan(app: FastAPI): tool_registry=getattr(app.state, "tool_registry", None), ) app.state.agent_registry = getattr(app.state.orchestrator, "agent_registry", None) + rx_audio = app.state.agent_registry.get_agent("radio_rx_audio") if app.state.agent_registry else None + if rx_audio and hasattr(rx_audio, "set_metrics_callback"): + rx_audio.set_metrics_callback(lambda d: setattr(app.state, "audio_metrics_latest", d)) except Exception as e: - logger.warning("Orchestrator not created (messages/process will be unavailable): %s", e) + logger.warning("Orchestrator not created (messages/process will be unavailable): {}", e) - # Optional: run MessageBus inbound consumer and outbound radio handler (set RADIOSHAQ_BUS_CONSUMER_ENABLED=1) + # Optional: run MessageBus inbound consumer and single outbound dispatcher (radio_rx, sms, whatsapp) _consumer_task = None _outbound_radio_task = None _outbound_radio_stop = None if _bus_consumer_enabled: if getattr(app.state, "orchestrator", None) and getattr(app.state, "message_bus", None): from radioshaq.orchestrator.bridge import run_inbound_consumer - from radioshaq.orchestrator.outbound_radio import run_outbound_radio_handler + from radioshaq.orchestrator.outbound_dispatcher import run_outbound_handler _stop_event = asyncio.Event() _consumer_task = asyncio.create_task( run_inbound_consumer( @@ -117,19 +120,19 @@ async def lifespan(app: FastAPI): app.state._bus_consumer_stop = _stop_event app.state._bus_consumer_task = _consumer_task 
logger.info("MessageBus inbound consumer started") - radio_tx = app.state.agent_registry.get_agent("radio_tx") if getattr(app.state, "agent_registry", None) else None _outbound_radio_stop = asyncio.Event() _outbound_radio_task = asyncio.create_task( - run_outbound_radio_handler( + run_outbound_handler( app.state.message_bus, - radio_tx, config, + getattr(app.state, "agent_registry", None), + getattr(app.state, "db", None), stop_event=_outbound_radio_stop, ) ) app.state._outbound_radio_stop = _outbound_radio_stop app.state._outbound_radio_task = _outbound_radio_task - logger.info("Outbound radio handler started") + logger.info("Outbound handler started (radio_rx, sms, whatsapp)") # Optional: multi-band listener (listen_bands or default_band + listener_enabled) _listener_task = None @@ -161,7 +164,7 @@ async def lifespan(app: FastAPI): ) app.state._band_listener_stop = _listener_stop app.state._band_listener_task = _listener_task - logger.info("Band listener started for bands: %s", bands) + logger.info("Band listener started for bands: {}", bands) # Optional: voice listener (audio_input_enabled + voice_listener_enabled) _voice_listener_task = None @@ -204,6 +207,7 @@ async def lifespan(app: FastAPI): stop_event=_relay_delivery_stop, interval_seconds=60.0, radio_tx_agent=radio_tx, + message_bus=getattr(app.state, "message_bus", None), ) ) app.state._relay_delivery_stop = _relay_delivery_stop @@ -275,19 +279,22 @@ def create_app() -> FastAPI: lifespan=lifespan, ) - from radioshaq.api.routes import auth, audio, bus, callsigns, config_routes, health, inject, memory, messages, metrics, radio, receiver, relay, transcripts + from radioshaq.api.routes import auth, audio, bus, callsigns, config_routes, emergency, gis, health, inject, memory, messages, metrics, radio, receiver, relay, transcripts, twilio app.include_router(health.router, prefix="/health", tags=["health"]) app.include_router(metrics.metrics_router, prefix="/metrics", tags=["metrics"]) 
app.include_router(auth.router, prefix="/auth", tags=["auth"]) app.include_router(radio.router, prefix="/radio", tags=["radio"]) + app.include_router(gis.router, prefix="/gis", tags=["gis"]) app.include_router(memory.router, prefix="/memory", tags=["memory"]) app.include_router(messages.router, prefix="/messages", tags=["messages"]) app.include_router(relay.router, prefix="/messages", tags=["messages"]) app.include_router(transcripts.router, prefix="/transcripts", tags=["transcripts"]) app.include_router(callsigns.router, prefix="/callsigns", tags=["callsigns"]) + app.include_router(emergency.router, prefix="/emergency", tags=["emergency"]) app.include_router(inject.router, prefix="/inject", tags=["inject"]) app.include_router(receiver.router, prefix="/receiver", tags=["receiver"]) app.include_router(bus.router, prefix="/internal", tags=["internal"]) + app.include_router(twilio.router, prefix="/twilio", tags=["twilio"]) app.include_router(audio.router, prefix="/api/v1") app.include_router(config_routes.router, prefix="/api/v1") app.include_router(audio.ws_router, prefix="/ws") diff --git a/radioshaq/radioshaq/audio/__init__.py b/radioshaq/radioshaq/audio/__init__.py index c22af01..7429f7e 100644 --- a/radioshaq/radioshaq/audio/__init__.py +++ b/radioshaq/radioshaq/audio/__init__.py @@ -1,10 +1,12 @@ -"""Audio ASR (Voxtral), TTS (ElevenLabs), capture and stream processing.""" +"""Audio ASR (Voxtral, Whisper, Scribe), TTS (ElevenLabs, Kokoro), capture and stream processing.""" from __future__ import annotations __all__ = [ "transcribe_audio_voxtral", "text_to_speech_elevenlabs", + "synthesize_speech", + "transcribe_audio", "AudioCaptureService", "AudioStreamProcessor", "ProcessedSegment", @@ -18,6 +20,12 @@ def __getattr__(name: str): if name == "text_to_speech_elevenlabs": from radioshaq.audio.tts import text_to_speech_elevenlabs return text_to_speech_elevenlabs + if name == "synthesize_speech": + from radioshaq.audio.tts_plugin import synthesize_speech + return 
synthesize_speech + if name == "transcribe_audio": + from radioshaq.audio.asr_plugin import transcribe_audio + return transcribe_audio if name == "AudioCaptureService": from radioshaq.audio.capture import AudioCaptureService return AudioCaptureService diff --git a/radioshaq/radioshaq/audio/asr.py b/radioshaq/radioshaq/audio/asr.py index 1a5c2f9..4a5cbbb 100644 --- a/radioshaq/radioshaq/audio/asr.py +++ b/radioshaq/radioshaq/audio/asr.py @@ -1,69 +1,22 @@ -"""ASR using shakods/voxtral-asr-en (Voxtral fine-tune) via transformers.""" +"""ASR: Voxtral, Whisper, Scribe (via asr_plugin).""" from __future__ import annotations from pathlib import Path +from radioshaq.audio.asr_plugin import transcribe_audio + +VOXTRAL_ASR_HF_MODEL_ID = "shakods/voxtral-asr-en" + def transcribe_audio_voxtral( audio_path: str | Path, - model_id: str = "shakods/voxtral-asr-en", + model_id: str = VOXTRAL_ASR_HF_MODEL_ID, language: str = "en", ) -> str: """ - Transcribe audio file using Voxtral ASR (base: mistralai/Voxtral-Mini-3B-2507). - - When model_id is "shakods/voxtral-asr-en", loads the PEFT adapter on top of - the base Voxtral model for English ASR. + Transcribe audio file using Voxtral ASR (via ASR plugin). - Requires: transformers, peft, accelerate, torch, mistral-common[audio] - (install with: uv sync --extra audio) + Requires: uv sync --extra audio. """ - path = Path(audio_path) - if not path.exists(): - raise FileNotFoundError(str(audio_path)) - - try: - import torch - from transformers import AutoProcessor, VoxtralForConditionalGeneration - except ImportError as e: - raise RuntimeError( - "Install ASR deps: uv sync --extra audio. Requires transformers, torch, mistral-common[audio]." 
- ) from e - - base_id = "mistralai/Voxtral-Mini-3B-2507" - use_peft = model_id.strip().lower() == "shakods/voxtral-asr-en" - - device_map = "auto" # uses GPU if available, else CPU - processor = AutoProcessor.from_pretrained(base_id) - model = VoxtralForConditionalGeneration.from_pretrained( - base_id, - torch_dtype=torch.bfloat16, - device_map=device_map, - ) - - if use_peft: - try: - from peft import PeftModel - model = PeftModel.from_pretrained(model, "shakods/voxtral-asr-en") - model.eval() - except ImportError: - pass # run without adapter - except Exception: - pass # adapter load failed, use base - - # apply_transcription_request(language=..., audio=path, model_id=...) for transcription - inputs = processor.apply_transcription_request( - language=language, - audio=str(path), - model_id=base_id, - ) - inputs = inputs.to(model.device, dtype=torch.bfloat16) - - with torch.no_grad(): - outputs = model.generate(**inputs, max_new_tokens=500) - decoded = processor.batch_decode( - outputs[:, inputs.input_ids.shape[1] :], - skip_special_tokens=True, - ) - return (decoded[0] if decoded else "").strip() + return transcribe_audio(audio_path, model_id=model_id, language=language) diff --git a/radioshaq/radioshaq/audio/asr_plugin/__init__.py b/radioshaq/radioshaq/audio/asr_plugin/__init__.py new file mode 100644 index 0000000..3792686 --- /dev/null +++ b/radioshaq/radioshaq/audio/asr_plugin/__init__.py @@ -0,0 +1,80 @@ +"""ASR plugin: registry of backends (Voxtral, Whisper, Scribe) and transcribe_audio entry point.""" + +from __future__ import annotations + +from pathlib import Path + +from radioshaq.audio.asr_plugin.base import ASRBackend + +_backends: dict[str, ASRBackend] = {} + + +def register_asr_backend(model_id: str, backend: ASRBackend) -> None: + """Register an ASR backend (e.g. 
'voxtral', 'whisper', 'scribe')."""
+    _backends[model_id] = backend
+
+
+def get_asr_backend(model_id: str) -> ASRBackend | None:
+    """Return the backend for the given model_id, or None if not registered."""
+    return _backends.get(model_id)
+
+
+def _is_voxtral_like_model_id(model_id: str) -> bool:
+    """True if model_id looks like a Voxtral HF repo (route to voxtral backend)."""
+    if not model_id or not model_id.strip():
+        return False
+    s = model_id.strip().lower()
+    # "voxtral" in s already covers s == "voxtral" and "mistralai/voxtral*" repos.
+    return "voxtral" in s or s.startswith("shakods/")
+
+
+def transcribe_audio(
+    audio_path: str | Path,
+    model_id: str = "voxtral",
+    *,
+    language: str | None = None,
+    **kwargs: object,
+) -> str:
+    """Transcribe audio using the configured backend. Raises if backend not found or transcription fails."""
+    if model_id in _backends:
+        backend_key = model_id
+    elif model_id in ("scribe_v1", "scribe_v2"):
+        # ElevenLabs API model names route to the "scribe" backend.
+        backend_key = "scribe"
+    elif _backends.get("voxtral") and _is_voxtral_like_model_id(model_id):
+        backend_key = "voxtral"
+    else:
+        backend_key = model_id
+    backend = _backends.get(backend_key)
+    if backend is None:
+        raise RuntimeError(
+            f"ASR backend {model_id!r} not available. "
+            "For voxtral/whisper run: uv sync --extra audio. For scribe set ELEVENLABS_API_KEY."
+        )
+    # Scribe expects ElevenLabs API model name (scribe_v1/scribe_v2), not the routing key "scribe". 
+ if backend_key == "scribe": + backend_kwargs = {**kwargs, "scribe_model_id": model_id if model_id in ("scribe_v1", "scribe_v2") else "scribe_v2"} + return backend.transcribe(audio_path, language=language, **backend_kwargs) + return backend.transcribe( + audio_path, language=language, model_id=model_id, **kwargs + ) + + +def _register_backends() -> None: + try: + from radioshaq.audio.asr_plugin.backends.voxtral import VoxtralASRBackend + register_asr_backend("voxtral", VoxtralASRBackend()) + except ImportError: + pass + try: + from radioshaq.audio.asr_plugin.backends.whisper import WhisperASRBackend + register_asr_backend("whisper", WhisperASRBackend()) + except ImportError: + pass + from radioshaq.audio.asr_plugin.backends.scribe import ScribeASRBackend + register_asr_backend("scribe", ScribeASRBackend()) + + +_register_backends() + +__all__ = [ + "ASRBackend", + "get_asr_backend", + "register_asr_backend", + "transcribe_audio", +] diff --git a/radioshaq/radioshaq/audio/asr_plugin/backends/__init__.py b/radioshaq/radioshaq/audio/asr_plugin/backends/__init__.py new file mode 100644 index 0000000..cd4a990 --- /dev/null +++ b/radioshaq/radioshaq/audio/asr_plugin/backends/__init__.py @@ -0,0 +1,16 @@ +"""ASR backends: Voxtral, Whisper (local), Scribe (ElevenLabs API).""" + +from radioshaq.audio.asr_plugin.backends.scribe import ScribeASRBackend + +__all__ = ["ScribeASRBackend"] + +try: + from radioshaq.audio.asr_plugin.backends.voxtral import VoxtralASRBackend + __all__ = list(__all__) + ["VoxtralASRBackend"] +except ImportError: + pass +try: + from radioshaq.audio.asr_plugin.backends.whisper import WhisperASRBackend + __all__ = list(__all__) + ["WhisperASRBackend"] +except ImportError: + pass diff --git a/radioshaq/radioshaq/audio/asr_plugin/backends/scribe.py b/radioshaq/radioshaq/audio/asr_plugin/backends/scribe.py new file mode 100644 index 0000000..846a6d6 --- /dev/null +++ b/radioshaq/radioshaq/audio/asr_plugin/backends/scribe.py @@ -0,0 +1,114 @@ 
+"""ElevenLabs Scribe API ASR backend. Requires ELEVENLABS_API_KEY.""" + +from __future__ import annotations + +import logging +import os +from pathlib import Path + + +logger = logging.getLogger(__name__) + + +class ScribeASRBackend: + """Transcribe using ElevenLabs Scribe (Speech-to-Text) API. + + Optionally runs ElevenLabs Voice Isolator (audio-isolation) before STT when + use_voice_isolator=True is passed or RADIOSHAQ_AUDIO__ELEVEN_VOICE_ISOLATOR_ENABLED + (or ELEVEN_VOICE_ISOLATOR_ENABLED) is set to a truthy value. + """ + + def _voice_isolator_enabled(self, kwargs: dict[str, object]) -> bool: + """Return True when ElevenLabs Voice Isolator should be used.""" + flag = kwargs.get("use_voice_isolator") + if isinstance(flag, bool): + return flag + env_val = os.environ.get("RADIOSHAQ_AUDIO__ELEVEN_VOICE_ISOLATOR_ENABLED") or os.environ.get( + "ELEVEN_VOICE_ISOLATOR_ENABLED" + ) + if not env_val: + return False + return env_val.strip().lower() in ("1", "true", "yes", "on") + + def transcribe( + self, + audio_path: str | Path, + *, + language: str | None = None, + **kwargs: object, + ) -> str: + import httpx + + path = Path(audio_path) + if not path.exists(): + raise FileNotFoundError(str(audio_path)) + + # kwargs is typed as object variadically; cast to dict internally. + kw = dict(kwargs) # type: ignore[arg-type] + + api_key = kw.get("api_key") or os.environ.get("ELEVENLABS_API_KEY") + if not api_key: + raise RuntimeError( + "Set ELEVENLABS_API_KEY or pass api_key= to use Scribe ASR." + ) + + # Use scribe_model_id so the plugin does not send the routing key "scribe" as the API model. + api_model_id = kw.get("scribe_model_id") or "scribe_v2" + stt_url = "https://api.elevenlabs.io/v1/speech-to-text" + headers = {"xi-api-key": api_key} + + # Optional: run ElevenLabs Voice Isolator (audio-isolation) first to denoise. 
+        cleaned_audio: bytes | None = None
+        if self._voice_isolator_enabled(kw):
+            iso_url = "https://api.elevenlabs.io/v1/audio-isolation"
+            try:
+                raw_bytes = path.read_bytes()
+                iso_files = {"audio": (path.name, raw_bytes, "audio/wav")}
+                iso_data = {"file_format": "other"}
+                with httpx.Client(timeout=120.0) as client:
+                    iso_resp = client.post(
+                        iso_url,
+                        files=iso_files,
+                        data=iso_data,
+                        headers=headers,
+                    )
+                    iso_resp.raise_for_status()
+                    cleaned_audio = iso_resp.content
+                logger.debug(
+                    "ElevenLabs Voice Isolator applied for %s (bytes=%s)",
+                    path,
+                    len(cleaned_audio),
+                )
+            except Exception as e:  # noqa: BLE001
+                logger.warning(
+                    "Voice Isolator failed for %s, using raw audio: %s",
+                    path,
+                    e,
+                )
+                cleaned_audio = None
+
+        data = {"model_id": api_model_id}
+        if language and language.lower() != "auto":
+            data["language_code"] = language
+
+        # Send either the isolated audio or the raw file bytes. Passing bytes
+        # avoids juggling an open file handle across the request.
+        payload = cleaned_audio if cleaned_audio is not None else path.read_bytes()
+        files = {"file": (path.name, payload, "audio/wav")}
+        with httpx.Client(timeout=120.0) as client:
+            r = client.post(stt_url, files=files, data=data, headers=headers)
+            r.raise_for_status()
+            out = r.json()
+
+        text = out.get("text") if isinstance(out, dict) else None
+        return (text or "").strip()
diff --git a/radioshaq/radioshaq/audio/asr_plugin/backends/voxtral.py b/radioshaq/radioshaq/audio/asr_plugin/backends/voxtral.py
new file mode 100644
index 0000000..fb6a167
--- /dev/null
+++ b/radioshaq/radioshaq/audio/asr_plugin/backends/voxtral.py
@@ -0,0 +1,112 @@
+"""Voxtral ASR backend (shakods/voxtral-asr-en). 
Requires: uv sync --extra audio.""" + +from __future__ import annotations + +from pathlib import Path + +from radioshaq.constants import ASR_LANGUAGE_AUTO + +VOXTRAL_ASR_BASE_ID = "mistralai/Voxtral-Mini-3B-2507" +VOXTRAL_ASR_HF_MODEL_ID = "shakods/voxtral-asr-en" + + +class VoxtralASRBackend: + """Transcribe using Voxtral (base: mistralai/Voxtral-Mini-3B-2507) with optional PEFT adapter.""" + + def __init__(self) -> None: + self._processor: object | None = None + self._model: object | None = None + self._peft_model: object | None = None + + def _load_base(self) -> tuple[object, object]: + """Load base processor and model once; cache on instance.""" + if self._model is not None: + assert self._processor is not None + return self._processor, self._model + try: + import torch + from transformers import AutoProcessor, VoxtralForConditionalGeneration + except ImportError as e: + raise RuntimeError( + "Install ASR deps: uv sync --extra audio. Requires transformers, torch, mistral-common[audio]." + ) from e + self._processor = AutoProcessor.from_pretrained(VOXTRAL_ASR_BASE_ID) + self._model = VoxtralForConditionalGeneration.from_pretrained( + VOXTRAL_ASR_BASE_ID, + torch_dtype=torch.bfloat16, + device_map="auto", + ) + return self._processor, self._model + + def transcribe( + self, + audio_path: str | Path, + *, + language: str | None = None, + **kwargs: object, + ) -> str: + path = Path(audio_path) + if not path.exists(): + raise FileNotFoundError(str(audio_path)) + + import torch + + # Routing key "voxtral" means use default RadioShaq PEFT model; treat as VOXTRAL_ASR_HF_MODEL_ID. 
+ raw_model_id = kwargs.get("model_id") + model_id = ( + VOXTRAL_ASR_HF_MODEL_ID + if not raw_model_id or str(raw_model_id).strip().lower() == "voxtral" + else raw_model_id + ) + base_id = VOXTRAL_ASR_BASE_ID + lang_normalized = (language or "").strip().lower() + use_peft = ( + str(model_id).strip().lower() == VOXTRAL_ASR_HF_MODEL_ID.lower() + and lang_normalized == "en" + ) + + processor, base_model = self._load_base() + + if use_peft: + if self._peft_model is not None: + model = self._peft_model + else: + try: + from peft import PeftModel + self._peft_model = PeftModel.from_pretrained( + base_model, VOXTRAL_ASR_HF_MODEL_ID + ) + self._peft_model.eval() + model = self._peft_model + except ImportError: + model = base_model # PEFT not installed; run with base model + except Exception as e: + import warnings + warnings.warn( + f"PEFT adapter load failed, using base model: {e}", + stacklevel=2, + ) + model = base_model + else: + model = base_model + + if lang_normalized == ASR_LANGUAGE_AUTO: + inputs = processor.apply_transcription_request( + audio=str(path), + model_id=base_id, + ) + else: + inputs = processor.apply_transcription_request( + language=language or "en", + audio=str(path), + model_id=base_id, + ) + inputs = inputs.to(model.device, dtype=torch.bfloat16) + + with torch.no_grad(): + outputs = model.generate(**inputs, max_new_tokens=500) + decoded = processor.batch_decode( + outputs[:, inputs.input_ids.shape[1] :], + skip_special_tokens=True, + ) + return (decoded[0] if decoded else "").strip() diff --git a/radioshaq/radioshaq/audio/asr_plugin/backends/whisper.py b/radioshaq/radioshaq/audio/asr_plugin/backends/whisper.py new file mode 100644 index 0000000..a7cd8e8 --- /dev/null +++ b/radioshaq/radioshaq/audio/asr_plugin/backends/whisper.py @@ -0,0 +1,38 @@ +"""Whisper ASR backend. 
Requires: uv sync --extra audio (or pip install openai-whisper).""" + +from __future__ import annotations + +from pathlib import Path + + +class WhisperASRBackend: + """Transcribe using OpenAI Whisper (local).""" + + def __init__(self) -> None: + self._model: object | None = None + + def _get_model(self) -> object: + if self._model is None: + try: + import whisper + self._model = whisper.load_model("base") + except ImportError as e: + raise RuntimeError( + "Whisper ASR requires: uv sync --extra audio (or pip install openai-whisper)" + ) from e + return self._model + + def transcribe( + self, + audio_path: str | Path, + *, + language: str | None = None, + **kwargs: object, + ) -> str: + path = Path(audio_path) + if not path.exists(): + raise FileNotFoundError(str(audio_path)) + model = self._get_model() + lang_arg = language if language and str(language).strip().lower() != "auto" else None + result = model.transcribe(str(path), fp16=False, language=lang_arg) + return (result.get("text") or "").strip() diff --git a/radioshaq/radioshaq/audio/asr_plugin/base.py b/radioshaq/radioshaq/audio/asr_plugin/base.py new file mode 100644 index 0000000..6600171 --- /dev/null +++ b/radioshaq/radioshaq/audio/asr_plugin/base.py @@ -0,0 +1,29 @@ +"""ASR backend protocol: pluggable speech-to-text providers (Voxtral, Whisper, Scribe).""" + +from __future__ import annotations + +from pathlib import Path +from typing import Protocol + + +class ASRBackend(Protocol): + """Provides speech-to-text transcription. Implementations: Voxtral, Whisper (local), Scribe (API).""" + + def transcribe( + self, + audio_path: str | Path, + *, + language: str | None = None, + **kwargs: object, + ) -> str: + """Transcribe audio file to text. + + Args: + audio_path: Path to audio file (WAV or format supported by backend). + language: Optional language hint (e.g. en, fr, es, auto). + **kwargs: Backend-specific options. + + Returns: + Transcribed text. + """ + ... 
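The `ASRBackend` Protocol above is purely structural: any class with a matching `transcribe` signature satisfies it, with no inheritance required. A minimal sketch under that assumption (`FakeASRBackend` and `run_transcription` are hypothetical names for illustration, not RadioShaq code):

```python
from __future__ import annotations

from pathlib import Path
from typing import Protocol


class ASRBackend(Protocol):
    """Mirror of the protocol above: a single transcribe() method."""

    def transcribe(
        self,
        audio_path: str | Path,
        *,
        language: str | None = None,
        **kwargs: object,
    ) -> str: ...


class FakeASRBackend:
    """Hypothetical stand-in engine; returns a canned transcript."""

    def transcribe(
        self,
        audio_path: str | Path,
        *,
        language: str | None = None,
        **kwargs: object,
    ) -> str:
        path = Path(audio_path)
        if not path.exists():
            raise FileNotFoundError(str(audio_path))
        return f"[{language or 'auto'}] transcript of {path.name}"


def run_transcription(backend: ASRBackend, audio_path: str | Path) -> str:
    # Accepts anything that structurally matches ASRBackend.
    return backend.transcribe(audio_path, language="en")
```

Because dispatch is structural, tests can inject a fake backend without touching the real Voxtral/Whisper/Scribe classes.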
diff --git a/radioshaq/radioshaq/audio/stream_processor.py b/radioshaq/radioshaq/audio/stream_processor.py
index eabd5ce..9723628 100644
--- a/radioshaq/radioshaq/audio/stream_processor.py
+++ b/radioshaq/radioshaq/audio/stream_processor.py
@@ -238,6 +238,7 @@ def __init__(
         self._noise_calibration_active = True
         self._on_segment_ready: Callable[[ProcessedSegment], Awaitable[None]] | None = None
+        self._metrics_callback: Callable[[bool, float | None, str], None] | None = None
 
     def set_segment_callback(
         self,
@@ -246,6 +247,13 @@ def set_segment_callback(
         """Set callback for when a speech segment is ready."""
         self._on_segment_ready = callback
 
+    def set_metrics_callback(
+        self,
+        callback: Callable[[bool, float | None, str], None] | None,
+    ) -> None:
+        """Set
optional callback for VAD/metrics updates (vad_active, snr_db, state). Used for websocket live metrics."""
+        self._metrics_callback = callback
+
     async def process_frame(self, raw_frame: np.ndarray) -> None:
         """Process a single audio frame through the pipeline."""
         if len(raw_frame) != self.frame_samples:
@@ -257,13 +265,15 @@ async def process_frame(self, raw_frame: np.ndarray) -> None:
             if len(self.denoiser._noise_profile) >= self.denoiser._noise_profile.maxlen:
                 self._noise_calibration_active = False
                 self._state = StreamState.LISTENING
+                if self._metrics_callback:
+                    self._metrics_callback(False, None, "idle")
                 logger.info("Noise calibration complete")
             return
         denoised_frame, snr = self.denoiser.process(frame)
         if len(denoised_frame) != self.frame_samples:
             logger.warning(
                 "Denoised frame length %s != %s, resizing may cause artifacts",
                 len(denoised_frame),
                 self.frame_samples,
             )
@@ -285,6 +295,8 @@ async def _update_state(
             self._speech_frames = 1
             self._silence_frames = 0
             self._ring_buffer.clear()
+            if self._metrics_callback:
+                self._metrics_callback(True, snr, "speech")
         else:
             self._ring_buffer.append(frame)
 
@@ -322,6 +334,8 @@ async def _finalize_segment(self, snr: float) -> None:
         self._speech_frames = 0
         self._silence_frames = 0
         self._state = StreamState.LISTENING
+        if self._metrics_callback:
+            self._metrics_callback(False, None, "idle")
 
     def reset(self) -> None:
         """Reset processor state."""
diff --git a/radioshaq/radioshaq/audio/tts.py b/radioshaq/radioshaq/audio/tts.py
index b0f9303..2522fa2 100644
--- a/radioshaq/radioshaq/audio/tts.py
+++ b/radioshaq/radioshaq/audio/tts.py
@@ -1,10 +1,11 @@
-"""Text-to-speech using ElevenLabs API."""
+"""Text-to-speech: ElevenLabs API and Kokoro (via tts_plugin)."""
 
 from __future__ import annotations
 
-import os
 from pathlib import Path
 
+from radioshaq.audio.tts_plugin import synthesize_speech
+
 
 def text_to_speech_elevenlabs(
     text: str,
@@ -15,7 +16,7 @@ def
text_to_speech_elevenlabs( output_path: str | Path | None = None, ) -> bytes: """ - Convert text to speech using ElevenLabs API. + Convert text to speech using ElevenLabs API (via TTS plugin). Args: text: Text to speak. @@ -27,33 +28,13 @@ def text_to_speech_elevenlabs( Returns: Audio bytes (e.g. MP3). - - Requires: httpx (already in radioshaq deps). """ - import httpx - - key = api_key or os.environ.get("ELEVENLABS_API_KEY") - if not key: - raise RuntimeError( - "Set ELEVENLABS_API_KEY or pass api_key= to use ElevenLabs TTS." - ) - - url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}" - headers = { - "xi-api-key": key, - "Content-Type": "application/json", - } - payload = { - "text": text, - "model_id": model_id, - } - params = {"output_format": output_format} - - with httpx.Client(timeout=60.0) as client: - r = client.post(url, json=payload, headers=headers, params=params) - r.raise_for_status() - data = r.content - - if output_path: - Path(output_path).write_bytes(data) - return data + return synthesize_speech( + text, + "elevenlabs", + output_path=output_path, + voice=voice_id, + api_key=api_key, + model_id=model_id, + output_format=output_format, + ) diff --git a/radioshaq/radioshaq/audio/tts_plugin/__init__.py b/radioshaq/radioshaq/audio/tts_plugin/__init__.py new file mode 100644 index 0000000..54629fb --- /dev/null +++ b/radioshaq/radioshaq/audio/tts_plugin/__init__.py @@ -0,0 +1,65 @@ +"""TTS plugin: registry of backends (ElevenLabs, Kokoro) and synthesize_speech entry point.""" + +from __future__ import annotations + +from pathlib import Path + +from radioshaq.audio.tts_plugin.base import TTSBackend + +_backends: dict[str, TTSBackend] = {} + + +def register_tts_backend(provider_id: str, backend: TTSBackend) -> None: + """Register a TTS backend (e.g. 
'elevenlabs', 'kokoro').""" + _backends[provider_id] = backend + + +def get_tts_backend(provider_id: str) -> TTSBackend | None: + """Return the backend for the given provider_id, or None if not registered.""" + return _backends.get(provider_id) + + +def synthesize_speech( + text: str, + provider_id: str, + *, + output_path: str | Path | None = None, + voice: str | None = None, + speed: float | None = None, + **kwargs: object, +) -> bytes: + """Synthesize text using the configured provider. Raises if provider not found or synthesis fails.""" + backend = _backends.get(provider_id) + if backend is None: + raise RuntimeError( + f"TTS provider {provider_id!r} not available. " + "For elevenlabs set ELEVENLABS_API_KEY. For kokoro run: uv sync --extra tts_kokoro" + ) + return backend.synthesize( + text, + output_path=output_path, + voice=voice, + speed=speed, + **kwargs, + ) + + +# Register built-in backends +def _register_backends() -> None: + from radioshaq.audio.tts_plugin.backends.elevenlabs import ElevenLabsTTSBackend + register_tts_backend("elevenlabs", ElevenLabsTTSBackend()) + try: + from radioshaq.audio.tts_plugin.backends.kokoro import KokoroTTSBackend + register_tts_backend("kokoro", KokoroTTSBackend()) + except ImportError: + pass # kokoro optional (uv sync --extra tts_kokoro) + + +_register_backends() + +__all__ = [ + "TTSBackend", + "get_tts_backend", + "register_tts_backend", + "synthesize_speech", +] diff --git a/radioshaq/radioshaq/audio/tts_plugin/backends/__init__.py b/radioshaq/radioshaq/audio/tts_plugin/backends/__init__.py new file mode 100644 index 0000000..098092d --- /dev/null +++ b/radioshaq/radioshaq/audio/tts_plugin/backends/__init__.py @@ -0,0 +1,11 @@ +"""TTS backends: ElevenLabs (API), Kokoro (local).""" + +from radioshaq.audio.tts_plugin.backends.elevenlabs import ElevenLabsTTSBackend + +__all__ = ["ElevenLabsTTSBackend"] + +try: + from radioshaq.audio.tts_plugin.backends.kokoro import KokoroTTSBackend + __all__ = list(__all__) + 
["KokoroTTSBackend"] +except ImportError: + pass diff --git a/radioshaq/radioshaq/audio/tts_plugin/backends/elevenlabs.py b/radioshaq/radioshaq/audio/tts_plugin/backends/elevenlabs.py new file mode 100644 index 0000000..38aa872 --- /dev/null +++ b/radioshaq/radioshaq/audio/tts_plugin/backends/elevenlabs.py @@ -0,0 +1,47 @@ +"""ElevenLabs API TTS backend.""" + +from __future__ import annotations + +import os +from pathlib import Path + + +class ElevenLabsTTSBackend: + """TTS via ElevenLabs API. Requires ELEVENLABS_API_KEY.""" + + def synthesize( + self, + text: str, + *, + output_path: str | Path | None = None, + voice: str | None = None, + speed: float | None = None, + **kwargs: object, + ) -> bytes: + import httpx + + voice_id = (voice if voice is not None else kwargs.get("voice_id")) or "21m00Tcm4TlvDq8ikWAM" + model_id = kwargs.get("model_id") or "eleven_multilingual_v2" + output_format = kwargs.get("output_format") or "mp3_44100_128" + api_key = kwargs.get("api_key") or os.environ.get("ELEVENLABS_API_KEY") + if not api_key: + raise RuntimeError( + "Set ELEVENLABS_API_KEY or pass api_key= to use ElevenLabs TTS." + ) + + url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}" + headers = { + "xi-api-key": api_key, + "Content-Type": "application/json", + } + payload = {"text": text, "model_id": model_id} + params = {"output_format": output_format} + + with httpx.Client(timeout=60.0) as client: + r = client.post(url, json=payload, headers=headers, params=params) + r.raise_for_status() + data = r.content + + if output_path: + Path(output_path).write_bytes(data) + return data diff --git a/radioshaq/radioshaq/audio/tts_plugin/backends/kokoro.py b/radioshaq/radioshaq/audio/tts_plugin/backends/kokoro.py new file mode 100644 index 0000000..7b3c0d9 --- /dev/null +++ b/radioshaq/radioshaq/audio/tts_plugin/backends/kokoro.py @@ -0,0 +1,74 @@ +"""Kokoro-82M local TTS backend. 
Requires: uv sync --extra tts_kokoro.""" + +from __future__ import annotations + +import io +from pathlib import Path + +import numpy as np + + +def _to_numpy(audio: object) -> np.ndarray: + """Convert pipeline audio (tensor or array) to 1D numpy.""" + if hasattr(audio, "numpy"): + return np.asarray(audio.numpy()).flatten() + return np.asarray(audio).flatten() + + +class KokoroTTSBackend: + """TTS via local Kokoro-82M. No API key; uses KPipeline (default model).""" + + def __init__(self) -> None: + self._pipelines: dict[str, object] = {} + + def _get_pipeline(self, lang_code: str) -> object: + """Return cached KPipeline for lang_code; load once per language.""" + if lang_code not in self._pipelines: + try: + from kokoro import KPipeline + except ImportError as e: + raise RuntimeError( + "Kokoro TTS requires: uv sync --extra tts_kokoro (pip install kokoro)" + ) from e + self._pipelines[lang_code] = KPipeline(lang_code=lang_code) + return self._pipelines[lang_code] + + def synthesize( + self, + text: str, + *, + output_path: str | Path | None = None, + voice: str | None = None, + speed: float | None = None, + **kwargs: object, + ) -> bytes: + voice_name = voice or (kwargs.get("voice") or "af_heart") + lang_code = kwargs.get("lang_code") or (voice_name[0] if voice_name else "a") + speed_val = speed if speed is not None else (kwargs.get("speed") or 1.0) + split_pattern = kwargs.get("split_pattern") or r"\n+" + + pipeline = self._get_pipeline(lang_code) + generator = pipeline(text, voice=voice_name, speed=speed_val, split_pattern=split_pattern) + all_audio: list[np.ndarray] = [] + for _gs, _ps, audio in generator: + all_audio.append(_to_numpy(audio)) + + if not all_audio: + raise RuntimeError("Kokoro produced no audio") + combined = np.concatenate(all_audio) + sample_rate = 24000 + + try: + import soundfile as sf + except ImportError as e: + raise RuntimeError( + "Kokoro backend needs soundfile to write WAV (uv sync --extra voice_tx or tts_kokoro)" + ) from e + + buf = 
io.BytesIO() + sf.write(buf, combined, sample_rate, format="WAV") + data = buf.getvalue() + + if output_path: + Path(output_path).write_bytes(data) + return data diff --git a/radioshaq/radioshaq/audio/tts_plugin/base.py b/radioshaq/radioshaq/audio/tts_plugin/base.py new file mode 100644 index 0000000..3ef1318 --- /dev/null +++ b/radioshaq/radioshaq/audio/tts_plugin/base.py @@ -0,0 +1,33 @@ +"""TTS backend protocol: pluggable text-to-speech providers (ElevenLabs, Kokoro).""" + +from __future__ import annotations + +from pathlib import Path +from typing import Protocol + + +class TTSBackend(Protocol): + """Provides text-to-speech synthesis. Implementations: ElevenLabs (API), Kokoro (local).""" + + def synthesize( + self, + text: str, + *, + output_path: str | Path | None = None, + voice: str | None = None, + speed: float | None = None, + **kwargs: object, + ) -> bytes: + """Synthesize text to audio. + + Args: + text: Input text to speak. + output_path: If set, write audio bytes to this file. + voice: Provider-specific voice id or name (e.g. ElevenLabs voice_id, Kokoro voice name). + speed: Speech rate (provider-specific; e.g. Kokoro 0.5–2.0). + **kwargs: Provider-specific options (e.g. model_id, lang_code, api_key). + + Returns: + Audio bytes (format is provider-specific: e.g. MP3 for ElevenLabs, WAV for Kokoro). + """ + ... 
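The registry in `tts_plugin/__init__.py` dispatches on a provider id and raises when no backend matched that id. The pattern can be sketched in isolation; `SilenceTTSBackend` below is hypothetical (it emits silent WAV instead of speech) and only illustrates the `TTSBackend` Protocol shape:

```python
from __future__ import annotations

import io
import wave
from pathlib import Path

# Minimal registry mirroring tts_plugin/__init__.py.
_backends: dict[str, object] = {}


def register_tts_backend(provider_id: str, backend: object) -> None:
    _backends[provider_id] = backend


def synthesize_speech(text: str, provider_id: str, **kwargs: object) -> bytes:
    backend = _backends.get(provider_id)
    if backend is None:
        raise RuntimeError(f"TTS provider {provider_id!r} not available")
    return backend.synthesize(text, **kwargs)  # type: ignore[attr-defined]


class SilenceTTSBackend:
    """Satisfies TTSBackend structurally: emits 0.1 s of silent 24 kHz mono WAV."""

    def synthesize(
        self,
        text: str,
        *,
        output_path: str | Path | None = None,
        voice: str | None = None,
        speed: float | None = None,
        **kwargs: object,
    ) -> bytes:
        buf = io.BytesIO()
        with wave.open(buf, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)  # 16-bit PCM
            w.setframerate(24000)
            w.writeframes(b"\x00\x00" * 2400)  # 0.1 s of silence
        data = buf.getvalue()
        if output_path:
            Path(output_path).write_bytes(data)
        return data


register_tts_backend("silence", SilenceTTSBackend())
```

Registering at import time, as `_register_backends()` does, keeps optional backends (Kokoro) out of the table when their dependencies are absent, while lookups fail with a clear `RuntimeError` rather than an `ImportError` deep in synthesis.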
diff --git a/radioshaq/radioshaq/auth/__init__.py b/radioshaq/radioshaq/auth/__init__.py index 5430e9d..879aa4c 100644 --- a/radioshaq/radioshaq/auth/__init__.py +++ b/radioshaq/radioshaq/auth/__init__.py @@ -1,4 +1,4 @@ -"""Authentication for SHAKODS (JWT, OAuth, field station).""" +"""Authentication for RadioShaq (JWT, OAuth, field station).""" from radioshaq.auth.field_auth import FieldAuthManager from radioshaq.auth.jwt import JWTAuthManager, TokenPayload diff --git a/radioshaq/radioshaq/auth/field_auth.py b/radioshaq/radioshaq/auth/field_auth.py index 26e640d..a3ef6dd 100644 --- a/radioshaq/radioshaq/auth/field_auth.py +++ b/radioshaq/radioshaq/auth/field_auth.py @@ -3,8 +3,7 @@ from __future__ import annotations from radioshaq.auth.jwt import JWTAuthManager, TokenPayload -from radioshaq.config.schema import Config as ShakodsConfig -from radioshaq.config.schema import JWTConfig +from radioshaq.config.schema import Config, JWTConfig class FieldAuthManager: @@ -20,7 +19,7 @@ def __init__( ): if config is None: try: - config = ShakodsConfig().jwt + config = Config().jwt except Exception: config = JWTConfig() self.jwt = jwt_manager or JWTAuthManager(config=config) diff --git a/radioshaq/radioshaq/auth/jwt.py b/radioshaq/radioshaq/auth/jwt.py index 27eb86f..affc51a 100644 --- a/radioshaq/radioshaq/auth/jwt.py +++ b/radioshaq/radioshaq/auth/jwt.py @@ -1,4 +1,4 @@ -"""JWT authentication for SHAKODS distributed agents.""" +"""JWT authentication for RadioShaq distributed agents.""" from __future__ import annotations @@ -35,7 +35,7 @@ def is_expired(self) -> bool: class JWTAuthManager: """ - JWT authentication manager for SHAKODS distributed agents. + JWT authentication manager for RadioShaq distributed agents. Handles token generation, validation, and refresh. 
""" diff --git a/radioshaq/radioshaq/cli.py b/radioshaq/radioshaq/cli.py index 904bfca..69c048e 100644 --- a/radioshaq/radioshaq/cli.py +++ b/radioshaq/radioshaq/cli.py @@ -468,7 +468,7 @@ def _load_config_for_cli(config_dir: Optional[Path] = None) -> Optional[dict]: def _safe_llm_dict(llm: Any) -> dict: """Dict from LLMConfig with API keys redacted.""" d = llm.model_dump(mode="json") if hasattr(llm, "model_dump") else {} - for k in ("mistral_api_key", "openai_api_key", "anthropic_api_key", "custom_api_key"): + for k in ("mistral_api_key", "openai_api_key", "anthropic_api_key", "custom_api_key", "huggingface_api_key", "gemini_api_key"): if d.get(k): d[k] = "(set)" return d @@ -566,7 +566,7 @@ def setup( llm_provider: Optional[str] = typer.Option( None, "--llm-provider", - help="LLM provider for --no-input: mistral | openai | anthropic | custom.", + help="LLM provider for --no-input: mistral | openai | anthropic | custom | huggingface.", ), llm_model: Optional[str] = typer.Option( None, @@ -578,6 +578,11 @@ def setup( "--custom-api-base", help="Custom LLM API base URL (e.g. http://localhost:11434 for Ollama). Used with --no-input.", ), + huggingface_api_base: Optional[str] = typer.Option( + None, + "--huggingface-api-base", + help="Hugging Face Inference Providers API base (default https://router.huggingface.co/v1). Used with --no-input when provider is huggingface.", + ), hindsight_url: Optional[str] = typer.Option( None, "--hindsight-url", @@ -598,6 +603,16 @@ def setup( "--radio-reply-use-tts/--radio-reply-no-tts", help="Use TTS for outbound MessageBus radio replies (used with --no-input).", ), + restricted_bands_region: Optional[str] = typer.Option( + None, + "--restricted-bands-region", + help="Compliance region/country (e.g. FCC, CA, CEPT, AU, ZA, NZ). Used with --no-input.", + ), + band_plan_region: Optional[str] = typer.Option( + None, + "--band-plan-region", + help="Optional band plan override (ITU_R1, ITU_R3). 
Used with --no-input.", + ), llm_overrides: Optional[str] = typer.Option( None, "--llm-overrides", @@ -629,10 +644,13 @@ def setup( llm_provider=llm_provider, llm_model=llm_model, custom_api_base=custom_api_base, + huggingface_api_base=huggingface_api_base, hindsight_url=hindsight_url, memory_enabled=memory_enabled, radio_reply_tx_enabled=radio_reply_tx_enabled, radio_reply_use_tts=radio_reply_use_tts, + restricted_bands_region=restricted_bands_region, + band_plan_region=band_plan_region, llm_overrides=llm_overrides, ) raise typer.Exit(exit_code) diff --git a/radioshaq/radioshaq/compliance_plugin/__init__.py b/radioshaq/radioshaq/compliance_plugin/__init__.py new file mode 100644 index 0000000..41d6028 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/__init__.py @@ -0,0 +1,106 @@ +"""Compliance plugin: registry of region/country backends for restricted bands and band plans.""" + +from __future__ import annotations + +from typing import TYPE_CHECKING + +from .base import ComplianceBackend +from .backends.au import AUBackend +from .backends.ca import CABackend +from .backends.cept import ( + BEBackend, + CEPTBackend, + CHBackend, + ESBackend, + FRBackend, + LUBackend, + MCBackend, + UKBackend, +) +from .backends.fcc import FCCBackend +from .backends.itu_r1 import ITUR1Backend +from .backends.itu_r3 import ITUR3Backend +from .backends.mx import MXBackend +from .backends.r1_africa import R1AfricaBackend +from .backends.r2_americas import R2AmericasBackend +from .backends.in_ import INBackend +from .backends.jp import JPBackend +from .backends.nz import NZBackend +from .backends.za import ZABackend + +if TYPE_CHECKING: + from radioshaq.radio.bands import BandPlan + +_backends: dict[str, ComplianceBackend] = {} + + +def register_backend(backend: ComplianceBackend) -> None: + _backends[backend.region_key] = backend + + +def get_backend(region_key: str) -> ComplianceBackend | None: + return _backends.get(region_key) + + +def get_backend_or_default(region_key: str, 
default: ComplianceBackend) -> ComplianceBackend: + return _backends.get(region_key) or default + + +def get_band_plan_source_for_config( + restricted_region: str, + band_plan_region: str | None, +) -> dict[str, "BandPlan"]: + """Effective band plan for allowlist and /radio/bands. Uses band_plan_region override if set.""" + from radioshaq.radio.bands import BAND_PLANS + + if band_plan_region is not None and str(band_plan_region).strip(): + b = get_backend(str(band_plan_region).strip()) + if b is not None: + plans = b.get_band_plans() + if plans is not None: + return plans + return BAND_PLANS + b = get_backend(restricted_region) + if b is not None: + plans = b.get_band_plans() + if plans is not None: + return plans + return BAND_PLANS + + +# Register built-in backends +register_backend(FCCBackend()) +register_backend(CEPTBackend()) +register_backend(FRBackend()) +register_backend(UKBackend()) +register_backend(ESBackend()) +register_backend(BEBackend()) +register_backend(CHBackend()) +register_backend(LUBackend()) +register_backend(MCBackend()) +register_backend(ITUR1Backend()) +register_backend(ITUR3Backend()) +register_backend(MXBackend()) +register_backend(CABackend()) +for _key in ("AR", "CL", "CO", "PE", "VE", "EC", "UY", "PY", "BO", "CR", "PA", "GT", "DO"): + register_backend(R2AmericasBackend(_key)) +register_backend(AUBackend()) +# R1 Africa: R1 band plan, CEPT-aligned restricted (verify national rules). ZA has dedicated backend below. 
+_africa_keys = ( + "NG", "KE", "EG", "MA", "TN", "DZ", "GH", "TZ", "ET", "SN", "CI", "CM", + "BW", "NA", "ZW", "MZ", "UG", "RW", "GA", "ML", "BF", "NE", "TG", "BJ", "CD", "MG", +) +for _key in _africa_keys: + register_backend(R1AfricaBackend(_key)) +register_backend(ZABackend()) # South Africa: ICASA NRFP list + R1 band plan +register_backend(NZBackend()) +register_backend(JPBackend()) +register_backend(INBackend()) + +__all__ = [ + "ComplianceBackend", + "register_backend", + "get_backend", + "get_backend_or_default", + "get_band_plan_source_for_config", +] diff --git a/radioshaq/radioshaq/compliance_plugin/backends/__init__.py b/radioshaq/radioshaq/compliance_plugin/backends/__init__.py new file mode 100644 index 0000000..9d49d56 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/__init__.py @@ -0,0 +1,47 @@ +"""Compliance backends: FCC, CEPT, ITU R1/R3, R2 Americas, R1 Africa, etc.""" + +from .au import AUBackend +from .ca import CABackend +from .cept import ( + BEBackend, + CEPTBackend, + CHBackend, + ESBackend, + FRBackend, + LUBackend, + MCBackend, + UKBackend, +) +from .fcc import FCCBackend +from .in_ import INBackend +from .itu_r1 import ITUR1Backend +from .itu_r3 import ITUR3Backend +from .jp import JPBackend +from .mx import MXBackend +from .nz import NZBackend +from .r1_africa import R1AfricaBackend +from .r2_americas import R2AmericasBackend +from .za import ZABackend + +__all__ = [ + "AUBackend", + "BEBackend", + "CABackend", + "CEPTBackend", + "CHBackend", + "ESBackend", + "FCCBackend", + "FRBackend", + "INBackend", + "ITUR1Backend", + "ITUR3Backend", + "JPBackend", + "LUBackend", + "MCBackend", + "MXBackend", + "NZBackend", + "R1AfricaBackend", + "R2AmericasBackend", + "UKBackend", + "ZABackend", +] diff --git a/radioshaq/radioshaq/compliance_plugin/backends/au.py b/radioshaq/radioshaq/compliance_plugin/backends/au.py new file mode 100644 index 0000000..566b2af --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/au.py 
@@ -0,0 +1,82 @@ +"""Australia (ITU R3): IARU R3 band plan; restricted bands from ACMA Spectrum Plan (conservative set). + +Restricted bands from ACMA Australian Radiofrequency Spectrum Plan and related +apparatus/embargo rules (RALI SM26, etc.). No single FCC-style list; this is a +conservative set (aeronautical, radionavigation, COSPAS-SARSAT, marine, etc.). +Operator must verify national rules (ACMA). +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .itu_r3 import BAND_PLANS_R3 + +# Conservative set aligned with ITU/ACMA: aeronautical, radionav, COSPAS-SARSAT, marine, etc. +# Source: ACMA Australian Radiofrequency Spectrum Plan; operator must verify. +RESTRICTED_BANDS_AU_HZ: list[tuple[float, float]] = [ + (0.090e6, 0.110e6), + (0.495e6, 0.505e6), + (2.1735e6, 2.1905e6), + (4.125e6, 4.128e6), + (4.17725e6, 4.17775e6), + (4.20725e6, 4.20775e6), + (6.215e6, 6.218e6), + (6.26775e6, 6.26825e6), + (6.31175e6, 6.31225e6), + (8.291e6, 8.294e6), + (8.362e6, 8.366e6), + (8.37625e6, 8.38675e6), + (8.41425e6, 8.41475e6), + (12.29e6, 12.293e6), + (12.51975e6, 12.52025e6), + (12.57675e6, 12.57725e6), + (13.36e6, 13.41e6), + (16.42e6, 16.423e6), + (16.69475e6, 16.69525e6), + (16.80425e6, 16.80475e6), + (25.5e6, 25.67e6), + (37.5e6, 38.25e6), + (73e6, 74.6e6), + (74.8e6, 75.2e6), + (108e6, 121.94e6), # Aeronautical + (123e6, 138e6), + (149.9e6, 150.05e6), + (156.52475e6, 156.52525e6), + (156.7e6, 156.9e6), + (162.0125e6, 167.17e6), + (167.72e6, 173.2e6), + (399.9e6, 410e6), # COSPAS-SARSAT (406.0–406.1) and adjacent + (608e6, 614e6), + (960e6, 1240e6), + (1300e6, 1427e6), + (1435e6, 1626.5e6), + (1645.5e6, 1646.5e6), + (1660e6, 1710e6), + (1718.8e6, 1722.2e6), + (2200e6, 2300e6), + (2310e6, 2390e6), + (2483.5e6, 2500e6), + (2690e6, 2900e6), + (3260e6, 3267e6), + (3332e6, 3339e6), + (3345.8e6, 3358e6), + (3600e6, 4400e6), +] + + +class AUBackend: + """ + Australia: ITU Region 3. 
IARU R3 band plan (2m 144–148 MHz, 70cm 430–440 MHz; + ACMA/WIA may allow 420–450 on 70cm nationally). Restricted bands: conservative + set from ACMA Spectrum Plan; operator must verify. + """ + + region_key: str = "AU" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_AU_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R3 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/ca.py b/radioshaq/radioshaq/compliance_plugin/backends/ca.py new file mode 100644 index 0000000..6323626 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/ca.py @@ -0,0 +1,29 @@ +"""Canada (ITU R2): FCC §15.205 baseline + R2 band plan. + +Restricted bands per ISED RSS-210 Section 7.1 and Annexes A/B (restricted +frequency bands); aligned with FCC §15.205 unless ISED publishes differences. +Amateur radio: RBR-4. Canada participates in CEPT T/R 61-01 for reciprocal +operation in Europe; for domestic compliance this backend uses FCC baseline. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .fcc import RESTRICTED_BANDS_FCC_HZ + + +class CABackend: + """ + Canada: ITU Region 2. Restricted bands: RSS-210 §7.1 and Annexes A/B; + FCC §15.205 used as baseline. Band plan: default R2. Operator must verify ISED. + """ + + region_key: str = "CA" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_FCC_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return None diff --git a/radioshaq/radioshaq/compliance_plugin/backends/cept.py b/radioshaq/radioshaq/compliance_plugin/backends/cept.py new file mode 100644 index 0000000..92a6461 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/cept.py @@ -0,0 +1,131 @@ +"""CEPT/EU harmonised restricted bands and R1 band plan. 
+ +Regulatory sources (intentional radiation / protected bands): +- ERC/REC 70-03 (Short Range Devices): https://docdb.cept.org/document/845 — Annexes define + allowed SRD bands; Appendix 3 lists national restrictions. EFIS: https://efis.cept.org/ +- EU Commission Decision 2006/771/EC (as amended): harmonised SRD spectrum; annex lists + allowed bands and conditions. EUR-Lex CELEX 32006D0771. +- ETSI EN 300 220: permitted SRD bands 25–1000 MHz (e.g. 433.04–434.79, 863–876, 915–921 MHz). + Restricted = bands not in harmonised SRD/amateur allocations; safety (aeronautical, COSPAS-SARSAT, + marine) protected. National (e.g. ANFR France) may add further restrictions. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .itu_r1 import BAND_PLANS_R1 + +# CEPT/EU restricted bands derived from ECC/ETSI harmonised framework (ERC/REC 70-03, +# EU Decision 2006/771/EC as amended, ETSI EN 300 220). This list does NOT mirror FCC §15.205: +# - FCC-only ranges (e.g. 240–285 MHz, 322–335.4 MHz, US GHz blocks) are omitted. +# - EU may restrict additional ISM/SRD sub-bands; national implementations (e.g. ANFR) may add more. +# Operator must verify national rules. 
Reference: https://efis.cept.org/ +RESTRICTED_BANDS_CEPT_HZ: list[tuple[float, float]] = [ + # Aeronautical, radionavigation, safety + (0.090e6, 0.110e6), + (0.495e6, 0.505e6), + (2.1735e6, 2.1905e6), + (4.125e6, 4.128e6), + (4.17725e6, 4.17775e6), + (4.20725e6, 4.20775e6), + (6.215e6, 6.218e6), + (6.26775e6, 6.26825e6), + (6.31175e6, 6.31225e6), + (8.291e6, 8.294e6), + (8.362e6, 8.366e6), + (8.37625e6, 8.38675e6), + (8.41425e6, 8.41475e6), + (12.29e6, 12.293e6), + (12.51975e6, 12.52025e6), + (12.57675e6, 12.57725e6), + (13.36e6, 13.41e6), + (16.42e6, 16.423e6), + (16.69475e6, 16.69525e6), + (16.80425e6, 16.80475e6), + (25.5e6, 25.67e6), + (37.5e6, 38.25e6), + (73e6, 74.6e6), + (74.8e6, 75.2e6), + (108e6, 121.94e6), # Aeronautical + (123e6, 138e6), + (149.9e6, 150.05e6), + (156.52475e6, 156.52525e6), + (156.7e6, 156.9e6), # Marine mobile + (162.0125e6, 167.17e6), + (167.72e6, 173.2e6), + (399.9e6, 410e6), # COSPAS-SARSAT (406.0–406.1) and adjacent + (608e6, 614e6), + (960e6, 1240e6), + (1300e6, 1427e6), + (1435e6, 1626.5e6), + (1645.5e6, 1646.5e6), + (1660e6, 1710e6), + (1718.8e6, 1722.2e6), + (2200e6, 2300e6), + (2310e6, 2390e6), + (2483.5e6, 2500e6), + (2690e6, 2900e6), + (3260e6, 3267e6), + (3332e6, 3339e6), + (3345.8e6, 3358e6), + (3600e6, 4400e6), + # No US-specific GHz blocks (4.5–5.15, 5.35–5.46, 7.25–7.75, 8.025–8.5, 9–9.2, 9.3–9.5, + # 10.6–12.7, 13.25–13.4, 14.47–14.5, 15.35–16.2, 17.7–21.4, 22.01–23.12, 23.6–24, + # 31.2–31.8, 36.43–36.5, 38.6–100 GHz) — CEPT allocations differ; add per ECC if needed. 
+] + + +class CEPTBackend: + """CEPT/EU restricted bands + IARU R1 band plan (for France, Spain, etc.).""" + + region_key: str = "CEPT" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_CEPT_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R1 + + +class FRBackend(CEPTBackend): + """France: same as CEPT (EU harmonised + R1 band plan).""" + + region_key: str = "FR" + + +class UKBackend(CEPTBackend): + """United Kingdom: CEPT-aligned (Ofcom); R1 band plan.""" + + region_key: str = "UK" + + +class ESBackend(CEPTBackend): + """Spain: CEPT (EU) + IARU R1 band plan.""" + + region_key: str = "ES" + + +class BEBackend(CEPTBackend): + """Belgium: CEPT (TR 61-01/61-02) + IARU R1 band plan.""" + + region_key: str = "BE" + + +class CHBackend(CEPTBackend): + """Switzerland: CEPT + IARU R1 band plan.""" + + region_key: str = "CH" + + +class LUBackend(CEPTBackend): + """Luxembourg: CEPT + IARU R1 band plan.""" + + region_key: str = "LU" + + +class MCBackend(CEPTBackend): + """Monaco: CEPT + IARU R1 band plan.""" + + region_key: str = "MC" diff --git a/radioshaq/radioshaq/compliance_plugin/backends/fcc.py b/radioshaq/radioshaq/compliance_plugin/backends/fcc.py new file mode 100644 index 0000000..9c955cc --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/fcc.py @@ -0,0 +1,96 @@ +"""FCC restricted bands backend (47 CFR §15.205). + +Official source: 47 CFR §15.205 Restricted bands of operation. +- https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A/part-15/subpart-C/section-15.205 +- https://www.law.cornell.edu/cfr/text/47/15.205 +Intentional radiators may not operate in these bands; only spurious emissions limits apply. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend + +# FCC 47 CFR §15.205 restricted bands (MHz and GHz). Intentional radiation prohibited. 
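Both the CEPT and FCC tables above store restricted ranges as flat `(low_hz, high_hz)` tuples, so the compliance check reduces to an interval-membership test. A minimal sketch of that lookup — the `is_restricted` helper and the abbreviated band list are illustrative, not functions added by this diff:

```python
# Sketch: interval lookup over a restricted-band table like the ones above.
# `is_restricted` and this trimmed band list are hypothetical illustrations.

RESTRICTED_BANDS_HZ: list[tuple[float, float]] = [
    (0.090e6, 0.110e6),
    (108e6, 121.94e6),   # aeronautical
    (399.9e6, 410e6),    # COSPAS-SARSAT and adjacent
]


def is_restricted(freq_hz: float, bands: list[tuple[float, float]]) -> bool:
    """Return True when freq_hz falls inside any (low, high) restricted band."""
    return any(low <= freq_hz <= high for low, high in bands)


print(is_restricted(406.05e6, RESTRICTED_BANDS_HZ))  # True (COSPAS-SARSAT)
print(is_restricted(433.5e6, RESTRICTED_BANDS_HZ))   # False (70cm simplex)
```

A linear `any()` scan is plenty here: the tables top out at roughly 65 entries, so sorting plus bisection would only matter on a much hotter path.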
+# Source: https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A/part-15/subpart-C/section-15.205 +# Stored as (low_hz, high_hz). +RESTRICTED_BANDS_FCC_HZ: list[tuple[float, float]] = [ + (0.090e6, 0.110e6), + (0.495e6, 0.505e6), + (2.1735e6, 2.1905e6), + (4.125e6, 4.128e6), + (4.17725e6, 4.17775e6), + (4.20725e6, 4.20775e6), + (6.215e6, 6.218e6), + (6.26775e6, 6.26825e6), + (6.31175e6, 6.31225e6), + (8.291e6, 8.294e6), + (8.362e6, 8.366e6), + (8.37625e6, 8.38675e6), + (8.41425e6, 8.41475e6), + (12.29e6, 12.293e6), + (12.51975e6, 12.52025e6), + (12.57675e6, 12.57725e6), + (13.36e6, 13.41e6), + (16.42e6, 16.423e6), + (16.69475e6, 16.69525e6), + (16.80425e6, 16.80475e6), + (25.5e6, 25.67e6), + (37.5e6, 38.25e6), + (73e6, 74.6e6), + (74.8e6, 75.2e6), + (108e6, 121.94e6), + (123e6, 138e6), + (149.9e6, 150.05e6), + (156.52475e6, 156.52525e6), + (156.7e6, 156.9e6), + (162.0125e6, 167.17e6), + (167.72e6, 173.2e6), + (240e6, 285e6), + (322e6, 335.4e6), + (399.9e6, 410e6), + (608e6, 614e6), + (960e6, 1240e6), + (1300e6, 1427e6), + (1435e6, 1626.5e6), + (1645.5e6, 1646.5e6), + (1660e6, 1710e6), + (1718.8e6, 1722.2e6), + (2200e6, 2300e6), + (2310e6, 2390e6), + (2483.5e6, 2500e6), + (2690e6, 2900e6), + (3260e6, 3267e6), + (3332e6, 3339e6), + (3345.8e6, 3358e6), + (3600e6, 4400e6), + (4.5e9, 5.15e9), + (5.35e9, 5.46e9), + (7.25e9, 7.75e9), + (8.025e9, 8.5e9), + (9.0e9, 9.2e9), + (9.3e9, 9.5e9), + (10.6e9, 12.7e9), + (13.25e9, 13.4e9), + (14.47e9, 14.5e9), + (15.35e9, 16.2e9), + (17.7e9, 21.4e9), + (22.01e9, 23.12e9), + (23.6e9, 24.0e9), + (31.2e9, 31.8e9), + (36.43e9, 36.5e9), + (38.6e9, 100e9), +] + + +class FCCBackend: + """FCC (US) restricted bands; band plan from default R2.""" + + region_key: str = "FCC" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_FCC_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return None diff --git a/radioshaq/radioshaq/compliance_plugin/backends/in_.py 
b/radioshaq/radioshaq/compliance_plugin/backends/in_.py new file mode 100644 index 0000000..91b1a16 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/in_.py @@ -0,0 +1,29 @@ +"""India (ITU R3): WPC; R3 band plan; conservative restricted set. + +WPC (Wireless Planning & Coordination) governs amateur service; restricted +licence 144–146 MHz, 434–438 MHz. Conservative restricted set used. +Operator must verify WPC and ARSI. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .au import RESTRICTED_BANDS_AU_HZ +from .itu_r3 import BAND_PLANS_R3 + + +class INBackend: + """ + India: ITU Region 3. Restricted bands: conservative set (WPC); + R3 band plan. Operator must verify WPC. + """ + + region_key: str = "IN" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_AU_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R3 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/itu_r1.py b/radioshaq/radioshaq/compliance_plugin/backends/itu_r1.py new file mode 100644 index 0000000..c57d892 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/itu_r1.py @@ -0,0 +1,37 @@ +"""IARU Region 1 band plan (Europe, Africa, Middle East). 2m: 144–146 MHz, 70cm: 430–440 MHz.""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend + +# IARU Region 1 band plan. Key differences vs R2: 2m 144–146 MHz, 70cm 430–440 MHz. 
+# Source: https://www.iaru-r1.org/on-the-air/band-plans/ +BAND_PLANS_R1: dict[str, BandPlan] = { + "160m": BandPlan("160m", 1.8e6, 2.0e6, ["CW", "SSB", "DIGITAL"], 1500), + "80m": BandPlan("80m", 3.5e6, 4.0e6, ["CW", "SSB", "DIGITAL"], 1500), + "60m": BandPlan("60m", 5.3305e6, 5.4065e6, ["USB", "CW", "DIGITAL"], 100), + "40m": BandPlan("40m", 7.0e6, 7.3e6, ["CW", "SSB", "DIGITAL"], 1500), + "30m": BandPlan("30m", 10.1e6, 10.15e6, ["CW", "DIGITAL"], 200), + "20m": BandPlan("20m", 14.0e6, 14.35e6, ["CW", "SSB", "DIGITAL"], 1500), + "17m": BandPlan("17m", 18.068e6, 18.168e6, ["CW", "SSB", "DIGITAL"], 1500), + "15m": BandPlan("15m", 21.0e6, 21.45e6, ["CW", "SSB", "DIGITAL"], 1500), + "12m": BandPlan("12m", 24.89e6, 24.99e6, ["CW", "SSB", "DIGITAL"], 1500), + "10m": BandPlan("10m", 28.0e6, 29.7e6, ["CW", "SSB", "FM", "DIGITAL"], 1500), + "6m": BandPlan("6m", 50.0e6, 54.0e6, ["CW", "SSB", "FM", "DIGITAL"], 1500), + "2m": BandPlan("2m", 144.0e6, 146.0e6, ["FM", "SSB", "CW", "DIGITAL"], 1500), + "70cm": BandPlan("70cm", 430.0e6, 440.0e6, ["FM", "SSB", "DIGITAL"], 1500), +} + + +class ITUR1Backend: + """Band-plan-only backend for ITU Region 1 (no restricted bands in this backend).""" + + region_key: str = "ITU_R1" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return [] + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R1 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/itu_r3.py b/radioshaq/radioshaq/compliance_plugin/backends/itu_r3.py new file mode 100644 index 0000000..0113edc --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/itu_r3.py @@ -0,0 +1,42 @@ +"""IARU Region 3 band plan (Asia–Pacific). 2m: 144–148 MHz, 70cm: 430–440 MHz. + +Source: IARU R3-004 Revised 3 September 2019 (IARU Region 3 Directors' Meeting). +National regulations take precedence; 440–450 MHz is amateur only in Australia and +Philippines per RR 5.270. See https://www.iaru.org/ and IARU R3 band plan PDF. 
+""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend + +# IARU Region 3 band plan. 2m 144–148 MHz (vs R1 144–146); 70cm 430–440 MHz (secondary in R3). +# Other bands aligned with R1/R2 ranges for consistency. +BAND_PLANS_R3: dict[str, BandPlan] = { + "160m": BandPlan("160m", 1.8e6, 2.0e6, ["CW", "SSB", "DIGITAL"], 1500), + "80m": BandPlan("80m", 3.5e6, 4.0e6, ["CW", "SSB", "DIGITAL"], 1500), + "60m": BandPlan("60m", 5.3305e6, 5.4065e6, ["USB", "CW", "DIGITAL"], 100), + "40m": BandPlan("40m", 7.0e6, 7.3e6, ["CW", "SSB", "DIGITAL"], 1500), + "30m": BandPlan("30m", 10.1e6, 10.15e6, ["CW", "DIGITAL"], 200), + "20m": BandPlan("20m", 14.0e6, 14.35e6, ["CW", "SSB", "DIGITAL"], 1500), + "17m": BandPlan("17m", 18.068e6, 18.168e6, ["CW", "SSB", "DIGITAL"], 1500), + "15m": BandPlan("15m", 21.0e6, 21.45e6, ["CW", "SSB", "DIGITAL"], 1500), + "12m": BandPlan("12m", 24.89e6, 24.99e6, ["CW", "SSB", "DIGITAL"], 1500), + "10m": BandPlan("10m", 28.0e6, 29.7e6, ["CW", "SSB", "FM", "DIGITAL"], 1500), + "6m": BandPlan("6m", 50.0e6, 54.0e6, ["CW", "SSB", "FM", "DIGITAL"], 1500), + "2m": BandPlan("2m", 144.0e6, 148.0e6, ["FM", "SSB", "CW", "DIGITAL"], 1500), + "70cm": BandPlan("70cm", 430.0e6, 440.0e6, ["FM", "SSB", "DIGITAL"], 1500), +} + + +class ITUR3Backend: + """Band-plan-only backend for ITU Region 3 (Asia–Pacific). No restricted bands in this backend.""" + + region_key: str = "ITU_R3" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return [] + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R3 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/jp.py b/radioshaq/radioshaq/compliance_plugin/backends/jp.py new file mode 100644 index 0000000..6e4a496 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/jp.py @@ -0,0 +1,28 @@ +"""Japan (ITU R3): MIC/JARL; R3 band plan; conservative restricted set. 
+ +No single FCC-style restricted list published; conservative set (aeronautical, +radionav, COSPAS-SARSAT) used. Operator must verify MIC and JARL. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .au import RESTRICTED_BANDS_AU_HZ +from .itu_r3 import BAND_PLANS_R3 + + +class JPBackend: + """ + Japan: ITU Region 3. Restricted bands: conservative set (MIC/JARL); + R3 band plan. Operator must verify MIC. + """ + + region_key: str = "JP" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_AU_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R3 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/mx.py b/radioshaq/radioshaq/compliance_plugin/backends/mx.py new file mode 100644 index 0000000..dcddf9e --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/mx.py @@ -0,0 +1,28 @@ +"""Mexico (ITU R2): restricted bands baseline from FCC §15.205; R2 band plan. + +IFT CNAF (Cuadro Nacional de Atribución de Frecuencias) and IFT-016-2024 +(30 MHz–3 GHz low-power devices) apply; FCC used as baseline. Verify IFT for +national differences. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .fcc import RESTRICTED_BANDS_FCC_HZ + + +class MXBackend: + """ + Mexico: ITU Region 2. Restricted bands: FCC §15.205 baseline (IFT CNAF, + IFT-016-2024). Band plan: default R2. Operator must verify IFT. 
+ """ + + region_key: str = "MX" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_FCC_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return None diff --git a/radioshaq/radioshaq/compliance_plugin/backends/nz.py b/radioshaq/radioshaq/compliance_plugin/backends/nz.py new file mode 100644 index 0000000..5f01d7c --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/nz.py @@ -0,0 +1,29 @@ +"""New Zealand (ITU R3): RSM PIB 21 conservative restricted bands + R3 band plan. + +Restricted bands from RSM (Radio Spectrum Management) Table of Radio Spectrum +Usage (PIB 21) and prohibited equipment rules; conservative set (aeronautical, +radionav, COSPAS-SARSAT, etc.). Operator must verify RSM. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .au import RESTRICTED_BANDS_AU_HZ +from .itu_r3 import BAND_PLANS_R3 + + +class NZBackend: + """ + New Zealand: ITU Region 3. Restricted bands: RSM PIB 21 conservative set; + R3 band plan. Operator must verify RSM. + """ + + region_key: str = "NZ" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_AU_HZ # Same conservative set (ITU-aligned) + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R3 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/r1_africa.py b/radioshaq/radioshaq/compliance_plugin/backends/r1_africa.py new file mode 100644 index 0000000..c66e011 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/r1_africa.py @@ -0,0 +1,32 @@ +"""ITU Region 1 Africa: IARU R1 band plan + R1 conservative restricted bands. + +African countries are in ITU Region 1. Restricted bands: R1 conservative set +(safety/aeronautical/marine/COSPAS-SARSAT); national rules may add. ZA uses +dedicated ZABackend (ICASA list). 
Operators must verify national regulator +(ICASA, NCC, CA, NTRA, ANRT, BOCRA, etc.). +Reference: IARU R1 https://www.iaru-r1.org/on-the-air/band-plans/ +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .cept import RESTRICTED_BANDS_CEPT_HZ +from .itu_r1 import BAND_PLANS_R1 + + +class R1AfricaBackend: + """ + Parametrised backend for ITU R1 African countries: R1 band plan and + R1 conservative restricted bands (CEPT-aligned; ZA overridden by ZABackend). + """ + + def __init__(self, region_key: str) -> None: + self.region_key = region_key + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_CEPT_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R1 diff --git a/radioshaq/radioshaq/compliance_plugin/backends/r2_americas.py b/radioshaq/radioshaq/compliance_plugin/backends/r2_americas.py new file mode 100644 index 0000000..5c5cdb6 --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/r2_americas.py @@ -0,0 +1,33 @@ +"""ITU Region 2 Americas: FCC §15.205 baseline + default R2 band plan. + +Many Latin American and Caribbean countries follow FCC-style restricted bands +and IARU R2 band plan. National regulators (e.g. ENACOM Argentina, SUBTEL Chile, +CRC Colombia, MTC Peru) may vary; operators must verify local rules. +Reference: IARU R2 band plan https://www.iaru-r2.org/en/reference/band-plans/ +""" + +from __future__ import annotations + +from typing import TYPE_CHECKING + +from ..base import ComplianceBackend +from .fcc import RESTRICTED_BANDS_FCC_HZ + +if TYPE_CHECKING: + from radioshaq.radio.bands import BandPlan + + +class R2AmericasBackend: + """ + Parametrised backend for ITU R2 Americas: FCC §15.205 restricted bands, + default R2 band plan (bands.py). Use for Argentina, Chile, Colombia, etc. 
+ """ + + def __init__(self, region_key: str) -> None: + self.region_key = region_key + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_FCC_HZ + + def get_band_plans(self) -> dict[str, "BandPlan"] | None: + return None diff --git a/radioshaq/radioshaq/compliance_plugin/backends/za.py b/radioshaq/radioshaq/compliance_plugin/backends/za.py new file mode 100644 index 0000000..bae2bcd --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/backends/za.py @@ -0,0 +1,84 @@ +"""South Africa (ITU R1): ICASA NRFP-derived restricted bands + R1 band plan. + +Restricted bands from ICASA National Radio Frequency Plan and RFSAPs; +aeronautical, radionav, COSPAS-SARSAT, marine, etc. 433.05–434.79 MHz is +shared ISM/amateur — intentional radiation permitted under amateur licence; +not in restricted list so 70cm simplex (e.g. 433.5 MHz) is allowed. +Operator must verify ICASA / SARL. +""" + +from __future__ import annotations + +from radioshaq.radio.bands import BandPlan + +from ..base import ComplianceBackend +from .itu_r1 import BAND_PLANS_R1 + +# Conservative set from ICASA NRFP / RFSAPs: aeronautical, radionav, COSPAS-SARSAT, marine, +# 336–366 MHz fixed/PPDR; etc. 433.05–434.79 MHz omitted (shared ISM/amateur, TX permitted). +# Source: ICASA National Radio Frequency Plan; operator must verify. 
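The parametrised backends (`R1AfricaBackend`, `R2AmericasBackend`) take `region_key` at construction, while country-specific backends set it as a class attribute, so a registry can mix both styles. A hypothetical sketch — the stub classes and `BACKENDS` dict are illustrative, not the project's actual registry:

```python
# Sketch: a registry mapping region keys to backend instances. Fixed-region
# backends carry region_key as a class attribute; parametrised ones take it
# at construction. These stubs and BACKENDS are illustrative only.

class FCCBackend:
    """Fixed-region backend stub."""

    region_key: str = "FCC"


class R2AmericasBackend:
    """Parametrised backend stub: one class serves many R2 countries."""

    def __init__(self, region_key: str) -> None:
        self.region_key = region_key


BACKENDS = {
    "FCC": FCCBackend(),
    **{key: R2AmericasBackend(key) for key in ("AR", "CL", "CO", "PE")},
}

print(BACKENDS["CL"].region_key)  # CL
```

One parametrised class per ITU-region pattern keeps the per-country modules down to the cases that genuinely differ (like `ZABackend` with its ICASA-specific list).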
+RESTRICTED_BANDS_ZA_HZ: list[tuple[float, float]] = [ + (0.090e6, 0.110e6), + (0.495e6, 0.505e6), + (2.1735e6, 2.1905e6), + (4.125e6, 4.128e6), + (4.17725e6, 4.17775e6), + (4.20725e6, 4.20775e6), + (6.215e6, 6.218e6), + (6.26775e6, 6.26825e6), + (6.31175e6, 6.31225e6), + (8.291e6, 8.294e6), + (8.362e6, 8.366e6), + (8.37625e6, 8.38675e6), + (8.41425e6, 8.41475e6), + (12.29e6, 12.293e6), + (12.51975e6, 12.52025e6), + (12.57675e6, 12.57725e6), + (13.36e6, 13.41e6), + (16.42e6, 16.423e6), + (16.69475e6, 16.69525e6), + (16.80425e6, 16.80475e6), + (25.5e6, 25.67e6), + (37.5e6, 38.25e6), + (73e6, 74.6e6), + (74.8e6, 75.2e6), + (108e6, 121.94e6), # Aeronautical + (123e6, 138e6), + (149.9e6, 150.05e6), + (156.52475e6, 156.52525e6), + (156.7e6, 156.9e6), + (162.0125e6, 167.17e6), + (167.72e6, 173.2e6), + (336e6, 366e6), # ICASA fixed/PPDR + (399.9e6, 410e6), # COSPAS-SARSAT and adjacent + (608e6, 614e6), + (960e6, 1240e6), + (1300e6, 1427e6), + (1435e6, 1626.5e6), + (1645.5e6, 1646.5e6), + (1660e6, 1710e6), + (1718.8e6, 1722.2e6), + (2200e6, 2300e6), + (2310e6, 2390e6), + (2483.5e6, 2500e6), + (2690e6, 2900e6), + (3260e6, 3267e6), + (3332e6, 3339e6), + (3345.8e6, 3358e6), + (3600e6, 4400e6), +] + + +class ZABackend: + """ + South Africa: ITU R1. Restricted bands from ICASA NRFP/RFSAPs; R1 band plan. + Operator must verify ICASA and SARL. 
+ """ + + region_key: str = "ZA" + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + return RESTRICTED_BANDS_ZA_HZ + + def get_band_plans(self) -> dict[str, BandPlan] | None: + return BAND_PLANS_R1 diff --git a/radioshaq/radioshaq/compliance_plugin/base.py b/radioshaq/radioshaq/compliance_plugin/base.py new file mode 100644 index 0000000..acfa1db --- /dev/null +++ b/radioshaq/radioshaq/compliance_plugin/base.py @@ -0,0 +1,23 @@ +"""Compliance backend protocol: restricted bands and optional band plans per region.""" + +from __future__ import annotations + +from typing import Protocol + +from radioshaq.radio.bands import BandPlan + + +class ComplianceBackend(Protocol): + """Provides restricted bands and optional band plans for a region/country.""" + + region_key: str + """Unique key for this backend (e.g. FCC, CEPT, FR).""" + + + def get_restricted_bands_hz(self) -> list[tuple[float, float]]: + """List of (low_hz, high_hz) where intentional radiation is prohibited. Empty = none enforced.""" + ... + + def get_band_plans(self) -> dict[str, BandPlan] | None: + """Band plans for allowlist and /radio/bands. None = use default (e.g. R2 from bands.py).""" + ... diff --git a/radioshaq/radioshaq/config/__init__.py b/radioshaq/radioshaq/config/__init__.py index 526b019..69c3f18 100644 --- a/radioshaq/radioshaq/config/__init__.py +++ b/radioshaq/radioshaq/config/__init__.py @@ -1,4 +1,4 @@ -"""Configuration system for SHAKODS. +"""Configuration system for RadioShaq. Provides Pydantic-based configuration with support for: - YAML/JSON config files diff --git a/radioshaq/radioshaq/config/schema.py b/radioshaq/radioshaq/config/schema.py index a4ac7cb..85675de 100644 --- a/radioshaq/radioshaq/config/schema.py +++ b/radioshaq/radioshaq/config/schema.py @@ -1,23 +1,40 @@ -"""Configuration schema for SHAKODS using Pydantic. +"""Configuration schema for RadioShaq using Pydantic. 
-This module defines all configuration models for the SHAKODS system, +This module defines all configuration models for the RadioShaq system, supporting file-based config, environment variables, and validation. + +Runtime overrides applied via the config API (PATCH /config/audio, etc.) are +stored in app state only and merged into GET responses; they do not modify +the Config instance used at startup. Agents and the orchestrator are created +with the startup Config and do not see API overrides until process restart. +See radioshaq.api.config_semantics for API semantics. """ from __future__ import annotations +import logging import uuid from datetime import datetime, timezone from enum import Enum, StrEnum from pathlib import Path from typing import Any, Literal +from urllib.parse import urlparse, urlunparse from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator from pydantic_settings import BaseSettings, SettingsConfigDict +try: + # Pydantic Settings v2 YAML support; optional so older versions still work. 
+ from pydantic_settings import YamlConfigSettingsSource +except ImportError: # pragma: no cover - fallback when YAML source is unavailable + YamlConfigSettingsSource = None + +from radioshaq.constants import ASR_LANGUAGE_AUTO, ASR_LANGUAGE_VALUES + +logger = logging.getLogger(__name__) class Mode(StrEnum): - """Operational mode for SHAKODS.""" + """Operational mode for RadioShaq.""" FIELD = "field" # Edge/field station mode HQ = "hq" # Headquarters/central mode @@ -36,11 +53,13 @@ class LogLevel(StrEnum): class LLMProvider(StrEnum): """Supported LLM providers.""" - + MISTRAL = "mistral" OPENAI = "openai" ANTHROPIC = "anthropic" CUSTOM = "custom" + HUGGINGFACE = "huggingface" + GEMINI = "gemini" class RadioMode(StrEnum): @@ -121,15 +140,26 @@ class DatabaseConfig(BaseModel): redis_url: str | None = Field(default="redis://localhost:6379/0") # Alembic - alembic_config: str = Field(default="infrastructure/local/alembic.ini") + alembic_config: str = Field(default="alembic.ini") auto_migrate: bool = Field(default=False) # Run migrations on startup @field_validator("postgres_url") @classmethod def validate_postgres_url(cls, v: str) -> str: - """Ensure URL uses asyncpg driver.""" + """Ensure URL uses asyncpg driver and normalize local host naming.""" if v.startswith("postgresql://") and "asyncpg" not in v: v = v.replace("postgresql://", "postgresql+asyncpg://") + # Normalize 127.0.0.1 to localhost in the host component only (avoid corrupting password/db). 
+ parsed = urlparse(v) + if parsed.hostname == "127.0.0.1": + netloc = parsed.netloc + if "@" in netloc: + userinfo, hostport = netloc.rsplit("@", 1) + hostport = hostport.replace("127.0.0.1", "localhost", 1) + netloc = f"{userinfo}@{hostport}" + else: + netloc = netloc.replace("127.0.0.1", "localhost", 1) + v = urlunparse(parsed._replace(netloc=netloc)) return v @@ -172,11 +202,19 @@ class LLMConfig(BaseModel): mistral_api_key: str | None = Field(default=None) openai_api_key: str | None = Field(default=None) anthropic_api_key: str | None = Field(default=None) - + gemini_api_key: str | None = Field(default=None) + # Custom provider custom_api_base: str | None = Field(default=None) custom_api_key: str | None = Field(default=None) - + + # Hugging Face Inference Providers (https://router.huggingface.co/v1) + huggingface_api_key: str | None = Field(default=None) + huggingface_api_base: str | None = Field( + default=None, + description="Default: https://router.huggingface.co/v1 when provider is huggingface.", + ) + # Generation parameters temperature: float = Field(default=0.1, ge=0.0, le=2.0) max_tokens: int = Field(default=4096, ge=1, le=100000) @@ -229,6 +267,10 @@ class RadioConfig(BaseModel): tx_audit_log_path: str | None = Field(default=None, description="Path to JSONL file for TX audit log") tx_allowed_bands_only: bool = Field(default=True, description="Only allow TX in band_plan bands") restricted_bands_region: str = Field(default="FCC", description="Region for restricted bands (FCC, CEPT)") + band_plan_region: str | None = Field( + default=None, + description="Band plan region override (e.g. ITU_R1, ITU_R2). None = use backend from restricted_bands_region.", + ) # Multi-band listening (Project 1) default_band: str | None = Field(default=None, description="Default band when listen_bands not set (e.g. 
40m, 2m)") @@ -300,6 +342,18 @@ def normalize_listen_bands(cls, v: list[str] | None) -> list[str] | None: sdr_tx_serial: str | None = Field(default=None, description="HackRF serial (optional)") sdr_tx_max_gain: int = Field(default=47, ge=0, le=47) sdr_tx_allow_bands_only: bool = Field(default=True) + sdr_tx_mode: str = Field( + default="local", + description="SDR TX mode: 'local' (direct HackRF via pyhackrf2) or 'remote' (use HackRF broker service).", + ) + sdr_tx_service_base_url: str | None = Field( + default=None, + description="Base URL for HackRF broker when sdr_tx_mode='remote' (e.g. http://localhost:8765).", + ) + sdr_tx_service_token: str | None = Field( + default=None, + description="Bearer token (JWT or opaque) used by HackRFServiceClient when sdr_tx_mode='remote'.", + ) # Audio RX/TX integration (voice_rx pipeline) audio_input_enabled: bool = Field(default=False) @@ -412,6 +466,13 @@ class AudioConfig(BaseModel): denoising_backend: str = Field(default="rnnoise") # "rnnoise", "spectral", "none" noise_calibration_seconds: float = Field(default=3.0, ge=1.0, le=10.0) min_snr_db: float = Field(default=3.0, ge=-10.0, le=40.0) + eleven_voice_isolator_enabled: bool = Field( + default=False, + description=( + "When True and asr_model is 'scribe', run ElevenLabs Voice Isolator " + "before Scribe ASR (requires ELEVENLABS_API_KEY)." 
+ ), + ) # VAD vad_enabled: bool = Field(default=True) @@ -423,11 +484,28 @@ class AudioConfig(BaseModel): max_speech_duration_ms: int = Field(default=30000, ge=5000, le=60000) silence_duration_ms: int = Field(default=800, ge=200, le=2000) - # ASR - asr_model: str = Field(default="voxtral") - asr_language: str = Field(default="en") + # ASR (voxtral, whisper = local; scribe = ElevenLabs API) + asr_model: str = Field( + default="voxtral", + description="ASR backend: voxtral (local, default), whisper (local), scribe (ElevenLabs API).", + ) + asr_language: str = Field( + default="en", + description="ASR language: en, fr, es, or auto (detect).", + ) asr_min_confidence: float = Field(default=0.6, ge=0.0, le=1.0) + @field_validator("asr_language", mode="before") + @classmethod + def _normalize_asr_language(cls, v: Any) -> str: + raw = (v or "").strip().lower() + if raw in ("", ASR_LANGUAGE_AUTO): + return ASR_LANGUAGE_AUTO + if raw in ASR_LANGUAGE_VALUES: + return raw + logger.warning("Unrecognized asr_language %r; falling back to 'en'", raw) + return "en" + # Response behavior auto_respond: bool = Field(default=False) # Legacy; prefer response_mode response_mode: ResponseMode = Field(default=ResponseMode.LISTEN_ONLY) @@ -475,6 +553,40 @@ class AudioConfig(BaseModel): ) +class TTSConfig(BaseModel): + """Text-to-speech provider and options (used when voice_use_tts or use_tts is true).""" + + model_config = ConfigDict(extra="ignore") + + provider: Literal["elevenlabs", "kokoro"] = Field( + default="elevenlabs", + description="TTS provider: elevenlabs (API) or kokoro (local).", + ) + # ElevenLabs + elevenlabs_voice_id: str = Field( + default="21m00Tcm4TlvDq8ikWAM", + description="ElevenLabs voice ID (e.g. Rachel). 
List voices: GET /v1/voices.", + ) + elevenlabs_model_id: str = Field( + default="eleven_multilingual_v2", + description="ElevenLabs model: eleven_multilingual_v2, eleven_turbo_v2_5, eleven_flash_v2_5, etc.", + ) + elevenlabs_output_format: str = Field( + default="mp3_44100_128", + description="ElevenLabs output format, e.g. mp3_44100_128, wav_22050.", + ) + # Kokoro (local) + kokoro_voice: str = Field( + default="af_heart", + description="Kokoro voice name (e.g. af_heart, am_michael). Requires uv sync --extra tts_kokoro.", + ) + kokoro_lang_code: str = Field( + default="a", + description="Kokoro language code: a (US English), b (UK English), e (es), f (fr), etc.", + ) + kokoro_speed: float = Field(default=1.0, ge=0.5, le=2.0, description="Kokoro speech rate.") + + class PendingResponse(BaseModel): """A pending response awaiting human confirmation (in-memory).""" @@ -588,6 +700,67 @@ class HQConfig(BaseModel): coordination_interval_seconds: int = Field(default=30, ge=5) +class TwilioConfig(BaseModel): + """Twilio configuration for SMS and WhatsApp (same account; E.164 for numbers).""" + + model_config = ConfigDict(extra="ignore") + + account_sid: str | None = Field( + default=None, + description="Twilio Account SID (env: RADIOSHAQ_TWILIO__ACCOUNT_SID)", + ) + auth_token: str | None = Field( + default=None, + description="Twilio Auth Token (env: RADIOSHAQ_TWILIO__AUTH_TOKEN)", + ) + from_number: str | None = Field( + default=None, + description="SMS sender phone number, E.164 (env: RADIOSHAQ_TWILIO__FROM_NUMBER)", + ) + whatsapp_from: str | None = Field( + default=None, + description="WhatsApp sender phone number, E.164; must be WhatsApp-enabled in Twilio (env: RADIOSHAQ_TWILIO__WHATSAPP_FROM)", + ) + allow_unsigned_webhooks: bool = Field( + default=False, + description=( + "Allow processing Twilio webhooks without signature validation (development only). " + "When False, missing auth_token or signature blocks processing." 
+ ), + ) + + @field_validator("account_sid", "auth_token", "from_number", "whatsapp_from", mode="before") + @classmethod + def _empty_str_to_none(cls, v: str | None) -> str | None: + # Normalize empty strings (from env or YAML) to None so tests can reliably + # detect "Twilio not configured" via attribute is None checks. + if v is None: + return None + if isinstance(v, str) and not v.strip(): + return None + return v + + +class EmergencyContactConfig(BaseModel): + """Region-aware emergency SMS/WhatsApp contact loop (§9). Human approval required when approval_required=True.""" + + model_config = ConfigDict(extra="ignore") + + enabled: bool = Field( + default=False, + description="Enable emergency contact (SMS/WhatsApp) flow; only allowed in regions_allowed.", + ) + regions_allowed: list[str] = Field( + default_factory=list, + description="Region codes where emergency SMS/WhatsApp is allowed (e.g. FCC, CA). See docs/notify-and-emergency-compliance-plan.md.", + ) + approval_required: bool = Field(default=True, description="Require human approval before sending emergency message.") + allowed_event_types: list[str] = Field( + default_factory=lambda: ["emergency"], + description="Event types that use this config (e.g. emergency).", + ) + + # ============================================================================= # Main Configuration # ============================================================================= @@ -613,7 +786,35 @@ class Config(BaseSettings): yaml_file_encoding="utf-8", extra="ignore", ) - + + @classmethod + def settings_customise_sources( + cls, + settings_cls, + init_settings, + env_settings, + dotenv_settings, + file_secret_settings, + ): + """ + Load configuration from (highest to lowest precedence): + 1. Environment variables (RADIOSHAQ_*) + 2. YAML file (config.yaml, when YamlConfigSettingsSource is available) + 3. Explicit kwargs (init_settings) + 4. 
Secrets files (a .env/dotenv source is also applied, between the YAML file and init kwargs) + """ + yaml_settings = () + if YamlConfigSettingsSource is not None: + yaml_settings = (YamlConfigSettingsSource(settings_cls),) + # Keep env vars highest priority so demos can override YAML easily. + return ( + env_settings, + *yaml_settings, + dotenv_settings, + init_settings, + file_secret_settings, + ) + # Core settings mode: Mode = Field(default=Mode.FIELD) debug: bool = Field(default=False) @@ -626,7 +827,10 @@ class Config(BaseSettings): radio: RadioConfig = Field(default_factory=RadioConfig) memory: MemoryConfig = Field(default_factory=MemoryConfig) audio: AudioConfig = Field(default_factory=AudioConfig) + tts: TTSConfig = Field(default_factory=TTSConfig) pm2: PM2Config = Field(default_factory=PM2Config) + twilio: TwilioConfig = Field(default_factory=TwilioConfig) + emergency_contact: EmergencyContactConfig = Field(default_factory=EmergencyContactConfig) # Per-role overrides: keys e.g. orchestrator, judge, whitelist, daily_summary, memory llm_overrides: dict[str, Any] | None = Field( @@ -727,6 +931,9 @@ def save_config(config: Config, path: Path | str) -> None: "AudioActivationMode", "AudioConfig", "Config", + "EmergencyContactConfig", + "TwilioConfig", + "TTSConfig", "MemoryConfig", "DatabaseConfig", "FieldConfig", diff --git a/radioshaq/radioshaq/constants.py b/radioshaq/radioshaq/constants.py new file mode 100644 index 0000000..d85e996 --- /dev/null +++ b/radioshaq/radioshaq/constants.py @@ -0,0 +1,19 @@ +"""Shared constants for RadioShaq (e.g.
ASR language support).""" + +from __future__ import annotations + +import re + +# E.164 phone validation: optional +, 10–15 digits (shared across emergency, relay, callsigns) +E164_PATTERN: re.Pattern[str] = re.compile(r"^\+?[0-9]{10,15}$") + +# Regions that require explicit consent for notify-on-relay (§8.1, §8.3) +EXPLICIT_CONSENT_REGIONS: frozenset[str] = frozenset( + ("CEPT", "FR", "UK", "ES", "BE", "CH", "LU", "MC", "ZA") +) + +# ASR (Voxtral) languages supported for UI and validation (en, fr, es) +ASR_SUPPORTED_LANGUAGE_CODES: tuple[str, ...] = ("en", "fr", "es") +ASR_LANGUAGE_AUTO = "auto" +# All valid asr_language values: codes + auto for detection +ASR_LANGUAGE_VALUES: tuple[str, ...] = (*ASR_SUPPORTED_LANGUAGE_CODES, ASR_LANGUAGE_AUTO) diff --git a/radioshaq/radioshaq/database/__init__.py b/radioshaq/radioshaq/database/__init__.py index 9071b75..e6515d2 100644 --- a/radioshaq/radioshaq/database/__init__.py +++ b/radioshaq/radioshaq/database/__init__.py @@ -1,4 +1,4 @@ -"""Database layer for SHAKODS. +"""Database layer for RadioShaq. Provides SQLAlchemy models, PostGIS integration, and data access layers. """ diff --git a/radioshaq/radioshaq/database/dynamodb.py b/radioshaq/radioshaq/database/dynamodb.py index 2dbb2fa..88a64b9 100644 --- a/radioshaq/radioshaq/database/dynamodb.py +++ b/radioshaq/radioshaq/database/dynamodb.py @@ -74,7 +74,7 @@ def _get() -> dict | None: try: resp = self._table.get_item(Key={"session_id": session_id}) except ClientError as e: - logger.warning("DynamoDB get_session_state failed: %s", e) + logger.warning("DynamoDB get_session_state failed: {}", e) return None return resp.get("Item") diff --git a/radioshaq/radioshaq/database/models.py b/radioshaq/radioshaq/database/models.py index 56052ae..3d72a47 100644 --- a/radioshaq/radioshaq/database/models.py +++ b/radioshaq/radioshaq/database/models.py @@ -1,4 +1,4 @@ -"""SQLAlchemy models for SHAKODS database. +"""SQLAlchemy models for RadioShaq database. 
Defines the core database schema with PostGIS support for location-based operations and ham radio coordination. @@ -119,6 +119,20 @@ class RegisteredCallsign(Base): preferred_bands: Mapped[list | None] = mapped_column(JSON, nullable=True) # e.g. ["40m", "2m"] last_band: Mapped[str | None] = mapped_column(String(20), nullable=True) + # Contact preferences: notify when a message is left for this callsign (§8.1, §8.3) + notify_sms_phone: Mapped[str | None] = mapped_column(String(20), nullable=True) # E.164 + notify_whatsapp_phone: Mapped[str | None] = mapped_column(String(20), nullable=True) # E.164 + notify_on_relay: Mapped[bool] = mapped_column(nullable=False, default=False) + notify_consent_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), nullable=True) + notify_consent_source: Mapped[str | None] = mapped_column( + String(20), + nullable=True, + doc="api / web / voice", + ) + notify_opt_out_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), nullable=True) + notify_opt_out_at_sms: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), nullable=True) + notify_opt_out_at_whatsapp: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), nullable=True) + def to_dict(self) -> dict[str, Any]: """Convert to dictionary.""" return { @@ -128,6 +142,14 @@ def to_dict(self) -> dict[str, Any]: "created_at": self.created_at.isoformat() if self.created_at else None, "preferred_bands": self.preferred_bands, "last_band": self.last_band, + "notify_sms_phone": self.notify_sms_phone, + "notify_whatsapp_phone": self.notify_whatsapp_phone, + "notify_on_relay": self.notify_on_relay, + "notify_consent_at": self.notify_consent_at.isoformat() if self.notify_consent_at else None, + "notify_consent_source": self.notify_consent_source, + "notify_opt_out_at": self.notify_opt_out_at.isoformat() if self.notify_opt_out_at else None, + "notify_opt_out_at_sms": self.notify_opt_out_at_sms.isoformat() if self.notify_opt_out_at_sms else None, + 
"notify_opt_out_at_whatsapp": self.notify_opt_out_at_whatsapp.isoformat() if self.notify_opt_out_at_whatsapp else None, } @@ -288,7 +310,11 @@ class CoordinationEvent(Base): ) def to_dict(self) -> dict[str, Any]: - """Convert to dictionary.""" + """Convert to dictionary. Redacts emergency_contact_phone in extra_data for privacy.""" + extra = dict(self.extra_data) if self.extra_data else {} + if "emergency_contact_phone" in extra and extra["emergency_contact_phone"]: + raw = str(extra["emergency_contact_phone"]) + extra["emergency_contact_phone"] = "****" + raw[-4:] if len(raw) >= 4 else "****" return { "id": self.id, "event_type": self.event_type, @@ -301,6 +327,7 @@ def to_dict(self) -> dict[str, Any]: "priority": self.priority, "notes": self.notes, "created_at": self.created_at.isoformat() if self.created_at else None, + "extra_data": extra, } diff --git a/radioshaq/radioshaq/database/postgres_gis.py b/radioshaq/radioshaq/database/postgres_gis.py index 972e285..91a336e 100644 --- a/radioshaq/radioshaq/database/postgres_gis.py +++ b/radioshaq/radioshaq/database/postgres_gis.py @@ -1,4 +1,4 @@ -"""PostGIS database manager for SHAKODS. +"""PostGIS database manager for RadioShaq. Provides high-level operations for geographic data storage and spatial queries using SQLAlchemy and PostGIS. 
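The last-4 masking added to `CoordinationEvent.to_dict()` above leans on Python's conditional-expression precedence: string concatenation binds tighter than `if`/`else`, so no parentheses are needed. A standalone sketch of that technique (the helper name `redact_phone` is hypothetical, not part of the diff):

```python
def redact_phone(value: object) -> str:
    """Mask a phone number for API output, keeping only the last four characters."""
    raw = str(value)
    # Concatenation binds tighter than the conditional expression, so this
    # parses as: ("****" + raw[-4:]) if len(raw) >= 4 else "****"
    return "****" + raw[-4:] if len(raw) >= 4 else "****"


if __name__ == "__main__":
    print(redact_phone("+15551234567"))  # ****4567
    print(redact_phone("911"))           # ****
```

Redacting inside `to_dict()` keeps every API-facing serialization safe by default, while internal callers that need the raw value can go through the separate unredacted accessor.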
@@ -10,7 +10,7 @@ from typing import Any, Sequence from geoalchemy2.functions import ST_DWithin, ST_GeogFromText -from sqlalchemy import delete, select, text +from sqlalchemy import delete, select, text, update from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine from sqlalchemy.orm import sessionmaker @@ -39,11 +39,12 @@ class PostGISManager: await manager.init_db() # Store operator location - location_id = await manager.store_operator_location( + loc = await manager.store_operator_location( callsign="N0CALL", latitude=40.7128, longitude=-74.0060, ) + location_id = loc["id"] # Find nearby operators nearby = await manager.find_operators_nearby( @@ -77,7 +78,7 @@ async def init_db(self) -> None: Creates: - PostGIS extension - - All SHAKODS tables + - All RadioShaq tables - Spatial indexes """ async with self.engine.begin() as conn: @@ -106,7 +107,7 @@ async def store_operator_location( accuracy_meters: float | None = None, source: str = "manual", session_id: str | None = None, - ) -> int: + ) -> dict[str, Any]: """Store operator location with GIS data. Args: @@ -119,13 +120,14 @@ async def store_operator_location( session_id: Optional session reference Returns: - Location record ID + Dict with id, callsign, latitude, longitude, source, timestamp, etc. (avoids TOCTOU refetch). 
""" + callsign_upper = callsign.upper() async with self.async_session() as session: # Create Point geometry in WGS 84 # Note: PostGIS Point format is (longitude, latitude) location = OperatorLocation( - callsign=callsign.upper(), + callsign=callsign_upper, location=f"SRID=4326;POINT({longitude} {latitude})", altitude_meters=altitude_meters, accuracy_meters=accuracy_meters, @@ -134,7 +136,18 @@ async def store_operator_location( ) session.add(location) await session.commit() - return location.id + await session.refresh(location) + return { + "id": location.id, + "callsign": location.callsign, + "latitude": latitude, + "longitude": longitude, + "altitude_meters": location.altitude_meters, + "accuracy_meters": location.accuracy_meters, + "source": location.source, + "timestamp": location.timestamp.isoformat() if location.timestamp else None, + "session_id": location.session_id, + } async def find_operators_nearby( self, @@ -164,14 +177,16 @@ async def find_operators_nearby( # Build point geometry point = f"SRID=4326;POINT({longitude} {latitude})" - # Base query + # Base query (include id, lat/lon, distance for mapping; id for stable marker keys) query = select( + OperatorLocation.id, OperatorLocation.callsign, OperatorLocation.timestamp, OperatorLocation.altitude_meters, OperatorLocation.source, OperatorLocation.session_id, - # Calculate distance using geography type (accurate in meters) + text("ST_Y(location::geometry)").label("latitude"), + text("ST_X(location::geometry)").label("longitude"), text( "ST_Distance(location::geography, ST_GeogFromText(:point))" ).label("distance_meters"), @@ -184,25 +199,30 @@ async def find_operators_nearby( ) ) - # Add recent-only filter + # Add recent-only filter: use make_interval so recent_hours is a proper bind (no interpolation) if recent_only: query = query.where( - text( - f"timestamp > NOW() - INTERVAL '{recent_hours} hours'" - ) + text("timestamp > NOW() - make_interval(hours => :recent_hours)") ) # Order by most recent first 
query = query.order_by(OperatorLocation.timestamp.desc()) query = query.limit(max_results) - # Execute with point parameter - result = await session.execute(query, {"point": point}) + # Execute with bound parameters + params: dict[str, Any] = {"point": point} + if recent_only: + params["recent_hours"] = recent_hours + result = await session.execute(query, params) return [ { + "id": row.id, "callsign": row.callsign, + "latitude": float(row.latitude) if row.latitude is not None else None, + "longitude": float(row.longitude) if row.longitude is not None else None, "timestamp": row.timestamp.isoformat() if row.timestamp else None, + "last_seen_at": row.timestamp.isoformat() if row.timestamp else None, "altitude_meters": row.altitude_meters, "source": row.source, "session_id": row.session_id, @@ -236,7 +256,44 @@ async def get_latest_location( if location: return location.to_dict() return None - + + async def get_latest_location_decoded( + self, + callsign: str, + ) -> dict[str, Any] | None: + """Get the most recent location for a callsign with explicit latitude/longitude. + + Returns a dict with id, callsign, latitude, longitude, source, timestamp, + altitude_meters, accuracy_meters, session_id (no raw geometry). 
+ """ + async with self.async_session() as session: + query = text(""" + SELECT id, callsign, + ST_Y(location::geometry) AS latitude, + ST_X(location::geometry) AS longitude, + altitude_meters, accuracy_meters, source, timestamp, session_id + FROM operator_locations + WHERE callsign = :callsign + ORDER BY timestamp DESC + LIMIT 1 + """) + result = await session.execute(query, {"callsign": callsign.upper()}) + row = result.first() + if not row: + return None + m = row._mapping + return { + "id": m["id"], + "callsign": m["callsign"], + "latitude": float(m["latitude"]), + "longitude": float(m["longitude"]), + "altitude_meters": m["altitude_meters"], + "accuracy_meters": m["accuracy_meters"], + "source": m["source"], + "timestamp": m["timestamp"].isoformat() if m["timestamp"] else None, + "session_id": m["session_id"], + } + async def store_transcript( self, session_id: str, @@ -417,6 +474,7 @@ async def store_coordination_event( latitude: float | None = None, longitude: float | None = None, task_id: str | None = None, + extra_data: dict | None = None, ) -> int: """Store a coordination event. @@ -433,6 +491,7 @@ async def store_coordination_event( latitude: Optional meeting point latitude longitude: Optional meeting point longitude task_id: Optional orchestrator task ID + extra_data: Optional JSON (e.g. emergency_contact_phone, emergency_contact_channel, approved_by, sent_at) Returns: Event record ID @@ -448,44 +507,172 @@ async def store_coordination_event( status=status, priority=priority, notes=notes, - location=f"SRID=4326;POINT({longitude} {latitude})" if latitude and longitude else None, + location=f"SRID=4326;POINT({longitude} {latitude})" if latitude is not None and longitude is not None else None, task_id=task_id, + extra_data=extra_data, ) session.add(event) await session.commit() return event.id - + + async def get_coordination_event_by_id(self, event_id: int) -> dict[str, Any] | None: + """Get a single coordination event by id. Returns None if not found. 
Uses to_dict() (redacted for API).""" + async with self.async_session() as session: + result = await session.execute(select(CoordinationEvent).where(CoordinationEvent.id == event_id)) + row = result.scalar_one_or_none() + return row.to_dict() if row else None + + async def get_coordination_event_by_id_raw(self, event_id: int) -> dict[str, Any] | None: + """Get a coordination event with unredacted extra_data (for internal use e.g. approve handler).""" + async with self.async_session() as session: + result = await session.execute(select(CoordinationEvent).where(CoordinationEvent.id == event_id)) + row = result.scalar_one_or_none() + if not row: + return None + return { + "id": row.id, + "event_type": row.event_type, + "initiator_callsign": row.initiator_callsign, + "target_callsign": row.target_callsign, + "scheduled_time": row.scheduled_time.isoformat() if row.scheduled_time else None, + "frequency_hz": row.frequency_hz, + "mode": row.mode, + "status": row.status, + "priority": row.priority, + "notes": row.notes, + "created_at": row.created_at.isoformat() if row.created_at else None, + "extra_data": dict(row.extra_data) if row.extra_data else {}, + } + + async def update_coordination_event( + self, + event_id: int, + *, + status: str | None = None, + extra_data: dict | None = None, + ) -> bool: + """Update a coordination event's status and/or extra_data. Returns True if updated.""" + async with self.async_session() as session: + result = await session.execute(select(CoordinationEvent).where(CoordinationEvent.id == event_id)) + row = result.scalar_one_or_none() + if not row: + return False + if status is not None: + row.status = status + if extra_data is not None: + existing = dict(row.extra_data or {}) + existing.update(extra_data) + row.extra_data = existing + await session.commit() + return True + + async def claim_emergency_event_pending(self, event_id: int) -> int | None: + """ + Atomically set status to 'approving' only when status is 'pending'. 
+ Returns event_id if claimed, None if already processed or not found. + Prevents TOCTOU: only one concurrent approval can succeed. + """ + async with self.async_session() as session: + stmt = ( + update(CoordinationEvent) + .where( + CoordinationEvent.id == event_id, + CoordinationEvent.status == "pending", + ) + .values(status="approving") + .returning(CoordinationEvent.id) + ) + result = await session.execute(stmt) + row = result.one_or_none() + await session.commit() + return row[0] if row else None + async def get_pending_coordination_events( self, callsign: str | None = None, + event_type: str | None = None, max_results: int = 100, + status: str | None = "pending", ) -> list[dict[str, Any]]: - """Get pending coordination events. - + """Get coordination events, optionally filtered by status. + Args: callsign: Filter by callsign (initiator or target) + event_type: Filter by event_type (e.g. emergency) max_results: Maximum results - + status: Filter by status (default "pending"). Pass None to get all statuses. + Returns: List of event dicts """ async with self.async_session() as session: query = ( select(CoordinationEvent) - .where(CoordinationEvent.status == "pending") .order_by(CoordinationEvent.priority, CoordinationEvent.scheduled_time) .limit(max_results) ) - + if status is not None: + query = query.where(CoordinationEvent.status == status) if callsign: callsign_upper = callsign.upper() query = query.where( (CoordinationEvent.initiator_callsign == callsign_upper) | (CoordinationEvent.target_callsign == callsign_upper) ) - + if event_type: + query = query.where(CoordinationEvent.event_type == event_type) + result = await session.execute(query) return [row.to_dict() for row in result.scalars()] + + async def get_emergency_events_with_locations( + self, + since: str | None = None, + status: str | None = None, + limit: int = 100, + ) -> list[dict[str, Any]]: + """Get coordination events with event_type=emergency that have a location, with lat/lon decoded. 
+ + Returns list of dicts with id, type, latitude, longitude, initiator_callsign, + target_callsign, status, created_at. Events without a location are excluded. + """ + async with self.async_session() as session: + conditions = [ + "event_type = 'emergency'", + "location IS NOT NULL", + ] + params: dict[str, Any] = {"limit": limit} + if since is not None: + conditions.append("created_at >= :since") + params["since"] = since + if status is not None: + conditions.append("status = :status") + params["status"] = status + where_clause = " AND ".join(conditions) + q = text(f""" + SELECT id, event_type, + ST_Y(location::geometry) AS latitude, + ST_X(location::geometry) AS longitude, + initiator_callsign, target_callsign, status, created_at + FROM coordination_events + WHERE {where_clause} + ORDER BY created_at DESC + LIMIT :limit + """) + result = await session.execute(q, params) + return [ + { + "id": row.id, + "type": row.event_type, + "latitude": float(row.latitude), + "longitude": float(row.longitude), + "initiator_callsign": row.initiator_callsign, + "target_callsign": row.target_callsign, + "status": row.status, + "created_at": row.created_at.isoformat() if row.created_at else None, + } + for row in result + ] async def save_session_state( self, @@ -636,6 +823,120 @@ async def update_callsign_preferred_bands(self, callsign: str, preferred_bands: await session.commit() return True + async def get_contact_preferences(self, callsign: str) -> dict[str, Any] | None: + """Get contact preferences for a registered callsign. 
Returns None if not found.""" + normalized = callsign.strip().upper() + if not normalized: + return None + async with self.async_session() as session: + result = await session.execute( + select(RegisteredCallsign).where(RegisteredCallsign.callsign == normalized) + ) + row = result.scalar_one_or_none() + if not row: + return None + # Per-channel opt-out; legacy notify_opt_out_at treats as both channels opted out + opt_out_sms = row.notify_opt_out_at_sms or row.notify_opt_out_at + opt_out_wa = row.notify_opt_out_at_whatsapp or row.notify_opt_out_at + return { + "callsign": row.callsign, + "notify_sms_phone": row.notify_sms_phone, + "notify_whatsapp_phone": row.notify_whatsapp_phone, + "notify_on_relay": row.notify_on_relay, + "notify_consent_at": row.notify_consent_at.isoformat() if row.notify_consent_at else None, + "notify_consent_source": row.notify_consent_source, + "notify_opt_out_at": row.notify_opt_out_at.isoformat() if row.notify_opt_out_at else None, + "notify_opt_out_at_sms": opt_out_sms.isoformat() if opt_out_sms else None, + "notify_opt_out_at_whatsapp": opt_out_wa.isoformat() if opt_out_wa else None, + } + + async def set_contact_preferences( + self, + callsign: str, + *, + notify_sms_phone: str | None = None, + notify_whatsapp_phone: str | None = None, + notify_on_relay: bool | None = None, + consent_at: datetime | None = None, + consent_source: str | None = None, + ) -> bool: + """Set contact preferences for a registered callsign. 
Returns True if updated.""" + normalized = callsign.strip().upper() + if not normalized: + return False + async with self.async_session() as session: + result = await session.execute( + select(RegisteredCallsign).where(RegisteredCallsign.callsign == normalized) + ) + row = result.scalar_one_or_none() + if not row: + return False + if notify_sms_phone is not None: + row.notify_sms_phone = notify_sms_phone.strip() or None + if row.notify_sms_phone: + row.notify_opt_out_at_sms = None # Re-enabling SMS clears opt-out for that channel + if notify_whatsapp_phone is not None: + row.notify_whatsapp_phone = notify_whatsapp_phone.strip() or None + if row.notify_whatsapp_phone: + row.notify_opt_out_at_whatsapp = None # Re-enabling WhatsApp clears opt-out for that channel + if notify_on_relay is not None: + row.notify_on_relay = notify_on_relay + if consent_at is not None: + row.notify_consent_at = consent_at + if consent_source is not None: + row.notify_consent_source = consent_source.strip() or None + # Clear generic opt-out sentinel when both per-channel opt-outs are cleared (full re-subscription) + if row.notify_opt_out_at_sms is None and row.notify_opt_out_at_whatsapp is None: + row.notify_opt_out_at = None + await session.commit() + return True + + async def record_opt_out(self, callsign: str, channel: str) -> bool: + """Record opt-out for a callsign (channel 'sms' or 'whatsapp'). Clears that channel's phone and sets per-channel opt_out_at. 
Returns True if updated.""" + normalized = callsign.strip().upper() + if not normalized or channel not in ("sms", "whatsapp"): + return False + async with self.async_session() as session: + result = await session.execute( + select(RegisteredCallsign).where(RegisteredCallsign.callsign == normalized) + ) + row = result.scalar_one_or_none() + if not row: + return False + now = datetime.now(timezone.utc) + if channel == "sms": + row.notify_opt_out_at_sms = now + row.notify_sms_phone = None + else: + row.notify_opt_out_at_whatsapp = now + row.notify_whatsapp_phone = None + if row.notify_opt_out_at is None: + row.notify_opt_out_at = now + await session.commit() + return True + + async def record_opt_out_by_phone(self, phone: str, channel: str) -> bool: + """Record opt-out by phone number. Opts out all callsigns with this phone. Returns True if at least one row was updated.""" + phone = (phone or "").strip() + if not phone or channel not in ("sms", "whatsapp"): + return False + async with self.async_session() as session: + col = RegisteredCallsign.notify_sms_phone if channel == "sms" else RegisteredCallsign.notify_whatsapp_phone + opt_out_col = ( + RegisteredCallsign.notify_opt_out_at_sms if channel == "sms" else RegisteredCallsign.notify_opt_out_at_whatsapp + ) + now = datetime.now(timezone.utc) + stmt = ( + update(RegisteredCallsign) + .where(col == phone) + .values({opt_out_col: now, col: None, RegisteredCallsign.notify_opt_out_at: now}) + .returning(RegisteredCallsign.id) + ) + result = await session.execute(stmt) + updated_ids = result.scalars().all() + await session.commit() + return len(updated_ids) > 0 + async def unregister_callsign(self, callsign: str) -> bool: """Remove a callsign from the registry. 
Returns True if a row was deleted.""" normalized = callsign.strip().upper() diff --git a/radioshaq/radioshaq/database/transcripts.py b/radioshaq/radioshaq/database/transcripts.py index 652e484..ee94b3a 100644 --- a/radioshaq/radioshaq/database/transcripts.py +++ b/radioshaq/radioshaq/database/transcripts.py @@ -60,6 +60,11 @@ class TranscriptStorage: def __init__(self, db: TranscriptStoreProtocol | None = None): self._db = db + @property + def db(self) -> TranscriptStoreProtocol | None: + """Public accessor to the database manager (e.g. for emergency coordination events).""" + return self._db + async def store( self, session_id: str, diff --git a/radioshaq/radioshaq/license_acceptance.py b/radioshaq/radioshaq/license_acceptance.py index d9fda33..953c697 100644 --- a/radioshaq/radioshaq/license_acceptance.py +++ b/radioshaq/radioshaq/license_acceptance.py @@ -17,13 +17,19 @@ def _license_path() -> str: """Best-effort path or URL for the GPL license text. - Prefers a repo-local LICENSE for editable installs, then falls back to the - wheel's dist-info license file when available, and finally a canonical URL. + Prefers a repo-local LICENSE for editable installs (monorepo root, then + radioshaq dir), then falls back to the wheel's dist-info license file when + available, and finally a canonical URL. """ - # Editable / source checkout: repo root contains LICENSE.md next to package dir - repo_candidate = Path(__file__).resolve().parent.parent / "LICENSE.md" - if repo_candidate.exists(): - return str(repo_candidate) + base = Path(__file__).resolve().parent.parent + # Monorepo root (e.g. .../monorepo/LICENSE.md) + repo_root_candidate = base.parent / "LICENSE.md" + if repo_root_candidate.exists(): + return str(repo_root_candidate) + # Radioshaq package dir (e.g. 
.../monorepo/radioshaq/LICENSE.md) + pkg_candidate = base / "LICENSE.md" + if pkg_candidate.exists(): + return str(pkg_candidate) # Regular wheel install: LICENSE.md is included via license-files in dist-info try: diff --git a/radioshaq/radioshaq/listener/band_listener.py b/radioshaq/radioshaq/listener/band_listener.py index 9783204..764ed73 100644 --- a/radioshaq/radioshaq/listener/band_listener.py +++ b/radioshaq/radioshaq/listener/band_listener.py @@ -8,6 +8,7 @@ from loguru import logger +from radioshaq.compliance_plugin import get_band_plan_source_for_config from radioshaq.config.schema import Config from radioshaq.radio.bands import BAND_PLANS from radioshaq.radio.injection import get_injection_queue @@ -23,9 +24,10 @@ def _resolve_bands(config: Config) -> list[str]: return [] -def _band_frequency_and_mode(band: str) -> tuple[float, str]: +def _band_frequency_and_mode(band: str, band_plans: dict | None = None) -> tuple[float, str]: """Center frequency (Hz) and default mode for a band.""" - plan = BAND_PLANS.get(band) + plans = band_plans if band_plans is not None else BAND_PLANS + plan = plans.get(band) if not plan: return 0.0, "FM" freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2 @@ -72,7 +74,7 @@ async def _process_received_messages( metadata=metadata, ) except Exception as e: - logger.warning("Band listener store failed: %s", e) + logger.warning("Band listener store failed: {}", e) if inject: try: queue = get_injection_queue() @@ -85,7 +87,7 @@ async def _process_received_messages( destination_callsign=None, ) except Exception as e: - logger.warning("Band listener inject failed: %s", e) + logger.warning("Band listener inject failed: {}", e) if publish_to_bus and message_bus: try: from radioshaq.orchestrator.radio_ingestion import radio_received_to_inbound @@ -99,9 +101,9 @@ async def _process_received_messages( ) ok = await message_bus.publish_inbound(inbound) if not ok: - logger.debug("Bus full, dropped radio_rx message for %s", 
band) + logger.debug("Bus full, dropped radio_rx message for {}", band) except Exception as e: - logger.warning("Band listener publish_inbound failed: %s", e) + logger.warning("Band listener publish_inbound failed: {}", e) async def _monitor_band_loop( @@ -114,13 +116,14 @@ async def _monitor_band_loop( publish_to_bus: bool, stop_event: asyncio.Event, *, + band_plans: dict | None = None, store_enabled: bool = True, store_min_length: int = 0, ) -> None: """Single-band loop: monitor for cycle_seconds, process messages, repeat until stop.""" - freq, mode = _band_frequency_and_mode(band) + freq, mode = _band_frequency_and_mode(band, band_plans) if freq <= 0: - logger.warning("Band %s has no plan, skipping", band) + logger.warning("Band {} has no plan, skipping", band) return while not stop_event.is_set(): try: @@ -141,7 +144,7 @@ async def _monitor_band_loop( except asyncio.CancelledError: break except Exception as e: - logger.exception("Band listener %s error: %s", band, e) + logger.exception("Band listener {} error: {}", band, e) await asyncio.sleep(0.2) @@ -178,6 +181,10 @@ async def run_band_listener( concurrent = getattr(radio, "listener_concurrent_bands", True) store_enabled = storage is not None and getattr(radio, "band_listener_store", True) store_min_length = getattr(radio, "band_listener_store_min_length", 0) or 0 + band_plans = get_band_plan_source_for_config( + radio.restricted_bands_region, + getattr(radio, "band_plan_region", None), + ) if concurrent: tasks = [ @@ -191,6 +198,7 @@ async def run_band_listener( inject_into_queue, publish_to_bus, stop_event, + band_plans=band_plans, store_enabled=store_enabled, store_min_length=store_min_length, ) @@ -218,6 +226,7 @@ async def run_band_listener( inject_into_queue, publish_to_bus, stop_event, + band_plans=band_plans, store_enabled=store_enabled, store_min_length=store_min_length, ) diff --git a/radioshaq/radioshaq/listener/relay_delivery.py b/radioshaq/radioshaq/listener/relay_delivery.py index 
fa6551e..089442f 100644
--- a/radioshaq/radioshaq/listener/relay_delivery.py
+++ b/radioshaq/radioshaq/listener/relay_delivery.py
@@ -1,4 +1,4 @@
-"""Relay delivery worker: process pending deliver_at transcripts (inject and optionally TX on target band)."""
+"""Relay delivery worker: process pending deliver_at transcripts (radio inject/TX or SMS/WhatsApp via bus)."""
 
 from __future__ import annotations
 
@@ -7,10 +7,16 @@
 from loguru import logger
 
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
 from radioshaq.radio.bands import BAND_PLANS
 from radioshaq.radio.injection import get_injection_queue
 
 
+def _is_consent_valid_for_region(region: str | None, prefs: dict[str, Any]) -> bool:
+    """True if contact preferences have a valid consent record (API enforces consent for explicit regions before setting notify_consent_at)."""
+    return bool(prefs.get("notify_consent_at"))
+
+
 async def run_relay_delivery_worker(
     db: Any,
     config: Any,
@@ -18,10 +24,13 @@ async def run_relay_delivery_worker(
     stop_event: asyncio.Event,
     interval_seconds: float = 60.0,
     radio_tx_agent: Any = None,
+    message_bus: Any = None,
 ) -> None:
     """
-    Periodically query transcripts with deliver_at <= now and delivery_status != delivered;
-    for each: inject to target band, optionally call radio_tx, then mark delivered.
+    Periodically query transcripts with deliver_at <= now and delivery_status != delivered.
+    - If extra_data has delivery_channel sms or whatsapp: publish OutboundMessage to message_bus
+      (outbound dispatcher will send via SMS/WhatsApp).
+    - Else: inject to target band and optionally call radio_tx (existing behavior).
 
     db must have search_pending_relay_deliveries() and mark_transcript_delivery_done(id).
     """
@@ -30,6 +39,14 @@ async def run_relay_delivery_worker(
         return
     radio_cfg = getattr(config, "radio", None)
     relay_tx = getattr(radio_cfg, "relay_tx_target_band", False) if radio_cfg else False
+    band_plans = (
+        get_band_plan_source_for_config(
+            getattr(radio_cfg, "restricted_bands_region", "FCC"),
+            getattr(radio_cfg, "band_plan_region", None),
+        )
+        if radio_cfg
+        else BAND_PLANS
+    )
 
     while not stop_event.is_set():
         try:
@@ -37,47 +54,139 @@ async def run_relay_delivery_worker(
             for t in pending:
                 tid = t.get("id")
                 extra = t.get("extra_data") or {}
-                band = extra.get("band") or extra.get("relay_from_band") or "unknown"
                 text = t.get("transcript_text") or ""
                 source = t.get("source_callsign") or "UNKNOWN"
                 dest = t.get("destination_callsign")
-                mode = t.get("mode") or "FM"
-                freq = t.get("frequency_hz") or 0.0
-                if not freq and band:
-                    plan = BAND_PLANS.get(band)
-                    if plan:
-                        freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2
-                queue = get_injection_queue()
-                queue.inject_message(
-                    text=text,
-                    band=band,
-                    frequency_hz=freq,
-                    mode=mode,
-                    source_callsign=source,
-                    destination_callsign=dest,
-                )
-                if relay_tx and radio_tx_agent and hasattr(radio_tx_agent, "execute"):
-                    try:
-                        await radio_tx_agent.execute({
-                            "transmission_type": "voice",
-                            "frequency": freq,
-                            "message": text,
-                            "mode": mode,
-                        })
-                    except Exception as e:
-                        logger.warning("Relay delivery radio_tx failed for transcript %s: %s", tid, e)
-                ok = await db.mark_transcript_delivery_done(tid)
-                if ok:
-                    try:
-                        from radioshaq.api.routes.metrics import increment_relay_deliveries
-                        increment_relay_deliveries()
-                    except Exception:
-                        pass
+                delivery_channel = extra.get("delivery_channel")
+                destination_phone = extra.get("destination_phone")
+
+                mark_delivered = False
+                if delivery_channel in ("sms", "whatsapp"):
+                    if destination_phone and message_bus and hasattr(message_bus, "publish_outbound"):
+                        from radioshaq.vendor.nanobot.bus.events import OutboundMessage
+                        ok_pub = await message_bus.publish_outbound(
+                            OutboundMessage(
+                                channel=delivery_channel,
+                                chat_id=destination_phone,
+                                content=text,
+                                reply_to=None,
+                                media=[],
+                                metadata={"relay_transcript_id": tid, "source_callsign": source},
+                            )
+                        )
+                        if not ok_pub:
+                            logger.warning(
+                                "Relay delivery: outbound queue full for transcript {} ({})",
+                                tid,
+                                delivery_channel,
+                            )
+                        else:
+                            mark_delivered = True
+                    else:
+                        logger.warning(
+                            "Relay delivery: cannot deliver transcript {} via {} (bus unavailable or no phone)",
+                            tid,
+                            delivery_channel,
+                        )
+                        # Do NOT fall through to radio injection; leave undelivered for retry
                 else:
-                    logger.warning("Could not mark transcript %s delivered", tid)
+                    band = extra.get("band") or extra.get("relay_from_band") or "unknown"
+                    mode = t.get("mode") or "FM"
+                    freq = t.get("frequency_hz") or 0.0
+                    if not freq and band and band in band_plans:
+                        plan = band_plans.get(band)
+                        if plan:
+                            freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2
+
+                    queue = get_injection_queue()
+                    queue.inject_message(
+                        text=text,
+                        band=band,
+                        frequency_hz=freq,
+                        mode=mode,
+                        source_callsign=source,
+                        destination_callsign=dest,
+                    )
+                    if relay_tx and radio_tx_agent and hasattr(radio_tx_agent, "execute") and freq > 0:
+                        try:
+                            await radio_tx_agent.execute({
+                                "transmission_type": "voice",
+                                "frequency": freq,
+                                "message": text,
+                                "mode": mode,
+                            })
+                        except Exception as e:
+                            logger.warning("Relay delivery radio_tx failed for transcript {}: {}", tid, e)
+                    mark_delivered = True
+
+                if mark_delivered:
+                    ok = await db.mark_transcript_delivery_done(tid)
+                    if ok:
+                        try:
+                            from radioshaq.api.routes.metrics import increment_relay_deliveries
+                            increment_relay_deliveries()
+                        except Exception:
+                            pass
+                        # Notify-on-relay (§8.3): only after confirmed delivery, for radio only; if destination has notify preferences, send short SMS/WhatsApp
+                        if (
+                            delivery_channel not in ("sms", "whatsapp")
+                            and dest
+                            and message_bus
+                            and hasattr(message_bus, "publish_outbound")
+                            and hasattr(db, "get_contact_preferences")
+                        ):
+                            try:
+                                prefs = await db.get_contact_preferences(dest)
+                                if not prefs:
+                                    continue
+                                if not prefs.get("notify_on_relay"):
+                                    continue
+                                region = getattr(radio_cfg, "restricted_bands_region", None) if radio_cfg else None
+                                if not _is_consent_valid_for_region(region, prefs):
+                                    continue
+                                sms_phone = prefs.get("notify_sms_phone") if not prefs.get("notify_opt_out_at_sms") else None
+                                whatsapp_phone = prefs.get("notify_whatsapp_phone") if not prefs.get("notify_opt_out_at_whatsapp") else None
+                                if not sms_phone and not whatsapp_phone:
+                                    continue
+                                band = extra.get("band") or extra.get("relay_from_band") or "radio"
+                                snippet = (text or "")[:80].replace("\n", " ")
+                                if len(text or "") > 80:
+                                    snippet += "..."
+                                notify_text = f"You have a new message on {band} from {source}: {snippet}"
+                                from radioshaq.vendor.nanobot.bus.events import OutboundMessage
+                                for ch, phone in (("sms", sms_phone), ("whatsapp", whatsapp_phone)):
+                                    if not phone:
+                                        continue
+                                    ok_pub = await message_bus.publish_outbound(
+                                        OutboundMessage(
+                                            channel=ch,
+                                            chat_id=phone,
+                                            content=notify_text,
+                                            reply_to=None,
+                                            media=[],
+                                            metadata={
+                                                "notify_on_relay": True,
+                                                "destination_callsign": dest,
+                                                "relay_transcript_id": tid,
+                                            },
+                                        )
+                                    )
+                                    if ok_pub:
+                                        logger.info(
+                                            "Notify-on-relay sent to {} for callsign {} (transcript {})",
+                                            ch,
+                                            dest,
+                                            tid,
+                                        )
+                                    else:
+                                        logger.warning("Notify-on-relay queue full for {} {}", ch, dest)
+                            except Exception as e:
+                                logger.warning("Notify-on-relay failed for dest {}: {}", dest, e)
+                else:
+                    logger.warning("Could not mark transcript {} delivered", tid)
         except asyncio.CancelledError:
             break
         except Exception as e:
-            logger.exception("Relay delivery worker error: %s", e)
+            logger.exception("Relay delivery worker error: {}", e)
         await asyncio.sleep(interval_seconds)
diff --git a/radioshaq/radioshaq/listener/voice_listener.py b/radioshaq/radioshaq/listener/voice_listener.py
index 76e67e5..596d6b2 100644
--- a/radioshaq/radioshaq/listener/voice_listener.py
+++ b/radioshaq/radioshaq/listener/voice_listener.py
@@ -11,6 +11,7 @@
 
 from loguru import logger
 
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
 from radioshaq.config.schema import Config
 from radioshaq.radio.bands import BAND_PLANS
 
@@ -28,9 +29,10 @@ def _resolve_voice_band(config: Config) -> str | None:
     return None
 
 
-def _voice_frequency_and_mode(band: str) -> tuple[float, str]:
+def _voice_frequency_and_mode(band: str, band_plans: dict | None = None) -> tuple[float, str]:
     """Center frequency (Hz) and default mode for a band."""
-    plan = BAND_PLANS.get(band)
+    plans = band_plans if band_plans is not None else BAND_PLANS
+    plan = plans.get(band)
     if not plan:
         return 0.0, "FM"
     freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2
@@ -58,15 +60,24 @@ async def run_voice_listener(
     if not band:
         logger.debug("Voice listener: no default_band or listen_bands, exiting")
         return
-    freq, mode = _voice_frequency_and_mode(band)
+    radio = getattr(config, "radio", None)
+    band_plans = (
+        get_band_plan_source_for_config(
+            getattr(radio, "restricted_bands_region", "FCC"),
+            getattr(radio, "band_plan_region", None),
+        )
+        if radio
+        else BAND_PLANS
+    )
+    freq, mode = _voice_frequency_and_mode(band, band_plans)
     if freq <= 0:
-        logger.warning("Voice listener: band %s has no plan, skipping", band)
+        logger.warning("Voice listener: band {} has no plan, skipping", band)
         return
     if not radio_rx_audio_agent:
         logger.warning("Voice listener: no radio_rx_audio agent, exiting")
         return
-    logger.info("Voice listener started for band %s (%.0f Hz %s)", band, freq, mode)
+    logger.info("Voice listener started for band {} ({:.0f} Hz {})", band, freq, mode)
     while not stop_event.is_set():
         try:
             await radio_rx_audio_agent.execute(
@@ -81,7 +92,7 @@ async def run_voice_listener(
         except asyncio.CancelledError:
             break
         except Exception as e:
-            logger.exception("Voice listener error: %s", e)
+            logger.exception("Voice listener error: {}", e)
             if not stop_event.is_set():
                 await asyncio.sleep(0.5)
     logger.debug("Voice listener stopped")
diff --git a/radioshaq/radioshaq/llm/__init__.py b/radioshaq/radioshaq/llm/__init__.py
index 8a4fdbd..a0193f2 100644
--- a/radioshaq/radioshaq/llm/__init__.py
+++ b/radioshaq/radioshaq/llm/__init__.py
@@ -1,4 +1,4 @@
-"""LLM client abstraction for SHAKODS."""
+"""LLM client abstraction for RadioShaq."""
 
 from radioshaq.llm.client import LLMClient
diff --git a/radioshaq/radioshaq/llm/client.py b/radioshaq/radioshaq/llm/client.py
index c1c7a3c..ea356ce 100644
--- a/radioshaq/radioshaq/llm/client.py
+++ b/radioshaq/radioshaq/llm/client.py
@@ -3,6 +3,7 @@
 from __future__ import annotations
 
 from dataclasses import dataclass
+import os
 from typing import Any
 
 from loguru import logger
@@ -44,6 +45,23 @@ class ChatResponseWithTools(ChatResponse):
 class LLMClient:
     """LLM client using LiteLLM (supports Mistral, OpenAI, Anthropic, etc.)."""
 
+    _PROVIDER_ENV_KEYS: dict[str, tuple[str, ...]] = {
+        "mistral": ("MISTRAL_API_KEY",),
+        "openai": ("OPENAI_API_KEY",),
+        "anthropic": ("ANTHROPIC_API_KEY",),
+        "huggingface": ("HF_TOKEN", "HUGGINGFACE_API_KEY"),
+        "gemini": ("GEMINI_API_KEY",),
+    }
+
+    _FALLBACK_ENV_CHAIN: tuple[str, ...] = (
+        "MISTRAL_API_KEY",
+        "OPENAI_API_KEY",
+        "ANTHROPIC_API_KEY",
+        "HF_TOKEN",
+        "HUGGINGFACE_API_KEY",
+        "GEMINI_API_KEY",
+    )
+
     def __init__(
         self,
         model: str = "mistral/mistral-large-latest",
@@ -62,6 +80,33 @@
         self.temperature = temperature
         self.max_tokens = max_tokens
 
+    def _provider_from_model(self) -> str:
+        if "/" not in self.model:
+            return ""
+        return self.model.split("/", 1)[0].lower().strip()
+
+    def _resolve_api_key(self) -> str | None:
+        """Resolve API key by explicit value, then provider-matched env vars, then generic fallback."""
+        if self.api_key:
+            return self.api_key
+
+        provider = self._provider_from_model()
+        if self.api_base and "huggingface.co" in self.api_base:
+            provider = "huggingface"
+
+        provider_keys = self._PROVIDER_ENV_KEYS.get(provider, ())
+        for env_name in provider_keys:
+            value = os.environ.get(env_name)
+            if value:
+                return value
+
+        for env_name in self._FALLBACK_ENV_CHAIN:
+            value = os.environ.get(env_name)
+            if value:
+                return value
+
+        return None
+
     async def chat(
         self,
         messages: list[dict[str, str]],
@@ -69,13 +114,11 @@
         max_tokens: int | None = None,
     ) -> ChatResponse:
         """Send chat messages and return response."""
-        import os
-
         import litellm
 
         temp = temperature if temperature is not None else self.temperature
         max_tok = max_tokens if max_tokens is not None else self.max_tokens
-        api_key = self.api_key or os.environ.get("MISTRAL_API_KEY") or os.environ.get("OPENAI_API_KEY")
+        api_key = self._resolve_api_key()
 
         kwargs: dict[str, Any] = {
             "model": self.model,
@@ -99,7 +142,7 @@
                 },
             )
         except Exception as e:
-            logger.error("LLM chat failed: %s", e)
+            logger.error("LLM chat failed: {}", e)
             raise
 
     async def chat_with_tools(
         self,
@@ -115,13 +158,11 @@
        Send messages with tool definitions; return content and tool_calls.
        Does not loop; caller must execute tools, append results, and call again until no tool_calls.
        """
-        import os
-
         import litellm
 
         temp = temperature if temperature is not None else self.temperature
         max_tok = max_tokens if max_tokens is not None else self.max_tokens
-        api_key = self.api_key or os.environ.get("MISTRAL_API_KEY") or os.environ.get("OPENAI_API_KEY")
+        api_key = self._resolve_api_key()
 
         kwargs_tools: dict[str, Any] = {
             "model": self.model,
@@ -161,5 +202,5 @@
                 tool_calls=tool_calls,
             )
         except Exception as e:
-            logger.error("LLM chat_with_tools failed: %s", e)
+            logger.error("LLM chat_with_tools failed: {}", e)
             raise
diff --git a/radioshaq/radioshaq/memory/daily_summary_cron.py b/radioshaq/radioshaq/memory/daily_summary_cron.py
index 8a75f98..d70df06 100644
--- a/radioshaq/radioshaq/memory/daily_summary_cron.py
+++ b/radioshaq/radioshaq/memory/daily_summary_cron.py
@@ -11,17 +11,21 @@
 from radioshaq.config.resolve import get_llm_config_for_role
 from radioshaq.config.schema import Config
 from radioshaq.llm.client import LLMClient
-from radioshaq.orchestrator.factory import _llm_api_key_from_llm_config, _llm_model_string_from_llm_config
+from radioshaq.orchestrator.factory import (
+    _llm_api_base_for_provider,
+    _llm_api_key_from_llm_config,
+    _llm_model_string_from_llm_config,
+)
 
 DEFAULT_TZ = ZoneInfo("America/New_York")
 
-SUMMARY_PROMPT = """You are summarizing a day's conversation between a ham radio operator and SHAKODS (an AI assistant for ham radio operations).
+SUMMARY_PROMPT = """You are summarizing a day's conversation between a ham radio operator and RadioShaq (an AI assistant for ham radio operations).
 
 Below are the messages from this operator's conversation today. Write a concise daily summary (3-8 sentences) covering:
 - Key topics discussed
 - Tasks or requests handled
 - Anything worth remembering for future context
 
-Be factual and concise. Write in third person ("The operator asked...", "SHAKODS helped...").
+Be factual and concise. Write in third person ("The operator asked...", "RadioShaq helped...").
 
 Messages:
 {messages}
@@ -90,9 +94,9 @@
                     summary,
                 )
                 written += 1
-                logger.info("Daily summary written for %s (%s)", callsign, summary_date)
+                logger.info("Daily summary written for {} ({})", callsign, summary_date)
             except Exception as e:
-                logger.warning("Daily summary failed for %s: %s", callsign, e)
+                logger.warning("Daily summary failed for {}: {}", callsign, e)
 
     return written
 
@@ -113,7 +117,7 @@
     llm_cfg = get_llm_config_for_role(config, "daily_summary")
     model = _llm_model_string_from_llm_config(llm_cfg)
     api_key = _llm_api_key_from_llm_config(llm_cfg)
-    api_base = getattr(llm_cfg, "custom_api_base", None)
+    api_base = _llm_api_base_for_provider(llm_cfg)
     llm = LLMClient(model=model, api_key=api_key, api_base=api_base, temperature=0.2, max_tokens=512)
 
     while True:
@@ -124,7 +128,7 @@
         )
         wait_seconds = (next_midnight - now).total_seconds()
         logger.info(
-            "Daily summary cron: next run at %s (in %.0f s)",
+            "Daily summary cron: next run at {} (in {:.0f} s)",
             next_midnight,
             wait_seconds,
         )
@@ -151,6 +155,10 @@
                 summary_date=yesterday,
                 timezone=timezone,
             )
-            logger.info("Daily summary cron: wrote %d summaries for %s", n, yesterday)
+            logger.info(
+                "Daily summary cron: wrote {} summaries for {}",
+                n,
+                yesterday,
+            )
         except Exception as e:
-            logger.exception("Daily summary cron failed: %s", e)
+            logger.exception("Daily summary cron failed: {}", e)
diff --git a/radioshaq/radioshaq/memory/hindsight.py b/radioshaq/radioshaq/memory/hindsight.py
index 0a783a6..319ca04 100644
--- a/radioshaq/radioshaq/memory/hindsight.py
+++ b/radioshaq/radioshaq/memory/hindsight.py
@@ -126,7 +126,7 @@ def retain_exchange(
         client.retain(**kwargs)
         return True
     except Exception as e:
-        logger.warning("Hindsight retain failed: %s", e)
+        logger.warning("Hindsight retain failed: {}", e)
         return False
diff --git a/radioshaq/radioshaq/memory/manager.py b/radioshaq/radioshaq/memory/manager.py
index 8360fcc..1938568 100644
--- a/radioshaq/radioshaq/memory/manager.py
+++ b/radioshaq/radioshaq/memory/manager.py
@@ -6,6 +6,7 @@
 
 from __future__ import annotations
 
+import json
 from datetime import datetime, timezone
 from zoneinfo import ZoneInfo
 from typing import Any
@@ -43,6 +44,7 @@ def __init__(self, database_url: str):
             pool_size=5,
             max_overflow=10,
             echo=False,
+            connect_args={"timeout": 10},
         )
         self.async_session = sessionmaker(
             self.engine,
@@ -63,6 +65,9 @@ async def get_core_blocks(self, callsign: str) -> dict[str, str]:
         )
         rows = result.fetchall()
         result_dict = {row[0]: (row[1] or "") for row in rows}
+        # Ensure standard block keys exist (empty string if not in DB)
+        for key in ("user", "identity", "ideaspace"):
+            result_dict.setdefault(key, "")
         # Add system instructions (global, read-only)
         sys_instr = await self.get_system_instructions()
         result_dict["system_instructions"] = sys_instr
@@ -260,11 +265,13 @@ async def append_messages(
                 else:
                     role, content, meta_extra, reasoning = item[0], item[1], item[2], item[3]
                 metadata = dict(meta_extra or {})
+                # asyncpg JSONB requires JSON string when using raw text()
+                metadata_json = json.dumps(metadata)
                 await session.execute(
                     text(
                         """
                         INSERT INTO memory_messages (callsign, idx, role, content, reasoning, metadata)
-                        VALUES (:callsign, :idx, :role, :content, :reasoning, :metadata)
+                        VALUES (:callsign, :idx, :role, :content, :reasoning, CAST(:metadata AS jsonb))
                         """
                     ),
                     {
@@ -273,7 +280,7 @@ async def append_messages(
                         "role": role,
                         "content": content,
                         "reasoning": reasoning,
-                        "metadata": metadata,
+                        "metadata": metadata_json,
                     },
                 )
                 next_idx += 1
@@ -349,7 +356,12 @@ async def get_callsigns_with_activity_since(
     async def delete_messages_older_than(
         self, cutoff: datetime, *, limit: int = 10_000
     ) -> int:
-        """Delete memory_messages rows with created_at < cutoff. Returns count deleted. Batch limited by limit."""
+        """Delete memory_messages rows with created_at < cutoff. Returns count deleted. Batch limited by limit.
+        If cutoff is in the future, deletes nothing and returns 0."""
+        if cutoff.tzinfo is None:
+            cutoff = cutoff.replace(tzinfo=timezone.utc)
+        if cutoff > datetime.now(timezone.utc):
+            return 0
         async with self.async_session() as session:
             result = await session.execute(
                 text(
diff --git a/radioshaq/radioshaq/messaging_compliance.py b/radioshaq/radioshaq/messaging_compliance.py
new file mode 100644
index 0000000..c560e4f
--- /dev/null
+++ b/radioshaq/radioshaq/messaging_compliance.py
@@ -0,0 +1,20 @@
+"""Messaging compliance: emergency SMS/WhatsApp region allowlist (Section 9)."""
+
+from __future__ import annotations
+
+from radioshaq.config.schema import EmergencyContactConfig
+
+
+def emergency_messaging_allowed(region: str, config: EmergencyContactConfig | None) -> bool:
+    """
+    Return True if emergency SMS/WhatsApp is allowed in the given region.
+
+    Requires config.enabled and region to be in config.regions_allowed.
+    Region is typically config.radio.restricted_bands_region (e.g. FCC, CA, CEPT).
+    See docs/notify-and-emergency-compliance-plan.md for which regions are supported.
+    """
+    if config is None or not getattr(config, "enabled", False):
+        return False
+    regions = getattr(config, "regions_allowed", None) or []
+    region_upper = (region or "").strip().upper()
+    return region_upper in [r.strip().upper() for r in regions if r]
diff --git a/radioshaq/radioshaq/middleware/upstream.py b/radioshaq/radioshaq/middleware/upstream.py
index 0e37e1f..8966271 100644
--- a/radioshaq/radioshaq/middleware/upstream.py
+++ b/radioshaq/radioshaq/middleware/upstream.py
@@ -43,7 +43,8 @@ def emit(self, event: UpstreamEvent) -> None:
             self._event_queue.put_nowait(event)
         except asyncio.QueueFull:
             logger.warning(
-                "Upstream event queue full, dropping event from %s", event.source
+                "Upstream event queue full, dropping event from {}",
+                event.source,
             )
 
     def subscribe(self, source_id: str) -> None:
@@ -78,12 +79,12 @@ async def process_upstream_events(self, context: REACTState) -> None:
                 try:
                     await handler(event)
                 except Exception as e:
-                    logger.warning("Upstream handler error: %s", e)
+                    logger.warning("Upstream handler error: {}", e)
 
                 processed += 1
 
         if processed:
-            logger.debug("Processed %d upstream events into context", processed)
+            logger.debug("Processed {} upstream events into context", processed)
 
     async def _integrate_memory(self, event: UpstreamEvent, context: REACTState) -> None:
         """Integrate upstreamed memory into orchestrator context."""
diff --git a/radioshaq/radioshaq/modes/field.py b/radioshaq/radioshaq/modes/field.py
index 53ea3e9..90e7edc 100644
--- a/radioshaq/radioshaq/modes/field.py
+++ b/radioshaq/radioshaq/modes/field.py
@@ -80,7 +80,7 @@ async def _propagate_to_hq(self) -> None:
             if success:
                 self._pending_propagation.clear()
         except Exception as e:
-            logger.error("Propagation to HQ failed: %s", e)
+            logger.error("Propagation to HQ failed: {}", e)
 
     async def run_sync_loop(self) -> None:
         """Background loop: periodic propagate and pull updates."""
@@ -94,8 +94,8 @@ async def run_sync_loop(self) -> None:
                 for update in updates:
                     await self._apply_hq_update(update)
             except Exception as e:
-                logger.error("Failed to get updates from HQ: %s", e)
+                logger.error("Failed to get updates from HQ: {}", e)
 
     async def _apply_hq_update(self, update: dict[str, Any]) -> None:
         """Apply an update from HQ (override for custom logic)."""
-        logger.debug("HQ update: %s", update)
+        logger.debug("HQ update: {}", update)
diff --git a/radioshaq/radioshaq/modes/hq.py b/radioshaq/radioshaq/modes/hq.py
index 36b83cd..d7f287e 100644
--- a/radioshaq/radioshaq/modes/hq.py
+++ b/radioshaq/radioshaq/modes/hq.py
@@ -37,8 +37,17 @@ async def receive_field_submission(
             raise AuthenticationError(f"Invalid auth for station {station_id}")
 
         if self.database and hasattr(self.database, "store_coordination_event"):
-            # Store as coordination event if DB supports it
-            pass  # Optional: store field submission in DB
+            # Store a coordination event for traceability when DB support is available.
+            try:
+                await self.database.store_coordination_event(  # type: ignore[attr-defined]
+                    event_type="field_submission",
+                    initiator_callsign=getattr(payload, "sub", None),
+                    target_callsign=station_id,
+                    notes=str(packet.get("original_message") or "")[:512],
+                    status="received",
+                )
+            except Exception as e:  # pragma: no cover - defensive logging only
+                logger.warning("Failed to store coordination event for field submission: {}", e)
 
         task_id = packet.get("orchestrator_result", {}).get("task_id")
         if self._requires_hq_coordination(packet):
@@ -108,5 +117,5 @@ async def coordinate_operators(
             "frequency": None,
             "mode": "FM",
         }
-        logger.info("Coordination plan: %s", coordination_plan)
+        logger.info("Coordination plan: {}", coordination_plan)
         return {"success": True, "plan": coordination_plan}
diff --git a/radioshaq/radioshaq/orchestrator/bridge.py b/radioshaq/radioshaq/orchestrator/bridge.py
index e6ae680..6485082 100644
--- a/radioshaq/radioshaq/orchestrator/bridge.py
+++ b/radioshaq/radioshaq/orchestrator/bridge.py
@@ -46,7 +46,8 @@ async def process_inbound_message(
         try:
             await callsign_repository.update_last_band(callsign, band)
         except Exception as e:
-            logger.debug("update_last_band failed: %s", e)
+            logger.debug("update_last_band failed: {}", e)
 
+    # channel/chat_id are used by outbound handlers for delivery (radio_rx, sms, whatsapp)
     out = OutboundMessage(
         channel=message.channel,
         chat_id=message.chat_id,
@@ -57,7 +58,11 @@ async def process_inbound_message(
     )
     ok = await bus.publish_outbound(out)
     if not ok:
-        logger.warning("Outbound queue full, could not send reply to %s:%s", message.channel, message.chat_id)
+        logger.warning(
+            "Outbound queue full, could not send reply to {}:{}",
+            message.channel,
+            message.chat_id,
+        )
 
     return result
@@ -88,4 +93,4 @@ async def run_inbound_consumer(
             # Bus uses inbound_timeout; wake periodically to check stop_event
             continue
         except Exception as e:
-            logger.exception("Inbound consumer error: %s", e)
+            logger.exception("Inbound consumer error: {}", e)
diff --git a/radioshaq/radioshaq/orchestrator/factory.py b/radioshaq/radioshaq/orchestrator/factory.py
index b0445fd..cc8c739 100644
--- a/radioshaq/radioshaq/orchestrator/factory.py
+++ b/radioshaq/radioshaq/orchestrator/factory.py
@@ -7,6 +7,7 @@
 
 from loguru import logger
 
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
 from radioshaq.config.resolve import get_llm_config_for_role, get_memory_config_for_role
 from radioshaq.config.schema import Config, LLMConfig
 from radioshaq.llm.client import LLMClient
@@ -27,6 +28,11 @@
 from radioshaq.specialized.whitelist_tools import ListRegisteredCallsignsTool, RegisterCallsignTool
 from radioshaq.specialized.memory_tools import RecallMemoryTool, ReflectMemoryTool
 from radioshaq.specialized.relay_tools import RelayMessageTool
+from radioshaq.specialized.gis_tools import (
+    GetOperatorLocationTool,
+    OperatorsNearbyTool,
+    SetOperatorLocationTool,
+)
 from radioshaq.callsign import get_callsign_repository
@@ -56,28 +62,60 @@ def _llm_model_string_from_llm_config(llm: LLMConfig) -> str:
         return f"openai/{model}"
     if p == "anthropic" and "/" not in model:
         return f"anthropic/{model}"
+    if p == "huggingface":
+        if not model:
+            raise ValueError("huggingface provider requires a non-empty model name")
+        if model.startswith("openai/"):
+            return model
+        return f"openai/{model}"
     if p == "custom":
         return f"custom/{model}" if "/" not in model else model
+    if p == "gemini":
+        raw_model = (getattr(llm, "model", None) or "").strip()
+        if not raw_model:
+            model = "gemini-2.5-flash"
+        else:
+            model = raw_model
+        if model.startswith("gemini/"):
+            return model
+        return f"gemini/{model}"
     if "/" not in model and not model.startswith(("openai/", "anthropic/", "mistral/", "custom/", "ollama/")):
         return f"mistral/{model}"
     return model
 
 
+def _llm_api_base_for_provider(llm_cfg: LLMConfig) -> str | None:
+    """Return api_base for the configured provider (huggingface router or custom)."""
+    provider = getattr(llm_cfg, "provider", None)
+    p = str(provider).lower() if provider else ""
+    if p == "huggingface":
+        return getattr(llm_cfg, "huggingface_api_base", None) or "https://router.huggingface.co/v1"
+    if p == "custom":
+        return getattr(llm_cfg, "custom_api_base", None)
+    return None
+
+
 def _llm_api_key(config: Config) -> str | None:
     """Get API key for configured provider (global llm)."""
     return _llm_api_key_from_llm_config(config.llm)
 
 
 def _llm_api_key_from_llm_config(llm: LLMConfig) -> str | None:
-    """Get API key from an LLMConfig."""
-    if getattr(llm, "mistral_api_key", None):
-        return llm.mistral_api_key
-    if getattr(llm, "openai_api_key", None):
-        return llm.openai_api_key
-    if getattr(llm, "anthropic_api_key", None):
-        return llm.anthropic_api_key
-    if getattr(llm, "custom_api_key", None):
-        return llm.custom_api_key
+    """Get API key for the configured provider (provider-matched key only)."""
+    provider = getattr(llm, "provider", None)
+    p = str(provider).lower() if provider else ""
+    if p == "huggingface":
+        return getattr(llm, "huggingface_api_key", None)
+    if p == "custom":
+        return getattr(llm, "custom_api_key", None)
+    if p == "anthropic":
+        return getattr(llm, "anthropic_api_key", None)
+    if p == "openai":
+        return getattr(llm, "openai_api_key", None)
+    if p == "mistral":
+        return getattr(llm, "mistral_api_key", None)
+    if p == "gemini":
+        return getattr(llm, "gemini_api_key", None)
     return None
@@ -102,7 +140,7 @@ def create_judge(config: Config) -> JudgeSystem:
     provider = LLMClient(
         model=model,
         api_key=api_key,
-        api_base=getattr(llm_cfg, "custom_api_base", None),
+        api_base=_llm_api_base_for_provider(llm_cfg),
         temperature=getattr(llm_cfg, "temperature", 0.1),
         max_tokens=getattr(llm_cfg, "max_tokens", 4096),
     )
@@ -122,7 +160,7 @@ def _create_rig_manager(config: Config) -> Any:
         from radioshaq.radio import RigManager
         from radioshaq.radio.cat_control import HamlibCATControl
     except ImportError as e:
-        logger.warning("Radio stack not available: %s", e)
+        logger.warning("Radio stack not available: {}", e)
         return None
     rm = RigManager()
     cat = HamlibCATControl(
@@ -167,23 +205,56 @@ def _create_packet_radio(config: Config) -> Any:
 
 def _create_sdr_transmitter(config: Config) -> Any:
-    """Create HackRF transmitter if sdr_tx_enabled and backend is hackrf. Return None otherwise."""
-    if not getattr(config.radio, "sdr_tx_enabled", False):
+    """Create SDR transmitter for HackRF.
+
+    When radio.sdr_tx_mode == 'local', return a HackRFTransmitter using pyhackrf2.
+    When radio.sdr_tx_mode == 'remote', return a HackRFServiceClient that calls a
+    remote HackRF broker service (e.g. the remote receiver).
+    """
+    radio_cfg = getattr(config, "radio", None)
+    if not radio_cfg or not getattr(radio_cfg, "sdr_tx_enabled", False):
         return None
-    if getattr(config.radio, "sdr_tx_backend", "hackrf").strip().lower() != "hackrf":
+    if getattr(radio_cfg, "sdr_tx_backend", "hackrf").strip().lower() != "hackrf":
         return None
+    mode = getattr(radio_cfg, "sdr_tx_mode", "local").strip().lower()
+    band_plan = get_band_plan_source_for_config(
+        getattr(radio_cfg, "restricted_bands_region", "FCC"),
+        getattr(radio_cfg, "band_plan_region", None),
+    )
     try:
+        if mode == "remote":
+            from radioshaq.radio.sdr_tx import HackRFServiceClient
+
+            base_url = getattr(radio_cfg, "sdr_tx_service_base_url", None)
+            if not base_url:
+                logger.warning(
+                    "sdr_tx_mode='remote' but radio.sdr_tx_service_base_url is not set; SDR TX disabled"
+                )
+                return None
+            return HackRFServiceClient(
+                base_url=base_url,
+                auth_token=getattr(radio_cfg, "sdr_tx_service_token", None),
+                request_timeout_sec=30.0,
+                allow_bands_only=getattr(radio_cfg, "sdr_tx_allow_bands_only", True),
+                audit_log_path=getattr(radio_cfg, "tx_audit_log_path", None),
+                restricted_region=getattr(radio_cfg, "restricted_bands_region", "FCC"),
+                band_plan_source=band_plan,
+            )
+
+        # Default: local HackRF via pyhackrf2.
         from radioshaq.radio.sdr_tx import HackRFTransmitter
+
         return HackRFTransmitter(
-            device_index=getattr(config.radio, "sdr_tx_device_index", 0),
-            serial_number=getattr(config.radio, "sdr_tx_serial", None),
-            max_gain=getattr(config.radio, "sdr_tx_max_gain", 47),
-            allow_bands_only=getattr(config.radio, "sdr_tx_allow_bands_only", True),
-            audit_log_path=getattr(config.radio, "tx_audit_log_path", None),
-            restricted_region=getattr(config.radio, "restricted_bands_region", "FCC"),
+            device_index=getattr(radio_cfg, "sdr_tx_device_index", 0),
+            serial_number=getattr(radio_cfg, "sdr_tx_serial", None),
+            max_gain=getattr(radio_cfg, "sdr_tx_max_gain", 47),
+            allow_bands_only=getattr(radio_cfg, "sdr_tx_allow_bands_only", True),
+            audit_log_path=getattr(radio_cfg, "tx_audit_log_path", None),
+            restricted_region=getattr(radio_cfg, "restricted_bands_region", "FCC"),
+            band_plan_source=band_plan,
        )
     except Exception as e:
-        logger.warning("SDR TX (HackRF) not available: %s", e)
+        logger.warning("SDR TX (HackRF) not available: {}", e)
         return None
@@ -209,23 +280,25 @@ def create_agent_registry(config: Config, db: Any = None, message_bus: Any = Non
         )
         logger.debug("PTTCoordinator created for half-duplex safety")
     except Exception as e:
-        logger.warning("PTTCoordinator not created: %s", e)
+        logger.warning("PTTCoordinator not created: {}", e)
 
+    twilio_cfg = getattr(config, "twilio", None)
     sms_client = None
     sms_from = None
-    if getattr(config, "twilio_sid", None) or getattr(config, "twilio_from", None):
+    if twilio_cfg and getattr(twilio_cfg, "account_sid", None) and getattr(twilio_cfg, "auth_token", None):
         try:
             from twilio.rest import Client
-            sid = getattr(config, "twilio_sid", None) or getattr(config, "twilio_account_sid", None)
-            token = getattr(config, "twilio_token", None) or getattr(config, "twilio_auth_token", None)
-            if sid and token:
-                sms_client = Client(sid, token)
-                sms_from = getattr(config, "twilio_from", None) or getattr(config, "twilio_from_number", None)
+            sms_client = Client(twilio_cfg.account_sid, twilio_cfg.auth_token)
+            sms_from = getattr(twilio_cfg, "from_number", None)
         except ImportError:
             pass
     registry.register_agent(SMSAgent(twilio_client=sms_client, from_number=sms_from))
-    registry.register_agent(WhatsAppAgent(client=None))
+    whatsapp_from = getattr(twilio_cfg, "whatsapp_from", None) if twilio_cfg else None
+    if sms_client and whatsapp_from:
+        registry.register_agent(WhatsAppAgent(client=sms_client, from_number=whatsapp_from))
+    else:
+        registry.register_agent(WhatsAppAgent(client=None, from_number=None))
 
     registry.register_agent(
         RadioTransmissionAgent(
@@ -296,9 +369,9 @@ def create_agent_registry(config: Config, db: Any = None, message_bus: Any = Non
             registry.register_agent(rx_audio_agent)
             logger.debug("Registered RadioAudioReceptionAgent (voice_rx)")
         except ImportError as e:
-            logger.warning("Voice RX not available (missing voice_rx deps): %s", e)
+            logger.warning("Voice RX not available (missing voice_rx deps): {}", e)
         except Exception as e:
-            logger.warning("Could not register RadioAudioReceptionAgent: %s", e)
+            logger.warning("Could not register RadioAudioReceptionAgent: {}", e)
 
     gis_agent = GISAgent(db=db)
     registry.register_agent(gis_agent)
@@ -317,15 +390,18 @@ def create_agent_registry(config: Config, db: Any = None, message_bus: Any = Non
         llm_client = LLMClient(
             model=_llm_model_string_from_llm_config(llm_cfg),
             api_key=_llm_api_key_from_llm_config(llm_cfg),
-            api_base=getattr(llm_cfg, "custom_api_base", None),
+            api_base=_llm_api_base_for_provider(llm_cfg),
             temperature=getattr(llm_cfg, "temperature", 0.1),
             max_tokens=getattr(llm_cfg, "max_tokens", 4096),
         )
         registry.register_agent(
-            WhitelistAgent(repository=callsign_repo, llm_client=llm_client, eval_prompt=whitelist_eval_prompt)
+            WhitelistAgent(
+                repository=callsign_repo,
+                llm_client=llm_client,
+                eval_prompt=whitelist_eval_prompt,
+            )
         )
-
-    logger.debug("Agent registry created with %d agents", len(registry.list_agents()))
+    logger.debug("Agent registry created with {} agents", len(registry.list_agents()))
     return registry
@@ -336,16 +412,24 @@ def create_tool_registry(config: Config, db: Any = None, *, app: Any = None) ->
     try:
         tool = SendAudioOverRadioTool(rig_manager=rig_manager, config=config)
         registry.register(tool)
-        logger.debug("Tool registry created with tool: %s", tool.name)
+        logger.debug("Tool registry created with tool: {}", tool.name)
     except Exception as e:
-        logger.warning("Could not register SendAudioOverRadioTool: %s", e)
+        logger.warning("Could not register SendAudioOverRadioTool: {}", e)
 
     callsign_repo = get_callsign_repository(db)
     try:
         registry.register(ListRegisteredCallsignsTool(callsign_repo))
         registry.register(RegisterCallsignTool(callsign_repo))
         logger.debug("Registered whitelist tools: list_registered_callsigns, register_callsign")
     except Exception as e:
-        logger.warning("Could not register whitelist tools: %s", e)
+        logger.warning("Could not register whitelist tools: {}", e)
+    if db is not None:
+        try:
+            registry.register(SetOperatorLocationTool(db))
+            registry.register(GetOperatorLocationTool(db))
+            registry.register(OperatorsNearbyTool(db))
+            logger.debug("Registered GIS tools: set_operator_location, get_operator_location, operators_nearby")
+        except Exception as e:
+            logger.warning("Could not register GIS tools: {}", e)
 
     if getattr(config, "memory", None) and getattr(config.memory, "enabled", False):
         try:
             from types import SimpleNamespace
@@ -355,7 +439,7 @@ def create_tool_registry(config: Config, db: Any = None, *, app: Any = None) ->
             registry.register(ReflectMemoryTool(tools_config))
             logger.debug("Registered memory tools: recall_memory, reflect_memory")
         except Exception as e:
-            logger.warning("Could not register memory tools: %s", e)
+            logger.warning("Could not register memory tools: {}", e)
 
     if db is not None and app is not None:
         try:
             from radioshaq.database.transcripts import TranscriptStorage
@@ -366,17 +450,19 @@ def create_tool_registry(config: Config, db: Any = None, *, app: Any = None) ->
                 app.state.agent_registry.get_agent("radio_tx")
                 if getattr(app.state, "agent_registry", None)
                 else None
             )
+            message_bus = getattr(app.state, "message_bus", None) if app else None
             relay_tool = RelayMessageTool(
                 storage=storage,
                 injection_queue=injection_queue,
                 get_radio_tx=get_radio_tx,
                 config=config,
                 callsign_repository=callsign_repo,
+                message_bus=message_bus,
             )
             registry.register(relay_tool)
-            logger.debug("Registered relay tool: %s", relay_tool.name)
+            logger.debug("Registered relay tool: {}", relay_tool.name)
         except Exception as e:
-            logger.warning("Could not register RelayMessageTool: %s", e)
+            logger.warning("Could not register RelayMessageTool: {}", e)
 
     return registry
@@ -427,7 +513,7 @@ def create_orchestrator(
     llm_client = LLMClient(
         model=_llm_model_string_from_llm_config(llm_cfg),
         api_key=_llm_api_key_from_llm_config(llm_cfg),
-        api_base=getattr(llm_cfg, "custom_api_base", None),
+        api_base=_llm_api_base_for_provider(llm_cfg),
         temperature=getattr(llm_cfg, "temperature", 0.1),
         max_tokens=getattr(llm_cfg, "max_tokens", 4096),
     )
@@ -441,6 +527,7 @@ def create_orchestrator(
         tool_registry=tool_registry,
         llm_client=llm_client,
         memory_manager=memory_manager,
+        db=db,
     )
     setattr(orchestrator, "_config", config)
     if message_bus is not None:
diff --git a/radioshaq/radioshaq/orchestrator/judge.py b/radioshaq/radioshaq/orchestrator/judge.py
index a821045..f633aac 100644
--- a/radioshaq/radioshaq/orchestrator/judge.py
+++ b/radioshaq/radioshaq/orchestrator/judge.py
@@ -140,7 +140,7 @@ def _parse_task_evaluation(self, content: str, state: REACTState) -> TaskEvaluat
                 next_action=data.get("next_action"),
             )
         except (json.JSONDecodeError, TypeError, ValueError) as e:
-            logger.warning("Failed to parse task evaluation: %s", e)
+            logger.warning("Failed to parse task evaluation: {}", e)
             return default
 
     def _extract_json(self, text: str) -> str | None:
@@ -196,7 +196,7 @@ async def evaluate_subtask(
                 retry_eligible=bool(data.get("retry_eligible", False)),
             )
         except Exception as e:
-            logger.warning("Subtask evaluation failed: %s", e)
+            logger.warning("Subtask evaluation failed: {}", e)
 
         return SubtaskEvaluation(
             subtask_id=subtask_id,
diff --git a/radioshaq/radioshaq/orchestrator/outbound_dispatcher.py b/radioshaq/radioshaq/orchestrator/outbound_dispatcher.py
new file mode 100644
index 0000000..5f3b907
--- /dev/null
+++ b/radioshaq/radioshaq/orchestrator/outbound_dispatcher.py
@@ -0,0 +1,160 @@
+"""Single outbound consumer: dispatch by channel to radio_rx, sms, or whatsapp (Option A)."""
+
+from __future__ import annotations
+
+import asyncio
+import dataclasses
+from datetime import datetime, timezone
+from typing import Any
+
+from loguru import logger
+
+from radioshaq.orchestrator.outbound_radio import handle_one_outbound_radio
+
+MAX_OUTBOUND_RETRIES = 3
+
+
+async def _maybe_reenqueue_outbound(bus: Any, msg: Any, channel_label: str) -> None:
+    """Re-enqueue msg with incremented _retries, or log as dead-letter if over limit."""
+    meta = dict(getattr(msg, "metadata", None) or {})
+    retries = int(meta.get("_retries", 0))
+    if retries >= MAX_OUTBOUND_RETRIES:
+        logger.error(
+            "Outbound {} to {} failed {} times, dropping to dead-letter",
+            channel_label,
+            msg.chat_id,
+            MAX_OUTBOUND_RETRIES,
+        )
+        return
+    meta["_retries"] = retries + 1
+    try:
+        if dataclasses.is_dataclass(msg):
+            await bus.publish_outbound(dataclasses.replace(msg, metadata=meta))
+        else:
+            from radioshaq.vendor.nanobot.bus.events import OutboundMessage
+            await bus.publish_outbound(OutboundMessage(
+                channel=msg.channel,
+                chat_id=msg.chat_id,
+                content=msg.content or "",
+                reply_to=getattr(msg, "reply_to", None),
+                media=list(getattr(msg, "media", [])),
+                metadata=meta,
+            ))
+    except Exception:
+        logger.error(
+            "Failed to re-enqueue outbound {} message to {}",
+            channel_label,
+            msg.chat_id,
+        )
+
+
+async def _mark_emergency_sent(db: Any, msg: Any) -> None:
+    """Stamp sent_at only after the downstream channel agent reports success."""
+    if not db or not hasattr(db,
"update_coordination_event"): + return + meta = dict(getattr(msg, "metadata", None) or {}) + event_id = meta.get("emergency_event_id") + if not event_id: + return + try: + await db.update_coordination_event( + int(event_id), + extra_data={"sent_at": datetime.now(timezone.utc).isoformat()}, + ) + except Exception as e: + logger.warning( + "Could not update sent_at for emergency event {}: {}", + event_id, + e, + ) + + +async def run_outbound_handler( + bus: Any, + config: Any, + agent_registry: Any, + db: Any = None, + *, + stop_event: asyncio.Event, +) -> None: + """ + Consume outbound messages and dispatch by channel: + - radio_rx -> handle_one_outbound_radio (radio_tx agent) + - sms -> SMS agent execute(send) + - whatsapp -> WhatsApp agent execute(send_message) + Other channels are logged and skipped. + """ + if not bus or not hasattr(bus, "consume_outbound"): + logger.debug("Outbound handler: no bus or consume_outbound, exiting") + return + + consume_timeout = 5.0 + while not stop_event.is_set(): + try: + msg = await asyncio.wait_for(bus.consume_outbound(), timeout=consume_timeout) + if msg.channel == "radio_rx": + radio_tx = agent_registry.get_agent("radio_tx") if agent_registry else None + try: + handled = await handle_one_outbound_radio(msg, radio_tx, config) + if handled is False: + await _maybe_reenqueue_outbound(bus, msg, "radio_rx") + except Exception as e: + logger.warning("Outbound radio_rx failed: {}", e) + await _maybe_reenqueue_outbound(bus, msg, "radio_rx") + elif msg.channel == "sms": + sms_agent = agent_registry.get_agent("sms") if agent_registry else None + if sms_agent and hasattr(sms_agent, "execute"): + try: + result = await sms_agent.execute( + {"action": "send", "to": msg.chat_id, "message": msg.content or ""}, + upstream_callback=None, + ) + if result.get("success") is False: + logger.warning( + "Outbound sms execute returned unsuccessful result for {}", + msg.chat_id, + ) + await _maybe_reenqueue_outbound(bus, msg, "sms") + else: + await 
_mark_emergency_sent(db, msg) + except Exception as e: + logger.warning("Outbound sms execute failed: {}", e) + await _maybe_reenqueue_outbound(bus, msg, "sms") + else: + logger.debug("Outbound sms: no sms agent, re-enqueuing") + await _maybe_reenqueue_outbound(bus, msg, "sms") + elif msg.channel == "whatsapp": + wa_agent = agent_registry.get_agent("whatsapp") if agent_registry else None + if wa_agent and hasattr(wa_agent, "execute"): + try: + result = await wa_agent.execute( + { + "action": "send_message", + "to": msg.chat_id, + "message": msg.content or "", + }, + upstream_callback=None, + ) + if result.get("success") is False: + logger.warning( + "Outbound whatsapp execute returned unsuccessful result for {}", + msg.chat_id, + ) + await _maybe_reenqueue_outbound(bus, msg, "whatsapp") + else: + await _mark_emergency_sent(db, msg) + except Exception as e: + logger.warning("Outbound whatsapp execute failed: {}", e) + await _maybe_reenqueue_outbound(bus, msg, "whatsapp") + else: + logger.debug("Outbound whatsapp: no whatsapp agent, re-enqueuing") + await _maybe_reenqueue_outbound(bus, msg, "whatsapp") + else: + logger.debug("Outbound unsupported channel: {}", msg.channel) + except asyncio.CancelledError: + logger.debug("Outbound handler cancelled") + break + except asyncio.TimeoutError: + continue + except Exception as e: + logger.exception("Outbound handler error: {}", e) diff --git a/radioshaq/radioshaq/orchestrator/outbound_radio.py b/radioshaq/radioshaq/orchestrator/outbound_radio.py index 1c1f99f..595c1cb 100644 --- a/radioshaq/radioshaq/orchestrator/outbound_radio.py +++ b/radioshaq/radioshaq/orchestrator/outbound_radio.py @@ -7,9 +7,66 @@ from loguru import logger +from radioshaq.compliance_plugin import get_band_plan_source_for_config from radioshaq.radio.bands import BAND_PLANS +async def handle_one_outbound_radio( + msg: Any, + radio_tx_agent: Any, + config: Any, +) -> bool: + """ + Handle a single outbound message for channel=radio_rx: resolve 
band/freq/mode and + call radio_tx agent. No-op if tx disabled or agent missing. Used by run_outbound_radio_handler + and by the single outbound dispatcher. + """ + radio_cfg = getattr(config, "radio", None) + tx_enabled = getattr(radio_cfg, "radio_reply_tx_enabled", True) if radio_cfg else True + reply_use_tts = getattr(radio_cfg, "radio_reply_use_tts", True) if radio_cfg else True + band_plans = ( + get_band_plan_source_for_config( + getattr(radio_cfg, "restricted_bands_region", "FCC"), + getattr(radio_cfg, "band_plan_region", None), + ) + if radio_cfg + else BAND_PLANS + ) + if not tx_enabled: + return True + if not radio_tx_agent or not hasattr(radio_tx_agent, "execute"): + return False + band = msg.chat_id or msg.metadata.get("reply_band") or "" + freq = msg.metadata.get("frequency_hz") + mode = msg.metadata.get("mode") + if not band and freq is None: + logger.warning("Outbound radio_rx: no band or frequency_hz, skipping") + return False + plan = band_plans.get(band) if band else None + if plan: + if freq is None or freq <= 0: + freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2 + if not mode: + mode = (plan.modes or ["FM"])[0] + else: + mode = mode or "FM" + if freq is None or freq <= 0: + logger.warning("Outbound radio_rx: could not resolve frequency for band {}", band) + return False + try: + await radio_tx_agent.execute({ + "transmission_type": "voice", + "frequency": freq, + "message": msg.content or "", + "mode": mode, + "use_tts": bool(reply_use_tts), + }) + return True + except Exception as e: + logger.warning("Outbound radio_tx execute failed: {}", e) + return False + + async def run_outbound_radio_handler( bus: Any, radio_tx_agent: Any, @@ -24,51 +81,17 @@ async def run_outbound_radio_handler( if not bus or not hasattr(bus, "consume_outbound"): logger.debug("Outbound radio handler: no bus or consume_outbound, exiting") return - radio_cfg = getattr(config, "radio", None) - tx_enabled = getattr(radio_cfg, "radio_reply_tx_enabled", 
True) if radio_cfg else True - reply_use_tts = getattr(radio_cfg, "radio_reply_use_tts", True) if radio_cfg else True while not stop_event.is_set(): try: msg = await bus.consume_outbound() if msg.channel != "radio_rx": continue - if not tx_enabled: - continue - if not radio_tx_agent or not hasattr(radio_tx_agent, "execute"): - logger.debug("Outbound radio: no radio_tx agent, skipping TX") - continue - band = msg.chat_id or msg.metadata.get("reply_band") or "" - freq = msg.metadata.get("frequency_hz") - mode = msg.metadata.get("mode") - if not band and freq is None: - logger.warning("Outbound radio_rx: no band or frequency_hz, skipping") - continue - plan = BAND_PLANS.get(band) if band else None - if plan: - if freq is None or freq <= 0: - freq = plan.freq_start_hz + (plan.freq_end_hz - plan.freq_start_hz) / 2 - if not mode: - mode = (plan.modes or ["FM"])[0] - else: - mode = mode or "FM" - if freq is None or freq <= 0: - logger.warning("Outbound radio_rx: could not resolve frequency for band %s", band) - continue - try: - await radio_tx_agent.execute({ - "transmission_type": "voice", - "frequency": freq, - "message": msg.content or "", - "mode": mode, - "use_tts": bool(reply_use_tts), - }) - except Exception as e: - logger.warning("Outbound radio_tx execute failed: %s", e) + await handle_one_outbound_radio(msg, radio_tx_agent, config) except asyncio.CancelledError: logger.debug("Outbound radio handler cancelled") break except asyncio.TimeoutError: continue except Exception as e: - logger.exception("Outbound radio handler error: %s", e) + logger.exception("Outbound radio handler error: {}", e) diff --git a/radioshaq/radioshaq/orchestrator/react_loop.py b/radioshaq/radioshaq/orchestrator/react_loop.py index 6e6793c..d61db38 100644 --- a/radioshaq/radioshaq/orchestrator/react_loop.py +++ b/radioshaq/radioshaq/orchestrator/react_loop.py @@ -84,6 +84,7 @@ def __init__( tool_registry: Any = None, llm_client: Any = None, memory_manager: Any = None, + db: Any = None, ): 
self.judge = judge self.prompt_loader = prompt_loader @@ -93,6 +94,7 @@ def __init__( self.tool_registry = tool_registry self.llm_client = llm_client self.memory_manager = memory_manager + self.db = db async def process_request( self, @@ -129,7 +131,7 @@ async def process_request( if cs.get("callsign") } except Exception as e: - logger.debug("Load whitelisted_callsign_bands failed: %s", e) + logger.debug("Load whitelisted_callsign_bands failed: {}", e) state.context["whitelisted_callsign_bands"] = {} else: state.context["whitelisted_callsign_bands"] = {} @@ -159,7 +161,7 @@ async def process_request( messages.append({"role": "user", "content": request}) state.context["messages"] = messages except Exception as e: - logger.warning("Memory context load failed: %s", e) + logger.warning("Memory context load failed: {}", e) try: state = await self._run_react_loop(state, on_progress) @@ -174,7 +176,7 @@ async def process_request( ], ) except Exception as e: - logger.warning("Memory append_messages failed: %s", e) + logger.warning("Memory append_messages failed: {}", e) try: from radioshaq.memory.hindsight import retain_exchange from radioshaq.config.resolve import get_memory_config_for_role @@ -187,14 +189,14 @@ async def process_request( config=memory_config, ) except Exception as e: - logger.debug("Hindsight retain failed (non-fatal): %s", e) + logger.debug("Hindsight retain failed (non-fatal): {}", e) return REACTResult( success=state.final_response is not None, state=state, message=state.final_response or "Incomplete", ) except Exception as e: - logger.exception("REACT loop failed: %s", e) + logger.exception("REACT loop failed: {}", e) return REACTResult( success=False, state=state, @@ -262,7 +264,7 @@ def _parse_decomposed_tasks_from_llm( ) ] except (json.JSONDecodeError, TypeError, ValueError) as e: - logger.warning("Parse decomposed tasks failed: %s", e) + logger.warning("Parse decomposed tasks failed: {}", e) return [ DecomposedTask( task_id="t1", @@ -454,7 +456,7 
@@ def _inject_agent_context(self, state: REACTState, task_dict: dict[str, Any]) -> cs = callsign.strip().upper() if agent_name == "whitelist" and not (task_dict.get("callsign") or "").strip(): task_dict["callsign"] = cs - if agent_name == "gis_agent" and not (task_dict.get("callsign") or "").strip(): + if agent_name == "gis" and not (task_dict.get("callsign") or "").strip(): task_dict["callsign"] = cs if agent_name == "scheduler" and not (task_dict.get("initiator_callsign") or "").strip(): task_dict["initiator_callsign"] = cs @@ -508,7 +510,7 @@ async def _phase_reasoning(self, state: REACTState) -> None: content, state.original_request ) except Exception as e: - logger.warning("Plan LLM call failed: %s", e) + logger.warning("Plan LLM call failed: {}", e) state.decomposed_tasks = [ DecomposedTask( task_id="t1", @@ -647,7 +649,7 @@ async def upstream_callback(ev: UpstreamEvent) -> None: task.payload["_retries"] = task.payload.get("_retries", 0) + 1 task.result = None except Exception as e: - logger.exception("Agent execution failed: %s", e) + logger.exception("Agent execution failed: {}", e) task.error = str(e) task.result = {"error": str(e)} subtask_eval = await self.judge.evaluate_subtask( diff --git a/radioshaq/radioshaq/orchestrator/registry.py b/radioshaq/radioshaq/orchestrator/registry.py index bfaa06b..1408e73 100644 --- a/radioshaq/radioshaq/orchestrator/registry.py +++ b/radioshaq/radioshaq/orchestrator/registry.py @@ -26,11 +26,11 @@ def register_agent(self, agent: SpecializedAgentProtocol) -> None: """Register a specialized agent.""" name = agent.name if name in self._agents: - logger.warning("Overwriting existing agent: %s", name) + logger.warning("Overwriting existing agent: {}", name) self._agents[name] = agent for cap in agent.capabilities: self._capability_index.setdefault(cap, []).append(name) - logger.debug("Registered agent %s with capabilities %s", name, agent.capabilities) + logger.debug("Registered agent {} with capabilities {}", name, 
agent.capabilities) def unregister_agent(self, name: str) -> bool: """Remove an agent by name. Returns True if removed.""" @@ -56,11 +56,11 @@ def get_agent_for_task(self, task: dict[str, Any] | str) -> Any | None: Find the best agent for a task based on task type, required capability, or description. DecomposedTask.agent can be the exact agent name from this registry (e.g. radio_tx, - whitelist, sms, gis_agent); pass it as task["agent"]. If agent is None, lookup uses + whitelist, sms, gis); pass it as task["agent"]. If agent is None, lookup uses capability and description below. Task dict may include: - - agent: explicit agent name (e.g. radio_tx, whitelist, sms, gis_agent) + - agent: explicit agent name (e.g. radio_tx, whitelist, sms, gis) - capability: required capability (e.g. "voice_transmission", "frequency_monitoring") - transmission_type: for radio tasks (voice, digital, packet) - description: free-text task description for keyword matching diff --git a/radioshaq/radioshaq/prompts/__init__.py b/radioshaq/radioshaq/prompts/__init__.py index 3032f5c..4c08f28 100644 --- a/radioshaq/radioshaq/prompts/__init__.py +++ b/radioshaq/radioshaq/prompts/__init__.py @@ -1,4 +1,4 @@ -"""Prompt loading system for SHAKODS. +"""Prompt loading system for RadioShaq. All prompts are stored as markdown files in the prompts/ directory and loaded dynamically by the PromptLoader class. diff --git a/radioshaq/radioshaq/radio/__init__.py b/radioshaq/radioshaq/radio/__init__.py index ff57626..749c431 100644 --- a/radioshaq/radioshaq/radio/__init__.py +++ b/radioshaq/radioshaq/radio/__init__.py @@ -1,28 +1,89 @@ -"""Ham radio interfaces (CAT, digital modes, packet, compliance).""" +"""Ham radio interfaces (CAT, digital modes, packet, compliance). 
-from radioshaq.radio.bands import BAND_PLANS, BandPlan, get_band_for_frequency -from radioshaq.radio.cat_control import HamlibCATControl, RigMode, RigState -from radioshaq.radio.compliance import is_restricted, is_tx_allowed, log_tx -from radioshaq.radio.digital_modes import FLDIGIInterface, DigitalTransmission -from radioshaq.radio.sdr_tx import HackRFTransmitter, SDRTransmitter -from radioshaq.radio.packet_radio import AX25Frame, PacketRadioInterface -from radioshaq.radio.rig_manager import RigManager +This package historically re-exported many convenience symbols. To keep imports +lightweight (and usable in minimal environments that don't have optional deps), +we lazily import these symbols on attribute access. +""" + +from __future__ import annotations + +from importlib import import_module +import warnings +from typing import Any __all__ = [ + # CAT / rig "HamlibCATControl", - "RigMode", "RigState", - "FLDIGIInterface", - "DigitalTransmission", - "PacketRadioInterface", - "AX25Frame", "RigManager", + # Band plans "BandPlan", "BAND_PLANS", "get_band_for_frequency", + # Compliance "is_restricted", "is_tx_allowed", + "is_tx_spectrum_allowed", "log_tx", + # Digital / packet + "FLDIGIInterface", + "DigitalTransmission", + "PacketRadioInterface", + "AX25Frame", + # Modes + "ModeFamily", + "ModeSpec", + "RadioModeName", + "RigMode", + "normalize_mode", + "spec_for", + "hamlib_mode_for", + "external_modem_for", + # SDR TX "SDRTransmitter", "HackRFTransmitter", ] + + +_EXPORTS: dict[str, tuple[str, str]] = { + # module, attribute + "HamlibCATControl": ("radioshaq.radio.cat_control", "HamlibCATControl"), + "RigState": ("radioshaq.radio.cat_control", "RigState"), + "RigManager": ("radioshaq.radio.rig_manager", "RigManager"), + "BandPlan": ("radioshaq.radio.bands", "BandPlan"), + "BAND_PLANS": ("radioshaq.radio.bands", "BAND_PLANS"), + "get_band_for_frequency": ("radioshaq.radio.bands", "get_band_for_frequency"), + "is_restricted": ("radioshaq.radio.compliance", 
"is_restricted"), + "is_tx_allowed": ("radioshaq.radio.compliance", "is_tx_allowed"), + "is_tx_spectrum_allowed": ("radioshaq.radio.compliance", "is_tx_spectrum_allowed"), + "log_tx": ("radioshaq.radio.compliance", "log_tx"), + "FLDIGIInterface": ("radioshaq.radio.digital_modes", "FLDIGIInterface"), + "DigitalTransmission": ("radioshaq.radio.digital_modes", "DigitalTransmission"), + "PacketRadioInterface": ("radioshaq.radio.packet_radio", "PacketRadioInterface"), + "AX25Frame": ("radioshaq.radio.packet_radio", "AX25Frame"), + "ModeFamily": ("radioshaq.radio.modes", "ModeFamily"), + "ModeSpec": ("radioshaq.radio.modes", "ModeSpec"), + "RadioModeName": ("radioshaq.radio.modes", "RadioModeName"), + "RigMode": ("radioshaq.radio.modes", "RadioModeName"), + "normalize_mode": ("radioshaq.radio.modes", "normalize_mode"), + "spec_for": ("radioshaq.radio.modes", "spec_for"), + "hamlib_mode_for": ("radioshaq.radio.modes", "hamlib_mode_for"), + "external_modem_for": ("radioshaq.radio.modes", "external_modem_for"), + "SDRTransmitter": ("radioshaq.radio.sdr_tx", "SDRTransmitter"), + "HackRFTransmitter": ("radioshaq.radio.sdr_tx", "HackRFTransmitter"), +} + + +def __getattr__(name: str) -> Any: # pragma: no cover + if name not in _EXPORTS: + raise AttributeError(name) + if name == "RigMode": + warnings.warn( + "RigMode is deprecated; use RadioModeName from radioshaq.radio.modes instead.", + DeprecationWarning, + stacklevel=2, + ) + mod_name, attr = _EXPORTS[name] + mod = import_module(mod_name) + return getattr(mod, attr) + diff --git a/radioshaq/radioshaq/radio/analog_mod.py b/radioshaq/radioshaq/radio/analog_mod.py new file mode 100644 index 0000000..ae5925d --- /dev/null +++ b/radioshaq/radioshaq/radio/analog_mod.py @@ -0,0 +1,115 @@ +"""Analog modulation helpers for SDR transmit (AM/SSB/CW-tone). + +These functions generate complex baseband IQ for a HackRF-class SDR. +They prioritize portability (numpy/scipy only) over maximum RF fidelity. 
+""" + +from __future__ import annotations + +import numpy as np +try: + from scipy import signal # type: ignore +except Exception: # pragma: no cover + signal = None # type: ignore + + +def _require_scipy() -> None: + if signal is None: + raise RuntimeError("Analog modulation requires SciPy. Install project deps (scipy).") + + +def _to_mono_float(audio: np.ndarray) -> np.ndarray: + a = np.asarray(audio) + is_integer = np.issubdtype(a.dtype, np.integer) # check original dtype before mean() + if a.ndim == 2: + a = a.mean(axis=1) + if is_integer: + # Assume int16-like PCM. + a = a.astype(np.float32) / 32768.0 + else: + a = a.astype(np.float32, copy=False) + return np.clip(a, -1.0, 1.0) + + +def _lpf(audio: np.ndarray, fs: int, cutoff_hz: float) -> np.ndarray: + if audio.size == 0: + return audio.astype(np.float32) + nyq = 0.5 * fs + cutoff = min(max(200.0, float(cutoff_hz)), nyq * 0.95) + b, a = signal.butter(4, cutoff / nyq, btype="low") + return signal.lfilter(b, a, audio).astype(np.float32) + + +def am_modulate( + audio: np.ndarray, + audio_rate_hz: int, + rf_rate_hz: int, + *, + modulation_index: float = 0.6, + audio_lpf_hz: float = 3_000.0, + gain: float = 0.8, +) -> np.ndarray: + """AM (DSB-LC) modulation: (1 + m*x) * carrier.""" + _require_scipy() + fs_a = int(audio_rate_hz) + fs_rf = int(rf_rate_hz) + x = _to_mono_float(audio) + x = _lpf(x, fs_a, audio_lpf_hz) + x_rf = signal.resample_poly(x, up=fs_rf, down=fs_a).astype(np.float32) + if x_rf.size == 0: + return np.zeros(0, dtype=np.complex64) + m = float(np.clip(modulation_index, 0.0, 1.0)) + env = 1.0 + m * x_rf + env = np.clip(env, 0.0, 2.0) + iq = (env * float(gain)).astype(np.complex64) + return iq + + +def ssb_modulate( + audio: np.ndarray, + audio_rate_hz: int, + rf_rate_hz: int, + *, + sideband: str = "USB", + audio_lpf_hz: float = 2_800.0, + carrier: float = 0.0, + gain: float = 0.8, +) -> np.ndarray: + """SSB modulation (suppressed carrier by default) using analytic signal (Hilbert).""" + 
_require_scipy() + fs_a = int(audio_rate_hz) + fs_rf = int(rf_rate_hz) + x = _to_mono_float(audio) + x = _lpf(x, fs_a, audio_lpf_hz) + if x.size == 0: + return np.zeros(0, dtype=np.complex64) + # Create analytic baseband at audio rate. + analytic = signal.hilbert(x).astype(np.complex64) + side = str(sideband).upper() + if side == "LSB": + analytic = np.conj(analytic) + # Resample complex to RF. + iq = signal.resample_poly(analytic, up=fs_rf, down=fs_a).astype(np.complex64) + if carrier: + iq = iq + complex(float(carrier), 0.0) + iq *= float(gain) + # Keep within [-1,1] envelope-ish to avoid int8 clipping later. + mag = np.max(np.abs(iq)) if iq.size else 1.0 + if mag > 1.0: + iq = (iq / mag).astype(np.complex64) + return iq + + +def cw_tone_iq( + duration_sec: float, + rf_rate_hz: int, + *, + gain: float = 0.6, +) -> np.ndarray: + """Generate a simple continuous carrier (CW tone at RF center).""" + fs = int(rf_rate_hz) + n = max(0, int(duration_sec * fs)) + if n == 0: + return np.zeros(0, dtype=np.complex64) + return (np.ones(n, dtype=np.complex64) * complex(float(gain), 0.0)).astype(np.complex64) + diff --git a/radioshaq/radioshaq/radio/cat_control.py b/radioshaq/radioshaq/radio/cat_control.py index 4264d2a..1c462b6 100644 --- a/radioshaq/radioshaq/radio/cat_control.py +++ b/radioshaq/radioshaq/radio/cat_control.py @@ -4,11 +4,12 @@ import asyncio from dataclasses import dataclass -from enum import StrEnum from typing import Any from loguru import logger +from radioshaq.radio.modes import hamlib_mode_for + # Optional: pyhamlib for direct control (requires system hamlib) try: import hamlib @@ -18,25 +19,12 @@ hamlib = None # type: ignore -class RigMode(StrEnum): - """Radio operating modes.""" - - FM = "FM" - AM = "AM" - SSB_USB = "USB" - SSB_LSB = "LSB" - CW = "CW" - DIGITAL = "DIG" - PSK31 = "PSK" - FT8 = "FT8" - - @dataclass class RigState: """Current state of a radio rig.""" frequency: float - mode: RigMode | str + mode: str ptt: bool signal_strength: int = 0 
bandwidth: int = 0 @@ -84,7 +72,7 @@ async def _connect_to_daemon(self) -> None: ) self._connected = True logger.info( - "Connected to rigctld at %s:%d", self.daemon_host, self.daemon_port + "Connected to rigctld at {}:{}", self.daemon_host, self.daemon_port ) async def _connect_direct(self) -> None: @@ -96,7 +84,7 @@ async def _connect_direct(self) -> None: ) await asyncio.to_thread(self._sync_connect_direct) self._connected = True - logger.info("Connected to rig via hamlib on %s", self.port) + logger.info("Connected to rig via hamlib on {}", self.port) def _sync_connect_direct(self) -> None: """Synchronous hamlib connect (runs in thread).""" @@ -151,9 +139,9 @@ async def set_ptt(self, state: bool) -> None: self._rig.set_ptt, hamlib.RIG_VFO_CURR, ptt_state ) - async def set_mode(self, mode: RigMode | str) -> None: + async def set_mode(self, mode: str) -> None: """Set radio mode.""" - mode_str = str(mode) + mode_str = hamlib_mode_for(mode) async with self._lock: if self.use_daemon: await self._send_daemon_command(f"M {mode_str} 0") @@ -171,13 +159,9 @@ async def get_state(self) -> RigState: mode_str = await self._query_daemon("m") ptt_str = await self._query_daemon("t") mode_val = mode_str.split()[0] if mode_str else "FM" - try: - mode = RigMode(mode_val) - except ValueError: - mode = mode_val return RigState( frequency=float(freq_str) if freq_str else 0.0, - mode=mode, + mode=mode_val, ptt=ptt_str.strip() == "1" if ptt_str else False, signal_strength=0, bandwidth=0, @@ -190,15 +174,11 @@ async def get_state(self) -> RigState: self._rig.get_mode, hamlib.RIG_VFO_CURR ) mode_str = mode_data[0] if mode_data else "FM" - try: - mode = RigMode(mode_str) - except ValueError: - mode = mode_str return RigState( frequency=freq, - mode=mode, + mode=mode_str, ptt=False, signal_strength=0, bandwidth=0, ) - return RigState(frequency=0.0, mode=RigMode.FM, ptt=False) + return RigState(frequency=0.0, mode="FM", ptt=False) diff --git a/radioshaq/radioshaq/radio/compliance.py 
b/radioshaq/radioshaq/radio/compliance.py index 6f5c84e..4a66318 100644 --- a/radioshaq/radioshaq/radio/compliance.py +++ b/radioshaq/radioshaq/radio/compliance.py @@ -11,78 +11,9 @@ from radioshaq.radio.bands import BAND_PLANS, BandPlan, get_band_for_frequency -# FCC 47 CFR §15.205 restricted bands (MHz and GHz). Intentional radiation prohibited. -# Source: https://www.ecfr.gov/current/title-47/chapter-I/subchapter-A/part-15/subpart-C/section-15.205 -# Stored as (low_hz, high_hz). -_RESTRICTED_BANDS_FCC_HZ: list[tuple[float, float]] = [ - # MHz ranges (convert to Hz) - (0.090e6, 0.110e6), - (0.495e6, 0.505e6), - (2.1735e6, 2.1905e6), - (4.125e6, 4.128e6), - (4.17725e6, 4.17775e6), - (4.20725e6, 4.20775e6), - (6.215e6, 6.218e6), - (6.26775e6, 6.26825e6), - (6.31175e6, 6.31225e6), - (8.291e6, 8.294e6), - (8.362e6, 8.366e6), - (8.37625e6, 8.38675e6), - (8.41425e6, 8.41475e6), - (12.29e6, 12.293e6), - (12.51975e6, 12.52025e6), - (12.57675e6, 12.57725e6), - (13.36e6, 13.41e6), - (16.42e6, 16.423e6), - (16.69475e6, 16.69525e6), - (16.80425e6, 16.80475e6), - (25.5e6, 25.67e6), - (37.5e6, 38.25e6), - (73e6, 74.6e6), - (74.8e6, 75.2e6), - (108e6, 121.94e6), - (123e6, 138e6), - (149.9e6, 150.05e6), - (156.52475e6, 156.52525e6), - (156.7e6, 156.9e6), - (162.0125e6, 167.17e6), - (167.72e6, 173.2e6), - (240e6, 285e6), - (322e6, 335.4e6), - (399.9e6, 410e6), - (608e6, 614e6), - (960e6, 1240e6), - (1300e6, 1427e6), - (1435e6, 1626.5e6), - (1645.5e6, 1646.5e6), - (1660e6, 1710e6), - (1718.8e6, 1722.2e6), - (2200e6, 2300e6), - (2310e6, 2390e6), - (2483.5e6, 2500e6), - (2690e6, 2900e6), - (3260e6, 3267e6), - (3332e6, 3339e6), - (3345.8e6, 3358e6), - (3600e6, 4400e6), - # GHz ranges (convert to Hz) - (4.5e9, 5.15e9), - (5.35e9, 5.46e9), - (7.25e9, 7.75e9), - (8.025e9, 8.5e9), - (9.0e9, 9.2e9), - (9.3e9, 9.5e9), - (10.6e9, 12.7e9), - (13.25e9, 13.4e9), - (14.47e9, 14.5e9), - (15.35e9, 16.2e9), - (17.7e9, 21.4e9), - (22.01e9, 23.12e9), - (23.6e9, 24.0e9), - (31.2e9, 31.8e9), - 
(36.43e9, 36.5e9), - (38.6e9, 100e9), # Above 38.6 GHz -] +# Regions that are band-plan-only (no restricted bands). Warn once if used as restricted_bands_region. +_WARNED_RESTRICTED_REGIONS: set[str] = set() +_BAND_PLAN_ONLY_KEYS = frozenset({"ITU_R1", "ITU_R3"}) def is_restricted( @@ -93,10 +24,22 @@ def is_restricted( Return True if the frequency falls in a restricted band (e.g. FCC §15.205). Intentional radiation is prohibited in these bands regardless of power. """ - if region != "FCC": - # Future: CEPT or other region tables + from radioshaq.compliance_plugin import get_backend + + backend = get_backend(region) + if backend is None: return False - for low, high in _RESTRICTED_BANDS_FCC_HZ: + restricted = backend.get_restricted_bands_hz() + # Band-plan-only backends (ITU_R1, ITU_R3) enforce no restrictions; warn once to avoid silent footgun. + if not restricted and getattr(backend, "region_key", None) in _BAND_PLAN_ONLY_KEYS: + if region not in _WARNED_RESTRICTED_REGIONS: + _WARNED_RESTRICTED_REGIONS.add(region) + logger.warning( + "restricted_bands_region={!r} has no restricted bands (band-plan-only). " + "Use band_plan_region for ITU_R1/ITU_R3 and set restricted_bands_region to a country (e.g. 
CEPT, FR, AU).", + region, + ) + for low, high in restricted: if low <= freq_hz <= high: return True return False @@ -118,13 +61,80 @@ def is_tx_allowed( return False if not allow_tx_only_amateur_bands: return True - plans = band_plan_source if band_plan_source is not None else BAND_PLANS + if band_plan_source is None: + from radioshaq.compliance_plugin import get_backend + + b = get_backend(restricted_region) + if b is not None: + _plans = b.get_band_plans() + band_plan_source = _plans if _plans is not None else BAND_PLANS + else: + band_plan_source = BAND_PLANS + plans = band_plan_source for plan in plans.values(): if plan.freq_start_hz <= freq_hz <= plan.freq_end_hz: return True return False +def is_tx_spectrum_allowed( + center_hz: float, + occupied_bandwidth_hz: float, + *, + band_plan_source: dict[str, BandPlan] | None = None, + allow_tx_only_amateur_bands: bool = True, + restricted_region: str = "FCC", +) -> bool: + """Like is_tx_allowed, but checks the occupied spectrum, not just center. + + We conservatively require the full occupied bandwidth window to be: + - Outside restricted bands, and + - Fully contained within a single allowed band-plan allocation (when allow_tx_only_amateur_bands). + """ + bw = float(occupied_bandwidth_hz) + if bw <= 0: + return is_tx_allowed( + center_hz, + band_plan_source=band_plan_source, + allow_tx_only_amateur_bands=allow_tx_only_amateur_bands, + restricted_region=restricted_region, + ) + low_hz = float(center_hz) - bw / 2.0 + high_hz = float(center_hz) + bw / 2.0 + + # Restricted-band overlap check. Use a single backend lookup for both restricted list and band plans. + from radioshaq.compliance_plugin import get_backend + + backend = get_backend(restricted_region) + restricted = backend.get_restricted_bands_hz() if backend is not None else [] + + # Check center and both band edges against built-in is_restricted (works even when backend is None). 
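The spectrum-window rule `is_tx_spectrum_allowed` enforces reduces to two interval tests: the occupied window `[center - bw/2, center + bw/2]` must overlap no restricted range, and (when amateur-band-only TX is enforced) must sit entirely inside a single allocation. A standalone sketch with made-up ranges, not a real band plan or restricted-band table:

```python
def window_allowed(
    center_hz: float,
    bw_hz: float,
    restricted: list[tuple[float, float]],
    allocations: list[tuple[float, float]],
) -> bool:
    """True iff [center - bw/2, center + bw/2] overlaps no restricted range
    and is fully contained in at least one allocation."""
    low, high = center_hz - bw_hz / 2.0, center_hz + bw_hz / 2.0
    for rlow, rhigh in restricted:
        if not (high < rlow or low > rhigh):  # closed intervals overlap
            return False
    return any(alow <= low and high <= ahigh for alow, ahigh in allocations)


# Illustrative ranges only:
restricted = [(149.9e6, 150.05e6)]
allocations = [(144e6, 148e6)]
print(window_allowed(146.0e6, 16e3, restricted, allocations))    # True
print(window_allowed(147.995e6, 16e3, restricted, allocations))  # False: upper edge leaves the allocation
```

Requiring full containment in one allocation is deliberately conservative, as the docstring in the diff notes: a window straddling two adjacent allocations is rejected even though every point in it is individually allowed.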
+    if (
+        is_restricted(center_hz, region=restricted_region)
+        or is_restricted(low_hz, region=restricted_region)
+        or is_restricted(high_hz, region=restricted_region)
+    ):
+        return False
+    for rlow, rhigh in restricted:
+        if not (high_hz < rlow or low_hz > rhigh):
+            return False
+
+    if not allow_tx_only_amateur_bands:
+        return True
+
+    if band_plan_source is None:
+        if backend is not None:
+            _plans = backend.get_band_plans()
+            band_plan_source = _plans if _plans is not None else BAND_PLANS
+        else:
+            band_plan_source = BAND_PLANS
+
+    for plan in band_plan_source.values():
+        if plan.freq_start_hz <= low_hz and high_hz <= plan.freq_end_hz:
+            return True
+    return False
+
+
 def log_tx(
@@ -162,4 +172,4 @@ def log_tx(
         with path.open("a", encoding="utf-8") as f:
             f.write(json.dumps(payload, ensure_ascii=False) + "\n")
     except OSError as e:
-        logger.warning("Could not write TX audit log to %s: %s", path, e)
+        logger.warning("Could not write TX audit log to {}: {}", path, e)
diff --git a/radioshaq/radioshaq/radio/digital_modes.py b/radioshaq/radioshaq/radio/digital_modes.py
index 683ef7e..1b8c450 100644
--- a/radioshaq/radioshaq/radio/digital_modes.py
+++ b/radioshaq/radioshaq/radio/digital_modes.py
@@ -39,7 +39,7 @@ async def connect(self) -> None:
         try:
             version = await asyncio.to_thread(self._proxy.main.get_version)
             self._connected = True
-            logger.info("Connected to FLDIGI at %s (version: %s)", url, version)
+            logger.info("Connected to FLDIGI at {} (version: {})", url, version)
         except Exception as e:
             self._proxy = None
             raise ConnectionError(f"Failed to connect to FLDIGI at {url}: {e}") from e
@@ -49,7 +49,7 @@ async def set_modem(self, mode: str) -> None:
         if not self._proxy:
             raise RuntimeError("Not connected to FLDIGI")
         await asyncio.to_thread(self._proxy.modem.set_by_name, mode)
-        logger.debug("FLDIGI modem set to %s", mode)
+        logger.debug("FLDIGI modem set to {}", mode)
 
     async def transmit_text(self, text: str, delay: float = 0.5) -> None:
         """Transmit text in current digital mode."""
diff --git a/radioshaq/radioshaq/radio/fm.py b/radioshaq/radioshaq/radio/fm.py
new file mode 100644
index 0000000..fe7a98c
--- /dev/null
+++ b/radioshaq/radioshaq/radio/fm.py
@@ -0,0 +1,88 @@
+"""Analog FM modulation helpers for SDR transmit.
+
+This is a minimal NFM modulator: audio -> complex baseband FM IQ.
+It is intended for demos/testing (2m/70cm analog FM voice).
+"""
+
+from __future__ import annotations
+
+import numpy as np
+
+try:
+    from scipy import signal  # type: ignore
+except Exception:  # pragma: no cover
+    signal = None  # type: ignore
+
+
+def _require_scipy() -> None:
+    if signal is None:
+        raise RuntimeError("Analog FM modulation requires SciPy. Install project deps (scipy).")
+
+
+def _to_mono_float(audio: np.ndarray) -> np.ndarray:
+    a = np.asarray(audio)
+    is_integer = np.issubdtype(a.dtype, np.integer)  # check original dtype before mean()
+    if a.ndim == 2:
+        a = a.mean(axis=1)
+    if is_integer:
+        # Assume int16-like PCM.
+        a = a.astype(np.float32) / 32768.0
+    else:
+        a = a.astype(np.float32, copy=False)
+    return np.clip(a, -1.0, 1.0)
+
+
+def nfm_modulate(
+    audio: np.ndarray,
+    audio_rate_hz: int,
+    rf_rate_hz: int,
+    *,
+    deviation_hz: float = 2_500.0,
+    preemphasis_us: float = 75.0,
+    audio_lpf_hz: float = 3_000.0,
+    gain: float = 0.8,
+) -> np.ndarray:
+    """Return complex64 FM IQ at rf_rate_hz from audio at audio_rate_hz.
+
+    audio: mono or stereo in [-1, 1] float (or PCM-like ints).
+    """
+    fs_a = int(audio_rate_hz)
+    fs_rf = int(rf_rate_hz)
+    _require_scipy()
+    a = _to_mono_float(audio)
+    if a.size == 0:
+        return np.zeros(0, dtype=np.complex64)
+
+    # Low-pass audio and apply simple pre-emphasis (inverse of receiver deemphasis).
+    nyq = 0.5 * fs_a
+    cutoff = min(max(300.0, float(audio_lpf_hz)), nyq * 0.95)
+    b, aa = signal.butter(4, cutoff / nyq, btype="low")
+    a = signal.lfilter(b, aa, a).astype(np.float32)
+
+    tau = float(preemphasis_us) * 1e-6
+    if tau > 0:
+        # Pre-emphasis high-pass: H(s) = 1 + s*tau approximated via discrete differentiation + leak.
+        # Difference equation actually implemented: y[n] = (x[n] - x[n-1]) + alpha*y[n-1]
+        alpha = float(np.exp(-1.0 / (fs_a * tau)))
+        y = np.empty_like(a)
+        y0 = 0.0
+        x1 = a[0]
+        for i, x0 in enumerate(a):
+            y0 = (x0 - x1) + alpha * y0
+            y[i] = y0
+            x1 = x0
+        a = y
+
+    a *= float(gain)
+    a = np.clip(a, -1.0, 1.0)
+
+    # Resample audio to RF sample rate.
+    a_rf = signal.resample_poly(a, up=fs_rf, down=fs_a).astype(np.float32)
+    if a_rf.size == 0:
+        return np.zeros(0, dtype=np.complex64)
+
+    # Integrate frequency deviation to phase. phase[n] = phase[n-1] + 2*pi*dev*x/fs
+    k = 2.0 * np.pi * float(deviation_hz) / float(fs_rf)
+    phase = np.cumsum(k * a_rf, dtype=np.float64)
+    iq = np.exp(1j * phase).astype(np.complex64)
+    return iq
diff --git a/radioshaq/radioshaq/radio/hackrf_tx_compat.py b/radioshaq/radioshaq/radio/hackrf_tx_compat.py
new file mode 100644
index 0000000..fb559e9
--- /dev/null
+++ b/radioshaq/radioshaq/radio/hackrf_tx_compat.py
@@ -0,0 +1,115 @@
+"""HackRF TX compatibility helpers for pyhackrf2 and test fakes."""
+
+from __future__ import annotations
+
+import inspect
+import time
+from ctypes import CFUNCTYPE, POINTER, c_int, memmove
+from typing import Any
+
+try:
+    from pyhackrf2.cinterface import lib_hackrf_transfer, libhackrf
+except ImportError:  # pragma: no cover - optional dependency
+    lib_hackrf_transfer = None
+    libhackrf = None
+
+
+def _stream_via_direct_libhackrf(dev: Any, payload: bytes, duration_sec: float) -> None:
+    """Use libhackrf directly with a safe TX callback."""
+    if lib_hackrf_transfer is None or libhackrf is None:
+        raise RuntimeError(
+            "Direct libhackrf TX path requires pyhackrf2.cinterface. "
+            "Install with: uv sync --extra hackrf (or pip install pyhackrf2)"
+        )
+
+    sent = 0
+
+    @CFUNCTYPE(c_int, POINTER(lib_hackrf_transfer))
+    def _tx_cb(transfer_ptr: Any) -> int:
+        nonlocal sent
+        transfer = transfer_ptr.contents
+        remaining = len(payload) - sent
+        if remaining <= 0:
+            transfer.valid_length = 0
+            return 1
+        chunk_len = min(int(transfer.buffer_length), remaining)
+        chunk = payload[sent : sent + chunk_len]
+        memmove(transfer.buffer, chunk, chunk_len)
+        transfer.valid_length = chunk_len
+        sent += chunk_len
+        return 1 if sent >= len(payload) else 0
+
+    dev._check_error(libhackrf.hackrf_start_tx(dev._device_pointer, _tx_cb, None))
+    try:
+        deadline = time.monotonic() + max(duration_sec + 0.5, 0.5)
+        while time.monotonic() < deadline and sent < len(payload):
+            time.sleep(0.01)
+        time.sleep(0.05)
+    finally:
+        dev._check_error(libhackrf.hackrf_stop_tx(dev._device_pointer))
+
+
+def _stream_via_start_tx_buffer(dev: Any, payload: bytes, duration_sec: float) -> None:
+    """Use pyhackrf2's public start_tx()/buffer API when direct access is unavailable."""
+    dev.buffer = bytearray(payload)
+    dev.start_tx()
+    try:
+        time.sleep(duration_sec + 0.5)
+    finally:
+        dev.stop_tx()
+
+
+def _stream_via_callback(dev: Any, payload: bytes, duration_sec: float) -> None:
+    """Legacy callback-driven TX path used by tests and older shims."""
+    sent = [0]
+    start_time = time.monotonic()
+
+    def _tx_cb(transfer: Any) -> int:
+        blen = getattr(transfer, "buffer_length", None)
+        if blen is None:
+            blen = len(transfer.buffer)
+        start = sent[0]
+        if start >= len(payload):
+            return 1
+        end = min(start + int(blen), len(payload))
+        data = payload[start:end]
+        target = transfer.buffer
+        if isinstance(target, (bytearray, memoryview)):
+            target[: len(data)] = data
+        else:
+            memmove(target, data, len(data))
+        sent[0] = end
+        return 1 if end >= len(payload) else 0
+
+    dev.start_tx(_tx_cb)
+    deadline = start_time + max(duration_sec + 0.5, 0.5)
+    try:
+        while time.monotonic() < deadline and sent[0] < len(payload):
+            time.sleep(0.01)
+    finally:
+        dev.stop_tx()
+    # Ensure we do not return significantly earlier than the requested duration.
+    elapsed = time.monotonic() - start_time
+    if duration_sec > 0 and elapsed < duration_sec:
+        time.sleep(duration_sec - elapsed)
+
+
+def stream_hackrf_iq_bytes(dev: Any, payload: bytes, duration_sec: float) -> None:
+    """Transmit interleaved int8 IQ bytes through a HackRF-compatible device."""
+    if libhackrf is not None and hasattr(dev, "_device_pointer") and hasattr(dev, "_check_error"):
+        _stream_via_direct_libhackrf(dev, payload, duration_sec)
+        return
+
+    start_tx = getattr(dev, "start_tx", None)
+    if start_tx is None:
+        raise AttributeError("HackRF device does not expose start_tx")
+
+    try:
+        param_count = len(inspect.signature(start_tx).parameters)
+    except (ValueError, TypeError):
+        # C extension or built-in method; cannot inspect signature, so use callback path.
+        param_count = 1  # assume callback-style
+    if param_count == 0:
+        _stream_via_start_tx_buffer(dev, payload, duration_sec)
+        return
+    _stream_via_callback(dev, payload, duration_sec)
diff --git a/radioshaq/radioshaq/radio/injection.py b/radioshaq/radioshaq/radio/injection.py
index 389eb1c..beafa39 100644
--- a/radioshaq/radioshaq/radio/injection.py
+++ b/radioshaq/radioshaq/radio/injection.py
@@ -68,7 +68,12 @@ def inject_message(
         )
         try:
             self._queue.put_nowait(msg)
-            logger.debug("Injected message for RX: band=%s freq=%s text=%s", band, frequency_hz, text[:50])
+            logger.debug(
+                "Injected message for RX: band={} freq={} text={}",
+                band,
+                frequency_hz,
+                text[:50],
+            )
         except asyncio.QueueFull:
             logger.warning("Injection queue full, dropping message")
@@ -92,7 +97,10 @@ def put_back_nowait(self, msg: InjectedMessage) -> bool:
             self._queue.put_nowait(msg)
             return True
         except asyncio.QueueFull:
-            logger.warning("Injection queue full on re-put, dropping message (band=%s)", getattr(msg, "band", None))
+            logger.warning(
+                "Injection queue full on re-put, dropping message (band={})",
+                getattr(msg, "band", None),
+            )
             return False
 
     def qsize(self) -> int:
diff --git a/radioshaq/radioshaq/radio/modes.py b/radioshaq/radioshaq/radio/modes.py
new file mode 100644
index 0000000..852f531
--- /dev/null
+++ b/radioshaq/radioshaq/radio/modes.py
@@ -0,0 +1,186 @@
+"""Normalized radio mode model and mappings.
+
+This module is the single source of truth for how a user-facing mode maps to:
+- CAT/hamlib rig modes (FM/AM/USB/LSB/CW/DIG, etc.)
+- SDR DSP pipelines (demod/mod choice + default bandwidth/deviation)
+- External modem software (FLDIGI / WSJT-X) when applicable
+"""
+
+from __future__ import annotations
+
+from dataclasses import dataclass
+from enum import StrEnum
+
+from loguru import logger
+
+
+class ModeFamily(StrEnum):
+    ANALOG = "analog"
+    DIGITAL_TEXT = "digital_text"
+    DIGITAL_WEAK_SIGNAL = "digital_weak_signal"
+    PACKET = "packet"
+
+
+class RadioModeName(StrEnum):
+    # Analog voice/audio families
+    NFM = "NFM"  # narrow FM voice
+    AM = "AM"
+    USB = "USB"
+    LSB = "LSB"
+    CW = "CW"  # treated as audio-tone pipeline unless separately decoded
+
+    # Digital (external decoders/encoders)
+    FLDIGI = "FLDIGI"  # generic digital-text via FLDIGI
+    PSK31 = "PSK31"
+    RTTY = "RTTY"
+
+    FT8 = "FT8"
+
+    AX25 = "AX25"
+    APRS = "APRS"
+
+
+@dataclass(frozen=True)
+class ModeSpec:
+    """Mode definition with mappings and reasonable defaults."""
+
+    name: RadioModeName
+    family: ModeFamily
+    # CAT/hamlib rig mode string (rigctld uses these names too)
+    hamlib_mode: str
+    # For modes that require an external modem, the modem name (e.g. FLDIGI modem string)
+    external_modem: str | None = None
+    # Default occupied bandwidth estimate (for compliance and DSP filter defaults)
+    default_bandwidth_hz: float = 12_500.0
+    # FM deviation (Hz) when applicable
+    fm_deviation_hz: float | None = None
+
+
+MODE_SPECS: dict[RadioModeName, ModeSpec] = {
+    RadioModeName.NFM: ModeSpec(
+        name=RadioModeName.NFM,
+        family=ModeFamily.ANALOG,
+        hamlib_mode="FM",
+        default_bandwidth_hz=12_500.0,
+        fm_deviation_hz=2_500.0,
+    ),
+    RadioModeName.AM: ModeSpec(
+        name=RadioModeName.AM,
+        family=ModeFamily.ANALOG,
+        hamlib_mode="AM",
+        default_bandwidth_hz=10_000.0,
+    ),
+    RadioModeName.USB: ModeSpec(
+        name=RadioModeName.USB,
+        family=ModeFamily.ANALOG,
+        hamlib_mode="USB",
+        default_bandwidth_hz=2_800.0,
+    ),
+    RadioModeName.LSB: ModeSpec(
+        name=RadioModeName.LSB,
+        family=ModeFamily.ANALOG,
+        hamlib_mode="LSB",
+        default_bandwidth_hz=2_800.0,
+    ),
+    RadioModeName.CW: ModeSpec(
+        name=RadioModeName.CW,
+        family=ModeFamily.ANALOG,
+        hamlib_mode="CW",
+        default_bandwidth_hz=500.0,
+    ),
+    RadioModeName.FLDIGI: ModeSpec(
+        name=RadioModeName.FLDIGI,
+        family=ModeFamily.DIGITAL_TEXT,
+        hamlib_mode="DIG",
+        external_modem="FLDIGI",
+        default_bandwidth_hz=3_000.0,
+    ),
+    RadioModeName.PSK31: ModeSpec(
+        name=RadioModeName.PSK31,
+        family=ModeFamily.DIGITAL_TEXT,
+        hamlib_mode="DIG",
+        external_modem="PSK31",
+        default_bandwidth_hz=100.0,
+    ),
+    RadioModeName.RTTY: ModeSpec(
+        name=RadioModeName.RTTY,
+        family=ModeFamily.DIGITAL_TEXT,
+        hamlib_mode="DIG",
+        external_modem="RTTY",
+        default_bandwidth_hz=500.0,
+    ),
+    RadioModeName.FT8: ModeSpec(
+        name=RadioModeName.FT8,
+        family=ModeFamily.DIGITAL_WEAK_SIGNAL,
+        hamlib_mode="DIG",
+        external_modem="FT8",
+        default_bandwidth_hz=3_000.0,
+    ),
+    RadioModeName.AX25: ModeSpec(
+        name=RadioModeName.AX25,
+        family=ModeFamily.PACKET,
+        hamlib_mode="PKTUSB",
+        external_modem="AX25",
+        default_bandwidth_hz=3_000.0,
+    ),
+    RadioModeName.APRS: ModeSpec(
+        name=RadioModeName.APRS,
+        family=ModeFamily.PACKET,
+        hamlib_mode="PKTUSB",
+        external_modem="APRS",
+        default_bandwidth_hz=3_000.0,
+    ),
+}
+
+
+def normalize_mode(value: str | RadioModeName | None, *, default: RadioModeName = RadioModeName.NFM) -> RadioModeName:
+    """Normalize a user/API mode string into a RadioModeName."""
+    if value is None:
+        return default
+    if isinstance(value, RadioModeName):
+        return value
+    raw = str(value).strip().upper()
+    if raw in ("FM", "NFM"):
+        return RadioModeName.NFM
+    if raw in ("USB", "SSB", "SSB_USB"):
+        return RadioModeName.USB
+    if raw in ("LSB", "SSB_LSB"):
+        return RadioModeName.LSB
+    if raw in ("AM",):
+        return RadioModeName.AM
+    if raw in ("CW",):
+        return RadioModeName.CW
+    if raw in ("DIG", "DIGITAL", "FLDIGI"):
+        return RadioModeName.FLDIGI
+    if raw in ("PSK31", "PSK"):
+        return RadioModeName.PSK31
+    if raw in ("RTTY",):
+        return RadioModeName.RTTY
+    if raw in ("FT8",):
+        return RadioModeName.FT8
+    if raw in ("AX25", "AX.25"):
+        return RadioModeName.AX25
+    if raw in ("APRS",):
+        return RadioModeName.APRS
+    logger.warning(
+        "normalize_mode: unrecognized mode {!r}; falling back to {}",
+        value,
+        default.value,
+    )
+    return default
+
+
+def spec_for(mode: str | RadioModeName | None) -> ModeSpec:
+    m = normalize_mode(mode)
+    return MODE_SPECS[m]
+
+
+def hamlib_mode_for(mode: str | RadioModeName | None) -> str:
+    """Return the hamlib/rigctld mode string for a given normalized mode."""
+    return spec_for(mode).hamlib_mode
+
+
+def external_modem_for(mode: str | RadioModeName | None) -> str | None:
+    """Return external modem name (FLDIGI/FT8/etc.) if applicable."""
+    return spec_for(mode).external_modem
diff --git a/radioshaq/radioshaq/radio/packet_radio.py b/radioshaq/radioshaq/radio/packet_radio.py
index d791088..6484a2e 100644
--- a/radioshaq/radioshaq/radio/packet_radio.py
+++ b/radioshaq/radioshaq/radio/packet_radio.py
@@ -118,7 +118,7 @@ async def connect(self) -> None:
         )
         self._connected = True
         self._reader_task = asyncio.create_task(self._frame_reader())
-        logger.info("Connected to KISS TNC at %s:%d", self.kiss_host, self.kiss_port)
+        logger.info("Connected to KISS TNC at {}:{}", self.kiss_host, self.kiss_port)
 
     async def disconnect(self) -> None:
         """Disconnect from KISS TNC."""
@@ -220,9 +220,9 @@ async def _frame_reader(self) -> None:
                     try:
                         handler(frame)
                     except Exception as e:
-                        logger.warning("Frame handler error: %s", e)
+                        logger.warning("Frame handler error: {}", e)
         except asyncio.CancelledError:
             break
         except Exception as e:
-            logger.warning("KISS frame reader error: %s", e)
+            logger.warning("KISS frame reader error: {}", e)
             await asyncio.sleep(1)
diff --git a/radioshaq/radioshaq/radio/rig_manager.py b/radioshaq/radioshaq/radio/rig_manager.py
index e6cf150..a9fa36e 100644
--- a/radioshaq/radioshaq/radio/rig_manager.py
+++ b/radioshaq/radioshaq/radio/rig_manager.py
@@ -7,7 +7,7 @@
 
 from loguru import logger
 
-from radioshaq.radio.cat_control import HamlibCATControl, RigMode, RigState
+from radioshaq.radio.cat_control import HamlibCATControl, RigState
 
 
 class RigManager:
@@ -27,7 +27,7 @@ def register_rig(self, name: str, cat: HamlibCATControl) -> None:
         self._rigs[name] = cat
         if self._active_rig is None:
             self._active_rig = name
-        logger.debug("Registered rig %s", name)
+        logger.debug("Registered rig {}", name)
 
     def unregister_rig(self, name: str) -> None:
         """Remove a rig."""
@@ -54,7 +54,7 @@ async def connect_all(self) -> None:
             try:
                 await rig.connect()
             except Exception as e:
-                logger.warning("Failed to connect rig %s: %s", name, e)
+                logger.warning("Failed to connect rig {}: {}", name, e)
 
     async def set_frequency(self, frequency_hz: float, rig_name: str | None = None) -> None:
         """Set frequency on active or specified rig."""
@@ -72,7 +72,7 @@ async def set_ptt(self, state: bool, rig_name: str | None = None) -> None:
         async with self._lock:
             await rig.set_ptt(state)
 
-    async def set_mode(self, mode: RigMode | str, rig_name: str | None = None) -> None:
+    async def set_mode(self, mode: str, rig_name: str | None = None) -> None:
         """Set mode on active or specified rig."""
         rig = self.get_rig(rig_name)
         if not rig:
diff --git a/radioshaq/radioshaq/radio/sdr_tx.py b/radioshaq/radioshaq/radio/sdr_tx.py
index 128c145..9f89929 100644
--- a/radioshaq/radioshaq/radio/sdr_tx.py
+++ b/radioshaq/radioshaq/radio/sdr_tx.py
@@ -3,13 +3,60 @@
 from __future__ import annotations
 
 import asyncio
+import base64
 from typing import Any, Protocol
 
+import httpx
 import numpy as np
 from loguru import logger
 
-from radioshaq.radio.bands import BAND_PLANS
-from radioshaq.radio.compliance import is_restricted, is_tx_allowed, log_tx
+from radioshaq.compliance_plugin import get_backend
+from radioshaq.radio.bands import BAND_PLANS, BandPlan
+from radioshaq.radio.compliance import is_restricted, is_tx_allowed, is_tx_spectrum_allowed, log_tx
+from radioshaq.radio.hackrf_tx_compat import stream_hackrf_iq_bytes
+
+
+class _CompatAsyncClient(httpx.AsyncClient):
+    """
+    Backwards-compatible AsyncClient shim that accepts an optional ``app`` kwarg.
+
+    Older tests expect ``httpx.AsyncClient(app=..., base_url=...)`` even though
+    recent httpx versions removed this parameter in favor of ASGITransport.
+    This shim converts ``app`` into an appropriate transport and then delegates
+    to the real AsyncClient implementation.
+ """ + + def __init__(self, *args: Any, app: Any | None = None, **kwargs: Any) -> None: + if app is not None and "transport" not in kwargs: + from httpx import ASGITransport + + kwargs["transport"] = ASGITransport(app=app) + super().__init__(*args, **kwargs) + + +# Module-local alias; does NOT mutate the global httpx namespace. +# Tests can monkeypatch radioshaq.radio.sdr_tx._AsyncClient to inject a custom client. +_AsyncClient = _CompatAsyncClient + + +def _normalize_iq_for_broker(samples_iq: Any, sample_rate: int) -> tuple[str, float]: + """Normalize I/Q samples to int8 interleaved, then to base64. Runs in thread executor.""" + if isinstance(samples_iq, (bytes, bytearray, memoryview)): + iq = np.frombuffer(samples_iq, dtype=np.int8) + else: + s = np.asarray(samples_iq) + if np.iscomplexobj(s): + i = (np.clip(np.real(s) * 127, -128, 127)).astype(np.int8) + q = (np.clip(np.imag(s) * 127, -128, 127)).astype(np.int8) + iq = np.empty(2 * len(s), dtype=np.int8) + iq[0::2] = i + iq[1::2] = q + else: + iq = np.asarray(s, dtype=np.int8) + duration_sec = len(iq) / (2.0 * sample_rate) + iq_bytes = iq.tobytes() + iq_b64 = base64.b64encode(iq_bytes).decode("ascii") + return iq_b64, duration_sec class SDRTransmitter(Protocol): @@ -29,63 +76,117 @@ async def transmit_iq( frequency_hz: float, samples_iq: Any, sample_rate: int, + occupied_bandwidth_hz: float | None = None, ) -> None: """Transmit I/Q samples (e.g. numpy complex or int8 interleaved).""" ... -class HackRFTransmitter: - """ - HackRF TX with compliance: is_tx_allowed / is_restricted before TX, log_tx after. - Requires python_hackrf (pip install python-hackrf), libhackrf >= 2024.02.1. 
- """ +class _ComplianceCheckedTransmitter: + """Shared compliance and audit helpers for SDR transmitters.""" def __init__( self, - device_index: int = 0, - serial_number: str | None = None, - max_gain: int = 47, + *, allow_bands_only: bool = True, audit_log_path: str | None = None, restricted_region: str = "FCC", - ): - self.device_index = device_index - self.serial_number = serial_number - self.max_gain = min(47, max(0, max_gain)) + band_plan_source: dict[str, BandPlan] | None = None, + rig_or_sdr: str = "hackrf", + ) -> None: self.allow_bands_only = allow_bands_only self.audit_log_path = audit_log_path self.restricted_region = restricted_region - self._device = None + self._band_plan_source = band_plan_source + self._rig_or_sdr = rig_or_sdr - def _check_compliance(self, frequency_hz: float) -> None: + def _check_compliance(self, frequency_hz: float, occupied_bandwidth_hz: float | None = None) -> None: """Raise ValueError if TX not allowed on this frequency.""" if is_restricted(frequency_hz, region=self.restricted_region): raise ValueError(f"Frequency {frequency_hz} Hz is in a restricted band (no TX allowed)") - if self.allow_bands_only and not is_tx_allowed( - frequency_hz, - band_plan_source=BAND_PLANS, - allow_tx_only_amateur_bands=True, - restricted_region=self.restricted_region, - ): - raise ValueError(f"Frequency {frequency_hz} Hz is not in an allowed band") - - def _audit(self, frequency_hz: float, duration_sec: float, mode: str = "tone") -> None: - """Write TX audit log.""" + plans = self._band_plan_source + if plans is None: + backend = get_backend(self.restricted_region) + if backend is not None: + _plans = backend.get_band_plans() + plans = _plans if _plans is not None else BAND_PLANS + else: + plans = BAND_PLANS + if self.allow_bands_only: + if occupied_bandwidth_hz is not None and occupied_bandwidth_hz > 0: + ok = is_tx_spectrum_allowed( + frequency_hz, + float(occupied_bandwidth_hz), + band_plan_source=plans, + allow_tx_only_amateur_bands=True, + 
restricted_region=self.restricted_region, + ) + if not ok: + raise ValueError( + f"TX spectrum centered at {frequency_hz} Hz with BW={occupied_bandwidth_hz} Hz is not fully within an allowed band" + ) + else: + if not is_tx_allowed( + frequency_hz, + band_plan_source=plans, + allow_tx_only_amateur_bands=True, + restricted_region=self.restricted_region, + ): + raise ValueError(f"Frequency {frequency_hz} Hz is not in an allowed band") + + def _audit( + self, + frequency_hz: float, + duration_sec: float, + mode: str = "tone", + *, + success: bool = True, + ) -> None: + """Write TX audit log (success or failure).""" log_tx( frequency_hz=frequency_hz, duration_sec=duration_sec, mode=mode, - rig_or_sdr="hackrf", + rig_or_sdr=self._rig_or_sdr, operator_id=None, audit_log_path=self.audit_log_path, + success=success, + ) + + +class HackRFTransmitter(_ComplianceCheckedTransmitter): + """ + HackRF TX with compliance: is_tx_allowed / is_restricted before TX, log_tx after. + Requires pyhackrf2 (pip install pyhackrf2), system libhackrf. 
+ """ + + def __init__( + self, + device_index: int = 0, + serial_number: str | None = None, + max_gain: int = 47, + allow_bands_only: bool = True, + audit_log_path: str | None = None, + restricted_region: str = "FCC", + band_plan_source: dict[str, BandPlan] | None = None, + ): + self.device_index = device_index + self.serial_number = serial_number + self.max_gain = min(47, max(0, max_gain)) + super().__init__( + allow_bands_only=allow_bands_only, + audit_log_path=audit_log_path, + restricted_region=restricted_region, + band_plan_source=band_plan_source, ) + self._device = None def _open(self) -> Any: """Open HackRF device (lazy).""" if self._device is not None: return self._device try: - from hackrf import HackRF + from pyhackrf2 import HackRF if self.serial_number: self._device = HackRF(serial_number=self.serial_number) else: @@ -93,7 +194,7 @@ def _open(self) -> Any: return self._device except ImportError as e: raise RuntimeError( - "HackRF TX requires python_hackrf. Install with: uv sync --extra hackrf (or pip install python-hackrf)" + "HackRF TX requires pyhackrf2. Install with: uv sync --extra hackrf (or pip install pyhackrf2)" ) from e async def transmit_tone( @@ -103,106 +204,235 @@ async def transmit_tone( sample_rate: int = 2_000_000, ) -> None: """Transmit a simple CW-style tone. Compliance checked; audit logged.""" - self._check_compliance(frequency_hz) + # Conservative occupied BW estimate for a tone is small, but we still bound it. + self._check_compliance(frequency_hz, occupied_bandwidth_hz=25_000.0) dev = self._open() loop = asyncio.get_running_loop() - # Generate int8 interleaved I/Q for a tone (e.g. 
1 kHz at sample_rate) - import numpy as np - tone_hz = 1000.0 - num_samples = int(duration_sec * sample_rate) - t = np.arange(num_samples, dtype=np.float64) / sample_rate - i = (127 * 0.3 * np.cos(2 * np.pi * tone_hz * t)).astype(np.int8) - q = (127 * 0.3 * np.sin(2 * np.pi * tone_hz * t)).astype(np.int8) - iq = np.empty(2 * num_samples, dtype=np.int8) - iq[0::2] = i - iq[1::2] = q + success = False + def _blocking_tx() -> None: + # Generate int8 interleaved I/Q for a tone (e.g. 1 kHz at sample_rate) + # inside the executor to avoid blocking the event loop. + tone_hz = 1000.0 + num_samples = int(duration_sec * sample_rate) + t = np.arange(num_samples, dtype=np.float64) / sample_rate + i = (127 * 0.3 * np.cos(2 * np.pi * tone_hz * t)).astype(np.int8) + q = (127 * 0.3 * np.sin(2 * np.pi * tone_hz * t)).astype(np.int8) + iq = np.empty(2 * num_samples, dtype=np.int8) + iq[0::2] = i + iq[1::2] = q + try: dev.center_freq = int(frequency_hz) dev.sample_rate = sample_rate dev.txvga_gain = self.max_gain except AttributeError: + # Older or stub objects may not expose these attributes. pass - # python_hackrf TX: library may expose start_tx(callback) with - # callback(transfer) where transfer has .buffer (int8). Fill and return 0/1. try: buf = iq.tobytes() - sent = [0] - def tx_cb(transfer: Any) -> int: - try: - blen = getattr(transfer, "buffer_length", None) or len(transfer.buffer) - start = sent[0] - if start >= len(buf): - return 1 - end = min(start + blen, len(buf)) - data = buf[start:end] - transfer.buffer[:len(data)] = data - sent[0] = end - return 1 if end >= len(buf) else 0 - except Exception: - return 1 - dev.start_tx(tx_cb) - import time - time.sleep(duration_sec + 0.5) - dev.stop_tx() + stream_hackrf_iq_bytes(dev, buf, duration_sec) except (AttributeError, TypeError) as e: - logger.warning("HackRF TX not available (%s); audit only", e) + # Stub/API mismatch: skip hardware TX but still allow audit trail to record attempt. 
+ logger.warning("HackRF TX not available ({}); audit only", repr(e)) + try: await loop.run_in_executor(None, _blocking_tx) + success = True + except RuntimeError as e: + msg = str(e) + if "HACKRF_ERROR_LIBUSB" in msg or "libusb" in msg.lower(): + raise RuntimeError( + "HackRF libusb error (HACKRF_ERROR_LIBUSB). " + "Check that the device is attached to WSL (usbipd-win), " + "not in use by another process, and that libhackrf is installed." + ) from e + raise finally: - self._audit(frequency_hz, duration_sec, "tone") + self._audit(frequency_hz, duration_sec, "tone", success=success) async def transmit_iq( self, frequency_hz: float, samples_iq: Any, sample_rate: int, + occupied_bandwidth_hz: float | None = None, ) -> None: """Transmit I/Q samples. Compliance checked; audit logged.""" - self._check_compliance(frequency_hz) + # Sample rate is not the signal bandwidth; default to a center-frequency-only check. + self._check_compliance(frequency_hz, occupied_bandwidth_hz=occupied_bandwidth_hz) # Convert to int8 interleaved if needed, then same as tone path - s = np.asarray(samples_iq) - if np.iscomplexobj(s): - i = (np.clip(np.real(s) * 127, -128, 127)).astype(np.int8) - q = (np.clip(np.imag(s) * 127, -128, 127)).astype(np.int8) - iq = np.empty(2 * len(s), dtype=np.int8) - iq[0::2] = i - iq[1::2] = q + # Normalize samples into int8 interleaved IQ. Support both numpy arrays and raw bytes. 
+        if isinstance(samples_iq, (bytes, bytearray, memoryview)):
+            iq = np.frombuffer(samples_iq, dtype=np.int8)
         else:
-            iq = np.asarray(s, dtype=np.int8)
+            s = np.asarray(samples_iq)
+            if np.iscomplexobj(s):
+                i = (np.clip(np.real(s) * 127, -128, 127)).astype(np.int8)
+                q = (np.clip(np.imag(s) * 127, -128, 127)).astype(np.int8)
+                iq = np.empty(2 * len(s), dtype=np.int8)
+                iq[0::2] = i
+                iq[1::2] = q
+            else:
+                iq = np.asarray(s, dtype=np.int8)
         duration_sec = len(iq) / (2.0 * sample_rate)
         dev = self._open()
         loop = asyncio.get_running_loop()
+        success = False
+
+        def _blocking_tx() -> None:
             try:
                 dev.center_freq = int(frequency_hz)
                 dev.sample_rate = sample_rate
                 dev.txvga_gain = self.max_gain
             except AttributeError:
+                # Older or stub objects may not expose these attributes.
                 pass
             try:
                 buf = iq.tobytes()
-                sent = [0]
-                def tx_cb(transfer: Any) -> int:
-                    try:
-                        blen = getattr(transfer, "buffer_length", None) or len(transfer.buffer)
-                        start = sent[0]
-                        if start >= len(buf):
-                            return 1
-                        end = min(start + blen, len(buf))
-                        data = buf[start:end]
-                        transfer.buffer[:len(data)] = data
-                        sent[0] = end
-                        return 1 if end >= len(buf) else 0
-                    except Exception:
-                        return 1
-                dev.start_tx(tx_cb)
-                import time
-                time.sleep(duration_sec + 0.5)
-                dev.stop_tx()
+                stream_hackrf_iq_bytes(dev, buf, duration_sec)
             except (AttributeError, TypeError) as e:
-                logger.warning("HackRF TX not available (%s); audit only", e)
+                # Stub/API mismatch: skip hardware TX but still allow audit trail to record attempt.
+                logger.warning("HackRF TX not available ({}); audit only", repr(e))
+
+        try:
+            await loop.run_in_executor(None, _blocking_tx)
+            success = True
+        except RuntimeError as e:
+            msg = str(e)
+            if "HACKRF_ERROR_LIBUSB" in msg or "libusb" in msg.lower():
+                raise RuntimeError(
+                    "HackRF libusb error (HACKRF_ERROR_LIBUSB). "
+                    "Check that the device is attached to WSL (usbipd-win), "
+                    "not in use by another process, and that libhackrf is installed."
+                ) from e
+            raise
+        finally:
+            self._audit(frequency_hz, duration_sec, "iq", success=success)
+
+
+class HackRFServiceClient(_ComplianceCheckedTransmitter):
+    """
+    Remote HackRF TX client that delegates to a HackRF broker service over HTTP.
+    Implements the same SDRTransmitter interface as HackRFTransmitter.
+    """
+
+    def __init__(
+        self,
+        base_url: str,
+        *,
+        auth_token: str | None = None,
+        request_timeout_sec: float = 15.0,
+        allow_bands_only: bool = True,
+        audit_log_path: str | None = None,
+        restricted_region: str = "FCC",
+        band_plan_source: dict[str, BandPlan] | None = None,
+    ) -> None:
+        super().__init__(
+            allow_bands_only=allow_bands_only,
+            audit_log_path=audit_log_path,
+            restricted_region=restricted_region,
+            band_plan_source=band_plan_source,
+            rig_or_sdr="hackrf_broker",
+        )
+        self._base_url = base_url.rstrip("/")
+        self._auth_token = auth_token
+        self._timeout = request_timeout_sec
+
+    async def _post(self, path: str, payload: dict[str, Any]) -> dict[str, Any]:
+        headers: dict[str, str] = {}
+        if self._auth_token:
+            headers["Authorization"] = f"Bearer {self._auth_token}"
+        async with _AsyncClient(base_url=self._base_url, timeout=self._timeout, headers=headers) as client:
+            try:
+                response = await client.post(path, json=payload)
+            except httpx.RequestError as e:
+                raise RuntimeError(
+                    f"HackRF broker unreachable at {self._base_url}: {e!r}"
+                ) from e
+            response.raise_for_status()
+            try:
+                data = response.json()
+            except ValueError:
+                data = {}
+            return data
+
+    async def transmit_tone(
+        self,
+        frequency_hz: float,
+        duration_sec: float,
+        sample_rate: int = 2_000_000,
+    ) -> None:
+        """Transmit a simple tone via the remote HackRF broker."""
+        # Conservative occupied BW estimate for a tone.
+        self._check_compliance(frequency_hz, occupied_bandwidth_hz=25_000.0)
+        success = False
+        try:
+            await self._post(
+                "/tx/tone",
+                {
+                    "frequency_hz": frequency_hz,
+                    "duration_sec": duration_sec,
+                    "sample_rate": sample_rate,
+                },
+            )
+            success = True
+        except httpx.HTTPStatusError as e:
+            detail = ""
+            try:
+                data = e.response.json()
+                detail = data.get("detail") or ""
+            except Exception:
+                detail = e.response.text
+            if "HACKRF_ERROR_LIBUSB" in detail or "libusb" in detail.lower():
+                raise RuntimeError(
+                    "HackRF libusb error (HACKRF_ERROR_LIBUSB). "
+                    "Check that the device is attached to WSL (usbipd-win), "
+                    "not in use by another process, and that libhackrf is installed."
+                ) from e
+            raise
+        finally:
+            self._audit(frequency_hz, duration_sec, "tone", success=success)
+
+    async def transmit_iq(
+        self,
+        frequency_hz: float,
+        samples_iq: Any,
+        sample_rate: int,
+        occupied_bandwidth_hz: float | None = None,
+    ) -> None:
+        """Transmit I/Q samples via the remote HackRF broker."""
+        self._check_compliance(frequency_hz, occupied_bandwidth_hz=occupied_bandwidth_hz)
+        # Offload heavy numpy/base64 work to a thread to avoid blocking the event loop.
+        loop = asyncio.get_running_loop()
+        iq_b64, duration_sec = await loop.run_in_executor(
+            None, _normalize_iq_for_broker, samples_iq, sample_rate
+        )
+        success = False
+        try:
+            await self._post(
+                "/tx/iq",
+                {
+                    "frequency_hz": frequency_hz,
+                    "sample_rate": sample_rate,
+                    "iq_b64": iq_b64,
+                    "occupied_bandwidth_hz": occupied_bandwidth_hz,
+                },
+            )
+            success = True
+        except httpx.HTTPStatusError as e:
+            detail = ""
+            try:
+                data = e.response.json()
+                detail = data.get("detail") or ""
+            except Exception:
+                detail = e.response.text
+            if "HACKRF_ERROR_LIBUSB" in detail or "libusb" in detail.lower():
+                raise RuntimeError(
+                    "HackRF libusb error (HACKRF_ERROR_LIBUSB). "
+                    "Check that the device is attached to WSL (usbipd-win), "
+                    "not in use by another process, and that libhackrf is installed."
+ ) from e + raise finally: - self._audit(frequency_hz, duration_sec, "iq") + self._audit(frequency_hz, duration_sec, "iq", success=success) diff --git a/radioshaq/radioshaq/relay/service.py b/radioshaq/radioshaq/relay/service.py index ab717e4..c5e510d 100644 --- a/radioshaq/radioshaq/relay/service.py +++ b/radioshaq/radioshaq/relay/service.py @@ -1,15 +1,19 @@ -"""Shared relay service: store source + relayed transcripts; optional inject and radio_tx. +"""Shared relay service: store source + relayed transcripts; optional inject, radio_tx, or SMS/WhatsApp. Used by POST /messages/relay and by the relay_message_between_bands tool. -Default is store-only; recipient polls GET /transcripts. Inject and TX only when config enables. +When target_channel is sms or whatsapp, the relayed message is delivered via the outbound bus +(relay_delivery worker publishes to message_bus; outbound dispatcher sends via SMS/WhatsApp). """ from __future__ import annotations +import uuid +from datetime import datetime, timezone from typing import Any from loguru import logger +from radioshaq.compliance_plugin import get_band_plan_source_for_config from radioshaq.radio.bands import BAND_PLANS @@ -31,35 +35,116 @@ async def relay_message_between_bands_service( source_audio_path: str | None = None, target_audio_path: str | None = None, store_only_relayed: bool = False, + target_channel: str = "radio", + destination_phone: str | None = None, + emergency: bool = False, + message_bus: Any = None, ) -> dict[str, Any]: """ - Relay a message from source band to target band: store two transcripts, - optionally inject and/or TX on target band when deliver_at is not set. - - Returns dict with ok, source_transcript_id, relayed_transcript_id (when stored), - source_band, target_band, session_id, deliver_at. When no storage, returns - ok=True, relay="no_storage" and band/freq/callsign info only. + Relay a message from source band to target (radio band, sms, or whatsapp). 
+ - target_channel "radio": store and optionally inject/TX on target_band (existing behavior). + - target_channel "sms" or "whatsapp": store relayed row with delivery_channel and + destination_phone in metadata; relay_delivery worker will publish to bus for outbound delivery. + - emergency=True and target_channel sms/whatsapp: check region allowlist; if approval_required, + create coordination_events row (status=pending) and return queued_for_approval (no immediate delivery). """ - if source_band not in BAND_PLANS or target_band not in BAND_PLANS: - return { - "ok": False, - "error": "Unknown band; use e.g. 40m, 2m, 20m", - "source_band": source_band, - "target_band": target_band, - } + is_sms_whatsapp = target_channel in ("sms", "whatsapp") + if emergency and is_sms_whatsapp: + from radioshaq.messaging_compliance import emergency_messaging_allowed + region = getattr(getattr(config, "radio", None) or config, "restricted_bands_region", None) or "" + ec_cfg = getattr(config, "emergency_contact", None) + if not emergency_messaging_allowed(region, ec_cfg): + return { + "ok": False, + "error": "Emergency SMS/WhatsApp not allowed in this region", + "target_channel": target_channel, + } + if getattr(ec_cfg, "approval_required", True): + if not storage or storage.db is None: + return { + "ok": False, + "error": "Emergency approval required but database is unavailable", + "target_channel": target_channel, + } + db = storage.db + if not hasattr(db, "store_coordination_event"): + return { + "ok": False, + "error": "Emergency approval required but store_coordination_event not available", + "target_channel": target_channel, + } + dest_phone = (destination_phone or "").strip() + if not dest_phone: + return {"ok": False, "error": "destination_phone required for emergency relay", "target_channel": target_channel} + event_id = await db.store_coordination_event( + event_type="emergency", + initiator_callsign=source_callsign or "UNKNOWN", + target_callsign=destination_callsign, + 
status="pending", + priority=1, + notes=message[:500] if message else None, + extra_data={ + "emergency_contact_phone": dest_phone, + "emergency_contact_channel": target_channel, + "message": message, + }, + ) + return { + "ok": True, + "queued_for_approval": True, + "event_id": event_id, + "target_channel": target_channel, + } + if config is not None: + radio_cfg = getattr(config, "radio", config) + band_plans = get_band_plan_source_for_config( + getattr(radio_cfg, "restricted_bands_region", "FCC"), + getattr(radio_cfg, "band_plan_region", None), + ) + else: + band_plans = BAND_PLANS - source_plan = BAND_PLANS[source_band] - target_plan = BAND_PLANS[target_band] - source_freq = source_frequency_hz or ( - source_plan.freq_start_hz + (source_plan.freq_end_hz - source_plan.freq_start_hz) / 2 - ) - target_freq = target_frequency_hz or ( - target_plan.freq_start_hz + (target_plan.freq_end_hz - target_plan.freq_start_hz) / 2 - ) - mode = (source_plan.modes or ["SSB"])[0] - target_mode = (target_plan.modes or ["FM"])[0] + if is_sms_whatsapp: + if not destination_phone or not str(destination_phone).strip(): + return { + "ok": False, + "error": "destination_phone required when target_channel is sms or whatsapp", + "target_channel": target_channel, + } + destination_phone = str(destination_phone).strip() + if source_band not in band_plans: + return { + "ok": False, + "error": "Unknown source_band; use e.g. 40m, 2m, 20m", + "source_band": source_band, + } + source_plan = band_plans[source_band] + source_freq = source_frequency_hz or ( + source_plan.freq_start_hz + (source_plan.freq_end_hz - source_plan.freq_start_hz) / 2 + ) + target_freq = 0.0 + target_mode = "n/a" + mode = (source_plan.modes or ["SSB"])[0] + else: + if source_band not in band_plans or target_band not in band_plans: + return { + "ok": False, + "error": "Unknown band; use e.g. 
40m, 2m, 20m", + "source_band": source_band, + "target_band": target_band, + } + source_plan = band_plans[source_band] + target_plan = band_plans[target_band] + source_freq = source_frequency_hz or ( + source_plan.freq_start_hz + (source_plan.freq_end_hz - source_plan.freq_start_hz) / 2 + ) + target_freq = target_frequency_hz or ( + target_plan.freq_start_hz + (target_plan.freq_end_hz - target_plan.freq_start_hz) / 2 + ) + mode = (source_plan.modes or ["SSB"])[0] + target_mode = (target_plan.modes or ["FM"])[0] - if not storage or not getattr(storage, "_db", None): + if not storage or getattr(storage, "db", None) is None: return { "ok": True, "relay": "no_storage", @@ -72,9 +157,9 @@ async def relay_message_between_bands_service( "destination_callsign": destination_callsign, "session_id": session_id or "relay-no-storage", "deliver_at": deliver_at, + "target_channel": target_channel, } - import uuid sid = session_id or f"relay-{uuid.uuid4().hex[:12]}" orig_id: int | None = None @@ -92,14 +177,22 @@ async def relay_message_between_bands_service( ) relay_metadata = { - "band": target_band, + "band": target_band if not is_sms_whatsapp else target_channel, "relay_role": "relayed", "relay_from_transcript_id": orig_id, "relay_from_band": source_band, "relay_from_frequency_hz": source_freq, } - if deliver_at: + if is_sms_whatsapp: + relay_metadata["delivery_channel"] = target_channel + relay_metadata["destination_phone"] = destination_phone + if deliver_at: + relay_metadata["deliver_at"] = deliver_at + else: + relay_metadata["deliver_at"] = datetime.now(timezone.utc).isoformat() + elif deliver_at: relay_metadata["deliver_at"] = deliver_at + relay_id = await storage.store( session_id=sid, source_callsign=source_callsign, @@ -111,11 +204,34 @@ async def relay_message_between_bands_service( raw_audio_path=target_audio_path, ) - immediate = not deliver_at + # Immediate SMS/WhatsApp dispatch when no deliver_at and bus available (avoids up-to-60s worker delay) + if 
is_sms_whatsapp and not deliver_at and message_bus and hasattr(message_bus, "publish_outbound"): + try: + from radioshaq.vendor.nanobot.bus.events import OutboundMessage + ok = await message_bus.publish_outbound( + OutboundMessage( + channel=target_channel, + chat_id=destination_phone or "", + content=message, + reply_to=None, + media=[], + metadata={"relay_transcript_id": relay_id, "source_callsign": source_callsign}, + ) + ) + if ok and storage and storage.db and hasattr(storage.db, "mark_transcript_delivery_done"): + await storage.db.mark_transcript_delivery_done(relay_id) + except Exception as e: + logger.warning( + "Relay immediate publish_outbound failed for transcript {} ({}); worker will retry", + relay_id, + e, + ) + + immediate = not deliver_at and not is_sms_whatsapp radio_cfg = getattr(config, "radio", None) if config else None if not radio_cfg: radio_cfg = config - if immediate and radio_cfg: + if immediate and radio_cfg and not is_sms_whatsapp: if getattr(radio_cfg, "relay_inject_target_band", False) and injection_queue: injection_queue.inject_message( text=message, @@ -134,7 +250,7 @@ async def relay_message_between_bands_service( "mode": target_mode, }) except Exception as e: - logger.warning("Relay radio_tx on target band failed: %s", e) + logger.warning("Relay radio_tx on target band failed: {}", e) return { "ok": True, @@ -146,4 +262,5 @@ async def relay_message_between_bands_service( "target_frequency_hz": target_freq, "session_id": sid, "deliver_at": deliver_at, + "target_channel": target_channel, } diff --git a/radioshaq/radioshaq/remote_receiver/__init__.py b/radioshaq/radioshaq/remote_receiver/__init__.py index 743a46c..6bfca17 100644 --- a/radioshaq/radioshaq/remote_receiver/__init__.py +++ b/radioshaq/radioshaq/remote_receiver/__init__.py @@ -1 +1 @@ -"""SHAKODS remote receiver: auth, SDR, signal processing, HQ client (bundled in radioshaq).""" +"""RadioShaq remote receiver: auth, SDR, signal processing, HQ client (bundled in 
radioshaq).""" diff --git a/radioshaq/radioshaq/remote_receiver/auth.py b/radioshaq/radioshaq/remote_receiver/auth.py index 6695156..584029f 100644 --- a/radioshaq/radioshaq/remote_receiver/auth.py +++ b/radioshaq/radioshaq/remote_receiver/auth.py @@ -19,7 +19,7 @@ class ReceiverTokenPayload(BaseModel): class JWTReceiverAuth: - """Verify JWT tokens issued by SHAKODS HQ for receiver stations.""" + """Verify JWT tokens issued by RadioShaq HQ for receiver stations.""" def __init__(self, secret: str | None = None, algorithm: str = "HS256"): self.secret = secret or os.environ.get("JWT_SECRET", "") diff --git a/radioshaq/radioshaq/remote_receiver/backends/base.py b/radioshaq/radioshaq/remote_receiver/backends/base.py index 4587f58..fbf464d 100644 --- a/radioshaq/radioshaq/remote_receiver/backends/base.py +++ b/radioshaq/radioshaq/remote_receiver/backends/base.py @@ -32,3 +32,17 @@ async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: async def close(self) -> None: """Release device.""" ... + + async def configure( + self, + *, + mode: str | None = None, + audio_rate_hz: int | None = None, + bfo_hz: float | None = None, + ) -> None: + """Optional: configure demod settings for this backend. + + Backends that support analog demod may implement this to change mode or audio rate + per stream connection. 
+ """ + _ = mode, audio_rate_hz, bfo_hz diff --git a/radioshaq/radioshaq/remote_receiver/backends/hackrf_backend.py b/radioshaq/radioshaq/remote_receiver/backends/hackrf_backend.py index e19c684..d080f8f 100644 --- a/radioshaq/radioshaq/remote_receiver/backends/hackrf_backend.py +++ b/radioshaq/radioshaq/remote_receiver/backends/hackrf_backend.py @@ -1,8 +1,9 @@ -"""HackRF RX backend (python_hackrf).""" +"""HackRF RX backend (pyhackrf2).""" from __future__ import annotations import asyncio +import os from datetime import datetime, timezone from typing import AsyncIterator @@ -11,27 +12,60 @@ from radioshaq.remote_receiver.backends.base import SDRBackend from radioshaq.remote_receiver.radio_interface import SignalSample +from radioshaq.remote_receiver.dsp.nfm import NfmConfig, NfmDemodulator, float_to_pcm16 +from radioshaq.remote_receiver.dsp.analog import ( + AnalogConfig, + AmDemodulator, + CwAudioDemodulator, + SsbDemodulator, +) class HackRFBackend(SDRBackend): - """HackRF receive backend using python_hackrf. Pip: python-hackrf (libhackrf 2024.02.1+).""" + """HackRF receive backend using pyhackrf2. Pip: pyhackrf2 (requires system libhackrf).""" def __init__( self, device_index: int = 0, serial_number: str | None = None, sample_rate: int = 10_000_000, + device_manager: object | None = None, + broker: object | None = None, ): self.device_index = device_index self.serial_number = serial_number self.sample_rate = sample_rate + # When provided, device manager owns the single HackRF instance and + # coordinates access across RX and TX. When absent, this backend + # manages its own device (legacy mode). + self._device_manager = device_manager + # Optional broker for RX/TX scheduling flags (e.g. should_stop_rx, rx_active). 
+ self._broker = broker self._frequency_hz: float = 0.0 self._device = None + self._rx_mode = os.environ.get("RECEIVER_MODE", "none").strip().lower() + self._audio_rate = int(os.environ.get("RECEIVER_AUDIO_RATE", "48000")) + self._bfo_hz = float(os.environ.get("RECEIVER_BFO_HZ", "1500")) + self._nfm: NfmDemodulator | None = None + self._am: AmDemodulator | None = None + self._ssb: SsbDemodulator | None = None + self._cw: CwAudioDemodulator | None = None async def initialize(self) -> None: - """Open HackRF device by index or serial.""" + """Open HackRF device by index or serial (legacy mode only). + + When a shared device manager is provided, it is responsible for + opening the underlying device so this method becomes a no-op. + """ + if self._device_manager is not None: + logger.info( + "HackRFBackend using shared HackRFDeviceManager (index={}, serial={})", + self.device_index, + self.serial_number or "default", + ) + return try: - from hackrf import HackRF + from pyhackrf2 import HackRF if self.serial_number: self._device = HackRF(serial_number=self.serial_number) @@ -39,52 +73,184 @@ async def initialize(self) -> None: self._device = HackRF(device_index=self.device_index) self._device.sample_rate = self.sample_rate logger.info( - "HackRF initialized (index=%s, serial=%s)", + "HackRF initialized (index={}, serial={})", self.device_index, self.serial_number or "default", ) except ImportError: logger.warning( - "python_hackrf not installed; install with: uv sync --extra hackrf" + "pyhackrf2 not installed; install with: uv sync --extra hackrf" ) except Exception as e: - logger.warning("HackRF init failed: %s", e) + logger.warning("HackRF init failed: {}", repr(e)) async def set_frequency(self, frequency_hz: float) -> None: """Tune to frequency in Hz (1 MHz–6 GHz).""" self._frequency_hz = frequency_hz - if self._device: + if self._device_manager is not None: + loop = asyncio.get_running_loop() + + async def _set(dev) -> None: + await loop.run_in_executor( + None, 
lambda: setattr(dev, "center_freq", int(frequency_hz)) + ) + + try: + await self._device_manager.with_device(_set) + except Exception as e: + logger.warning("HackRF set_frequency via manager failed: {}", repr(e)) + elif self._device: self._device.center_freq = int(frequency_hz) + # Reset demod state on retune. + self._nfm = None + self._am = None + self._ssb = None + self._cw = None + + async def configure( + self, + *, + mode: str | None = None, + audio_rate_hz: int | None = None, + bfo_hz: float | None = None, + ) -> None: + if mode is not None: + self._rx_mode = str(mode).strip().lower() + if audio_rate_hz is not None: + self._audio_rate = int(audio_rate_hz) + if bfo_hz is not None: + self._bfo_hz = float(bfo_hz) + # Reset state so settings take effect cleanly + self._nfm = None + self._am = None + self._ssb = None + self._cw = None async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: """Stream signal samples: read I/Q via read_samples, compute power -> dB.""" loop = asyncio.get_running_loop() end = loop.time() + duration_seconds num_samples = 8192 - while loop.time() < end: - if self._device: - try: - samples = await loop.run_in_executor( - None, - lambda: self._device.read_samples(num_samples), - ) - if hasattr(samples, "dtype") and np.iscomplexobj(samples): - power = np.mean(np.abs(samples) ** 2) - else: - power = np.mean(np.abs(samples) ** 2) - strength_db = 10.0 * np.log10(power + 1e-30) if power > 0 else -120.0 - except Exception as e: - logger.debug("HackRF read failed: %s", e) + broker = getattr(self, "_broker", None) + if broker is not None: + broker.rx_active.set() + try: + while loop.time() < end: + if broker is not None and broker.should_stop_rx: + break + audio_pcm: bytes | None = None + s = None + strength_db = -120.0 + if self._device_manager is not None: + + async def _read(dev): + return await loop.run_in_executor( + None, lambda: dev.read_samples(num_samples) + ) + + try: + samples = await 
self._device_manager.with_device(_read) + s = np.asarray(samples) + power = np.mean(np.abs(s) ** 2) + strength_db = ( + 10.0 * np.log10(power + 1e-30) if power > 0 else -120.0 + ) + except Exception as e: + logger.warning( + "HackRF read_samples() via manager failed: {}", repr(e) + ) + strength_db = -100.0 + s = None + elif self._device: + try: + samples = await loop.run_in_executor( + None, + lambda: self._device.read_samples(num_samples), + ) + # pyhackrf2 returns complex64 IQ. + s = np.asarray(samples) + power = np.mean(np.abs(s) ** 2) + strength_db = ( + 10.0 * np.log10(power + 1e-30) if power > 0 else -120.0 + ) + except Exception as e: + logger.warning("HackRF read_samples() failed: {}", repr(e)) + strength_db = -100.0 + s = None + else: strength_db = -100.0 - else: - strength_db = -100.0 - yield SignalSample( - timestamp=datetime.now(timezone.utc), - frequency_hz=self._frequency_hz, - strength_db=float(strength_db), - decoded_data=None, - ) - await asyncio.sleep(0.1) + if s is not None: + try: + # Run CPU-heavy demod (resample_poly / FIR) in executor to avoid blocking the event loop. 
+ if self._rx_mode in {"nfm", "fm"}: + if self._nfm is None: + self._nfm = NfmDemodulator( + NfmConfig(audio_rate_hz=self._audio_rate), + rf_rate_hz=self.sample_rate, + ) + nfm = self._nfm + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(nfm.demod(s)) + ) + elif self._rx_mode in {"am"}: + if self._am is None: + self._am = AmDemodulator( + AnalogConfig( + audio_rate_hz=self._audio_rate, + bfo_hz=self._bfo_hz, + ), + rf_rate_hz=self.sample_rate, + ) + am = self._am + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(am.demod(s)) + ) + elif self._rx_mode in {"usb", "lsb"}: + if self._ssb is None: + self._ssb = SsbDemodulator( + AnalogConfig( + audio_rate_hz=self._audio_rate, + bfo_hz=self._bfo_hz, + ), + rf_rate_hz=self.sample_rate, + sideband=self._rx_mode.upper(), + ) + ssb = self._ssb + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(ssb.demod(s)) + ) + elif self._rx_mode in {"cw"}: + if self._cw is None: + self._cw = CwAudioDemodulator( + AnalogConfig( + audio_rate_hz=self._audio_rate, + bfo_hz=self._bfo_hz, + ), + rf_rate_hz=self.sample_rate, + ) + cw = self._cw + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(cw.demod(s)) + ) + except Exception as e: + logger.warning( + "HackRF demodulation failed (mode={}): {}", + self._rx_mode, + repr(e), + ) + audio_pcm = None + yield SignalSample( + timestamp=datetime.now(timezone.utc), + frequency_hz=self._frequency_hz, + strength_db=float(strength_db), + decoded_data=None, + raw_data=audio_pcm, + mode=self._rx_mode if self._rx_mode != "none" else "", + ) + await asyncio.sleep(0.1) + finally: + if broker is not None: + broker.rx_active.clear() async def close(self) -> None: """Release HackRF.""" @@ -92,5 +258,5 @@ async def close(self) -> None: try: self._device.close() except Exception as e: - logger.warning("HackRF close: %s", e) + logger.warning("HackRF close: {}", e) self._device = None diff --git 
a/radioshaq/radioshaq/remote_receiver/backends/rtlsdr_backend.py b/radioshaq/radioshaq/remote_receiver/backends/rtlsdr_backend.py index c907769..bc32507 100644 --- a/radioshaq/radioshaq/remote_receiver/backends/rtlsdr_backend.py +++ b/radioshaq/radioshaq/remote_receiver/backends/rtlsdr_backend.py @@ -4,6 +4,7 @@ import asyncio from datetime import datetime, timezone +import os from typing import AsyncIterator import numpy as np @@ -11,6 +12,13 @@ from radioshaq.remote_receiver.backends.base import SDRBackend from radioshaq.remote_receiver.radio_interface import SignalSample +from radioshaq.remote_receiver.dsp.nfm import NfmConfig, NfmDemodulator, float_to_pcm16 +from radioshaq.remote_receiver.dsp.analog import ( + AnalogConfig, + AmDemodulator, + CwAudioDemodulator, + SsbDemodulator, +) class RtlSdrBackend(SDRBackend): @@ -21,6 +29,13 @@ def __init__(self, device_index: int = 0, sample_rate: int = 2_400_000): self.sample_rate = sample_rate self._frequency_hz: float = 0.0 self._rtl = None + self._rx_mode = os.environ.get("RECEIVER_MODE", "none").strip().lower() + self._audio_rate = int(os.environ.get("RECEIVER_AUDIO_RATE", "48000")) + self._bfo_hz = float(os.environ.get("RECEIVER_BFO_HZ", "1500")) + self._nfm: NfmDemodulator | None = None + self._am: AmDemodulator | None = None + self._ssb: SsbDemodulator | None = None + self._cw: CwAudioDemodulator | None = None async def initialize(self) -> None: """Open RTL-SDR device.""" @@ -29,17 +44,39 @@ async def initialize(self) -> None: self._rtl = rtlsdr.RtlSdr(self.device_index) self._rtl.sample_rate = self.sample_rate - logger.info("RTL-SDR initialized (device_index=%s)", self.device_index) + logger.info("RTL-SDR initialized (device_index={})", self.device_index) except ImportError: logger.warning("pyrtlsdr not installed; RTL-SDR backend will yield stub samples") except Exception as e: - logger.warning("RTL-SDR init failed: %s", e) + logger.warning("RTL-SDR init failed: {}", e) async def set_frequency(self, 
frequency_hz: float) -> None: """Tune to frequency in Hz.""" self._frequency_hz = frequency_hz if self._rtl: self._rtl.center_freq = int(frequency_hz) + self._nfm = None + self._am = None + self._ssb = None + self._cw = None + + async def configure( + self, + *, + mode: str | None = None, + audio_rate_hz: int | None = None, + bfo_hz: float | None = None, + ) -> None: + if mode is not None: + self._rx_mode = str(mode).strip().lower() + if audio_rate_hz is not None: + self._audio_rate = int(audio_rate_hz) + if bfo_hz is not None: + self._bfo_hz = float(bfo_hz) + self._nfm = None + self._am = None + self._ssb = None + self._cw = None async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: """Stream signal samples: read I/Q in chunks, compute power (dB), yield SignalSample.""" @@ -47,16 +84,60 @@ async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: end = loop.time() + duration_seconds chunk_size = 8192 while loop.time() < end: + audio_pcm: bytes | None = None if self._rtl: try: samples = await loop.run_in_executor( None, lambda: self._rtl.read_samples(chunk_size), ) - power = np.mean(np.abs(samples) ** 2) + s = np.asarray(samples) + power = np.mean(np.abs(s) ** 2) strength_db = 10.0 * np.log10(power + 1e-30) if power > 0 else -120.0 + # Run CPU-heavy demod in executor to avoid blocking the event loop (same as hackrf_backend). 
+ if self._rx_mode in {"nfm", "fm"}: + if self._nfm is None: + self._nfm = NfmDemodulator( + NfmConfig(audio_rate_hz=self._audio_rate), + rf_rate_hz=self.sample_rate, + ) + nfm = self._nfm + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(nfm.demod(s)) + ) + elif self._rx_mode in {"am"}: + if self._am is None: + self._am = AmDemodulator( + AnalogConfig(audio_rate_hz=self._audio_rate, bfo_hz=self._bfo_hz), + rf_rate_hz=self.sample_rate, + ) + am = self._am + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(am.demod(s)) + ) + elif self._rx_mode in {"usb", "lsb"}: + if self._ssb is None: + self._ssb = SsbDemodulator( + AnalogConfig(audio_rate_hz=self._audio_rate, bfo_hz=self._bfo_hz), + rf_rate_hz=self.sample_rate, + sideband=self._rx_mode.upper(), + ) + ssb = self._ssb + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(ssb.demod(s)) + ) + elif self._rx_mode in {"cw"}: + if self._cw is None: + self._cw = CwAudioDemodulator( + AnalogConfig(audio_rate_hz=self._audio_rate, bfo_hz=self._bfo_hz), + rf_rate_hz=self.sample_rate, + ) + cw = self._cw + audio_pcm = await loop.run_in_executor( + None, lambda: float_to_pcm16(cw.demod(s)) + ) except Exception as e: - logger.debug("RTL-SDR read failed: %s", e) + logger.debug("RTL-SDR read failed: {}", e) strength_db = -100.0 else: strength_db = -100.0 @@ -65,6 +146,8 @@ async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: frequency_hz=self._frequency_hz, strength_db=float(strength_db), decoded_data=None, + raw_data=audio_pcm, + mode=self._rx_mode if self._rx_mode != "none" else "", ) await asyncio.sleep(0.1) @@ -74,5 +157,5 @@ async def close(self) -> None: try: self._rtl.close() except Exception as e: - logger.warning("RTL-SDR close: %s", e) + logger.warning("RTL-SDR close: {}", e) self._rtl = None diff --git a/radioshaq/radioshaq/remote_receiver/dsp/__init__.py b/radioshaq/radioshaq/remote_receiver/dsp/__init__.py new file mode 100644 
index 0000000..efd3711 --- /dev/null +++ b/radioshaq/radioshaq/remote_receiver/dsp/__init__.py @@ -0,0 +1,2 @@ +"""DSP helpers for the remote receiver (demodulators, filters).""" + diff --git a/radioshaq/radioshaq/remote_receiver/dsp/analog.py b/radioshaq/radioshaq/remote_receiver/dsp/analog.py new file mode 100644 index 0000000..83da51e --- /dev/null +++ b/radioshaq/radioshaq/remote_receiver/dsp/analog.py @@ -0,0 +1,135 @@ +"""Analog demodulators for the remote SDR receiver (AM/SSB/CW-audio). + +These are pragmatic, dependency-light implementations intended for demos and +field utility rather than lab-grade performance. +""" + +from __future__ import annotations + +from dataclasses import dataclass + +import numpy as np +try: + from scipy import signal # type: ignore +except Exception: # pragma: no cover + signal = None # type: ignore + + +def _require_scipy() -> None: + if signal is None: + raise RuntimeError("Analog demod requires SciPy. Install project deps (scipy).") + + +@dataclass +class AnalogConfig: + audio_rate_hz: int = 48_000 + audio_lpf_hz: float = 3_000.0 + audio_gain: float = 2.0 + bfo_hz: float = 1_500.0 # SSB/CW beat frequency oscillator + + +def _resample_to_audio(x: np.ndarray, rf_rate_hz: int, audio_rate_hz: int) -> np.ndarray: + _require_scipy() + if x.size == 0: + return np.zeros(0, dtype=np.float32) + return signal.resample_poly(x, up=int(audio_rate_hz), down=int(rf_rate_hz)).astype(np.float32) + + +def _lpf_audio(audio: np.ndarray, fs: int, cutoff_hz: float) -> np.ndarray: + _require_scipy() + if audio.size == 0: + return audio.astype(np.float32) + nyq = 0.5 * fs + cutoff = min(max(200.0, float(cutoff_hz)), nyq * 0.95) + b, a = signal.butter(4, cutoff / nyq, btype="low") + # For block-based processing, prefer zero-phase filtering to avoid large group delay + # that breaks simple correlation-based sanity checks. 
+ try: + if audio.size > max(len(a), len(b)) * 9: + return signal.filtfilt(b, a, audio).astype(np.float32) + except Exception: + pass + return signal.lfilter(b, a, audio).astype(np.float32) + + +class AmDemodulator: + """AM envelope demod (magnitude) with DC removal and audio LPF.""" + + def __init__(self, cfg: AnalogConfig, rf_rate_hz: int): + self.cfg = cfg + self.rf_rate_hz = int(rf_rate_hz) + self._dc: float = 0.0 + + def demod(self, iq: np.ndarray) -> np.ndarray: + x = np.asarray(iq) + if x.size == 0: + return np.zeros(0, dtype=np.float32) + if not np.iscomplexobj(x): + x = x.astype(np.complex64) + env = np.abs(x).astype(np.float32) + # For AM DSB-LC, envelope is proportional to (1 + m*x). Normalize by the carrier level + # so gain changes don't dominate and the recovered audio is centered near 0. + self._dc = float(np.mean(env)) + dc = float(self._dc) if self._dc != 0.0 else 1e-9 + baseband = (env / dc) - 1.0 + audio = _resample_to_audio(baseband.astype(np.float32), self.rf_rate_hz, self.cfg.audio_rate_hz) + audio = _lpf_audio(audio, self.cfg.audio_rate_hz, self.cfg.audio_lpf_hz) + audio *= float(self.cfg.audio_gain) + return np.clip(audio, -1.0, 1.0) + + +class SsbDemodulator: + """SSB demod (baseband) with resample + audio low-pass. + + This expects complex baseband SSB IQ (analytic), matching `radioshaq.radio.analog_mod.ssb_modulate`. + """ + + def __init__(self, cfg: AnalogConfig, rf_rate_hz: int, sideband: str = "USB"): + self.cfg = cfg + self.rf_rate_hz = int(rf_rate_hz) + self.sideband = sideband.upper() + + def demod(self, iq: np.ndarray) -> np.ndarray: + x = np.asarray(iq) + if x.size == 0: + return np.zeros(0, dtype=np.float32) + if not np.iscomplexobj(x): + x = x.astype(np.complex64) + if self.sideband == "LSB": + x = np.conj(x) + # Resample complex baseband to audio rate and take the in-phase component. 
+ _require_scipy() + x_audio = signal.resample_poly(x, up=int(self.cfg.audio_rate_hz), down=int(self.rf_rate_hz)).astype(np.complex64) + audio = np.real(x_audio).astype(np.float32) + audio = _lpf_audio(audio, self.cfg.audio_rate_hz, self.cfg.audio_lpf_hz) + audio *= float(self.cfg.audio_gain) + return np.clip(audio, -1.0, 1.0) + + +class CwAudioDemodulator: + """CW as audible tone: narrow LPF around BFO after mixing.""" + + def __init__(self, cfg: AnalogConfig, rf_rate_hz: int): + self.cfg = cfg + self.rf_rate_hz = int(rf_rate_hz) + self._phase: float = 0.0 + + def demod(self, iq: np.ndarray) -> np.ndarray: + x = np.asarray(iq) + if x.size == 0: + return np.zeros(0, dtype=np.float32) + if not np.iscomplexobj(x): + x = x.astype(np.complex64) + n = np.arange(x.size, dtype=np.float64) + w = 2.0 * np.pi * float(self.cfg.bfo_hz) / float(self.rf_rate_hz) + phase = self._phase + w * n + osc = np.exp(1j * phase).astype(np.complex64) + self._phase = float((self._phase + w * x.size) % (2.0 * np.pi)) + mixed = x * osc + tone = np.real(mixed).astype(np.float32) + audio = _resample_to_audio(tone, self.rf_rate_hz, self.cfg.audio_rate_hz) + # CW: narrower filter than voice + audio = _lpf_audio(audio, self.cfg.audio_rate_hz, min(800.0, self.cfg.audio_lpf_hz)) + audio *= float(self.cfg.audio_gain) + return np.clip(audio, -1.0, 1.0) + diff --git a/radioshaq/radioshaq/remote_receiver/dsp/nfm.py b/radioshaq/radioshaq/remote_receiver/dsp/nfm.py new file mode 100644 index 0000000..664f018 --- /dev/null +++ b/radioshaq/radioshaq/remote_receiver/dsp/nfm.py @@ -0,0 +1,103 @@ +"""Narrowband FM (NFM) demodulator utilities for SDR receiver backends. + +This is intentionally lightweight: numpy/scipy only, no GNU Radio dependency. +It is good enough for typical ham 2m/70cm analog FM voice channels. 
+""" + +from __future__ import annotations + +from dataclasses import dataclass + +import numpy as np +try: + from scipy import signal # type: ignore +except Exception: # pragma: no cover + signal = None # type: ignore + + +def _require_scipy() -> None: + if signal is None: + raise RuntimeError("NFM demod requires SciPy. Install project deps (scipy).") + + +@dataclass +class NfmConfig: + """NFM demod configuration.""" + + audio_rate_hz: int = 48_000 + # De-emphasis time constant (US): 75 for US, 50 for many EU regions. + deemphasis_us: float = 75.0 + # Audio low-pass cutoff (voice): ~3 kHz is typical. + audio_lpf_hz: float = 3_000.0 + # Additional post-demod gain. Keep conservative to avoid clipping. + audio_gain: float = 2.0 + + +class NfmDemodulator: + """Stateful NFM demodulator (keeps discriminator history + deemphasis filter state).""" + + def __init__(self, cfg: NfmConfig, rf_rate_hz: int): + self.cfg = cfg + self.rf_rate_hz = int(rf_rate_hz) + self._prev: complex | None = None + self._de_z: np.ndarray | None = None + + def demod(self, iq: np.ndarray) -> np.ndarray: + """Demod a chunk of complex IQ into float32 audio in [-1, 1] (approx).""" + _require_scipy() + x = np.asarray(iq) + if x.size < 2: + return np.zeros(0, dtype=np.float32) + if not np.iscomplexobj(x): + x = x.astype(np.complex64) + + # Quadrature discriminator: angle of conjugate product. + if self._prev is None: + prev = x[0] + else: + prev = self._prev + y = np.empty(x.size, dtype=np.complex64) + y[0] = prev + y[1:] = x[:-1] + self._prev = x[-1] + discr = np.angle(x * np.conj(y)).astype(np.float32) + + # Resample discriminator output down to audio rate. + # Use rational polyphase resampling; works for arbitrary rates. + audio = signal.resample_poly(discr, up=self.cfg.audio_rate_hz, down=self.rf_rate_hz).astype( + np.float32 + ) + + # Audio low-pass and deemphasis. 
+        if audio.size == 0:
+            return audio
+        nyq = 0.5 * self.cfg.audio_rate_hz
+        cutoff = min(max(300.0, float(self.cfg.audio_lpf_hz)), nyq * 0.95)
+        b, a = signal.butter(4, cutoff / nyq, btype="low")
+        audio = signal.lfilter(b, a, audio).astype(np.float32)
+
+        # De-emphasis: simple 1-pole IIR matching RC low-pass with time constant tau.
+        tau = float(self.cfg.deemphasis_us) * 1e-6
+        if tau > 0:
+            # H(z) ~ (1 - alpha) / (1 - alpha z^-1), alpha = exp(-1/(fs*tau))
+            alpha = float(np.exp(-1.0 / (self.cfg.audio_rate_hz * tau)))
+            b2 = np.array([1.0 - alpha], dtype=np.float32)
+            a2 = np.array([1.0, -alpha], dtype=np.float32)
+            if self._de_z is None:
+                self._de_z = signal.lfilter_zi(b2, a2).astype(np.float32) * 0.0
+            audio, self._de_z = signal.lfilter(b2, a2, audio, zi=self._de_z)
+            audio = audio.astype(np.float32)
+
+        audio *= float(self.cfg.audio_gain)
+        audio = np.clip(audio, -1.0, 1.0)
+        return audio
+
+
+def float_to_pcm16(audio: np.ndarray) -> bytes:
+    """Convert float audio [-1,1] to little-endian signed 16-bit PCM bytes."""
+    a = np.asarray(audio, dtype=np.float32)
+    if a.size == 0:
+        return b""
+    # Clip defensively so slightly out-of-range floats cannot wrap around int16.
+    pcm = (np.clip(a, -1.0, 1.0) * 32767.0).astype(np.int16)
+    return pcm.tobytes()
+
diff --git a/radioshaq/radioshaq/remote_receiver/hq_client.py b/radioshaq/radioshaq/remote_receiver/hq_client.py
index cace6ab..609baa5 100644
--- a/radioshaq/radioshaq/remote_receiver/hq_client.py
+++ b/radioshaq/radioshaq/remote_receiver/hq_client.py
@@ -10,7 +10,7 @@
 class HQClient:
-    """Upload receiver data to SHAKODS HQ with JWT."""
+    """Upload receiver data to RadioShaq HQ with JWT."""
 
     def __init__(
         self,
@@ -34,7 +34,7 @@ async def connect(self) -> None:
             if r.status_code == 200:
                 logger.info("HQ connection OK")
         except Exception as e:
-            logger.warning("HQ connect check failed: %s", e)
+            logger.warning("HQ connect check failed: {}", e)
 
     async def upload(self, packet: dict[str, Any]) -> bool:
         """Upload a single packet to HQ."""
@@ -48,5 +48,5 @@
             )
             return r.status_code in (200, 201)
         except Exception as e:
-            logger.warning("HQ upload failed: %s", e)
+            logger.warning("HQ upload failed: {}", e)
             return False
diff --git a/radioshaq/radioshaq/remote_receiver/radio_interface.py b/radioshaq/radioshaq/remote_receiver/radio_interface.py
index 46010dc..8a0116b 100644
--- a/radioshaq/radioshaq/remote_receiver/radio_interface.py
+++ b/radioshaq/radioshaq/remote_receiver/radio_interface.py
@@ -6,7 +6,7 @@
 import os
 from dataclasses import dataclass
 from datetime import datetime, timezone
-from typing import TYPE_CHECKING, AsyncIterator
+from typing import TYPE_CHECKING, Any, AsyncIterator
 
 from loguru import logger
 
@@ -28,7 +28,11 @@ def is_interesting(self) -> bool:
         return self.strength_db >= -90.0
 
 
-def create_sdr_from_env() -> SDRInterface:
+def create_sdr_from_env(
+    *,
+    device_manager: Any | None = None,
+    broker: Any | None = None,
+) -> "SDRInterface":
     """
     Build SDR from environment. SDR_TYPE=rtlsdr (default) or hackrf.
     RTL-SDR: RTLSDR_INDEX (default 0).
@@ -45,6 +49,8 @@ def create_sdr_from_env() -> SDRInterface: device_index=index, serial_number=serial, sample_rate=sample_rate, + device_manager=device_manager, + broker=broker, ) else: from radioshaq.remote_receiver.backends.rtlsdr_backend import RtlSdrBackend @@ -82,6 +88,17 @@ async def set_frequency(self, frequency_hz: float) -> None: """Tune to frequency in Hz.""" await self._backend.set_frequency(frequency_hz) + async def configure( + self, + *, + mode: str | None = None, + audio_rate_hz: int | None = None, + bfo_hz: float | None = None, + ) -> None: + """Configure backend demod settings (if supported).""" + if hasattr(self._backend, "configure"): + await self._backend.configure(mode=mode, audio_rate_hz=audio_rate_hz, bfo_hz=bfo_hz) + async def receive(self, duration_seconds: float) -> AsyncIterator[SignalSample]: """Stream signal samples for duration.""" async for sample in self._backend.receive(duration_seconds): diff --git a/radioshaq/radioshaq/remote_receiver/server.py b/radioshaq/radioshaq/remote_receiver/server.py index f3889d6..e3bd823 100644 --- a/radioshaq/radioshaq/remote_receiver/server.py +++ b/radioshaq/radioshaq/remote_receiver/server.py @@ -3,13 +3,23 @@ from __future__ import annotations import asyncio +import base64 import os from contextlib import asynccontextmanager -from typing import Any +from typing import Any, Awaitable, Callable -from fastapi import FastAPI, WebSocket, WebSocketDisconnect +import numpy as np +from fastapi import FastAPI, HTTPException, Request, WebSocket, WebSocketDisconnect from loguru import logger +from radioshaq.radio.compliance import ( + is_restricted, + is_tx_allowed, + is_tx_spectrum_allowed, + log_tx, +) +from radioshaq.radio.bands import BAND_PLANS +from radioshaq.radio.hackrf_tx_compat import stream_hackrf_iq_bytes from radioshaq.remote_receiver.auth import JWTReceiverAuth from radioshaq.remote_receiver.hq_client import HQClient from radioshaq.remote_receiver.radio_interface import ( @@ -18,6 +28,15 @@ 
 create_sdr_from_env,
 )
 
+# Env var for TX audit log: prefer pydantic config convention so YAML/config-driven deployments work.
+TX_AUDIT_LOG_PATH_ENV = "RADIOSHAQ_RADIO__TX_AUDIT_LOG_PATH"
+TX_AUDIT_LOG_PATH_ENV_LEGACY = "TX_AUDIT_LOG_PATH"
+
+
+def _broker_tx_audit_log_path() -> str | None:
+    """Path for broker TX audit log; matches config.radio.tx_audit_log_path env override."""
+    return os.environ.get(TX_AUDIT_LOG_PATH_ENV) or os.environ.get(TX_AUDIT_LOG_PATH_ENV_LEGACY) or None
+
 
 class ReceiverService:
     """Remote receiver: SDR, JWT auth, HQ upload."""
@@ -38,12 +57,12 @@ def __init__(
         )
 
     @classmethod
-    def from_env(cls) -> ReceiverService:
+    def from_env(cls, *, broker: Any | None = None, device_manager: Any | None = None) -> "ReceiverService":
         """Build from environment."""
         secret = os.environ.get("JWT_SECRET", "")
         station_id = os.environ.get("STATION_ID", "RECEIVER")
         jwt_auth = JWTReceiverAuth(secret=secret)
-        radio = create_sdr_from_env()
+        radio = create_sdr_from_env(device_manager=device_manager, broker=broker)
         hq_url = os.environ.get("HQ_URL")
         hq_token = os.environ.get("HQ_TOKEN", "")
         hq_client = HQClient(hq_url or "http://localhost:8000", hq_token, station_id) if hq_url else None
@@ -67,7 +86,7 @@ async def _upload_loop(self) -> None:
             except asyncio.CancelledError:
                 break
             except Exception as e:
-                logger.warning("Upload loop error: %s", e)
+                logger.warning("Upload loop error: {}", e)
                 await asyncio.sleep(0.1)
 
     async def _queue_for_hq(self, signal: SignalSample, operator_id: str) -> None:
@@ -85,46 +104,426 @@
     async def stream_frequency(
         self,
+        *,
         frequency_hz: float,
         duration_seconds: int,
         websocket: WebSocket,
         token: str,
+        mode: str | None,
+        audio_rate_hz: int | None,
+        bfo_hz: float | None,
     ) -> None:
-        """WebSocket: verify JWT, tune, stream samples."""
-        payload = await self.jwt_auth.verify_token(token)
+        """
+        Stream SDR samples to a WebSocket client with optional demodulated audio.
+
+        Messages follow a simple JSON contract:
+        - Signal frames: {"type": "signal", "timestamp": ..., "frequency_hz": ..., "signal_strength_db": ..., "decoded_text": ..., "mode": ...}
+        - Audio frames: {"type": "audio", "sample_rate_hz": ..., "audio_b64": ...}
+        - Error frames: {"type": "error", "message": "..."}
+        """
+        # Authenticate the provided token and extract operator identity.
+        try:
+            claims = await self.jwt_auth.verify_token(token)
+        except Exception as e:
+            # On auth failure, accept the socket (if not already accepted), send an error frame, and close.
+            try:
+                await websocket.accept()
+            except Exception:
+                # If accept fails, just give up on this connection.
+                return
+            await websocket.send_json(
+                {
+                    "type": "error",
+                    "message": f"Unauthorized: {e}",
+                }
+            )
+            await websocket.close(code=4401)
+            return
+        await websocket.accept()
+
+        # Configure SDR for the requested frequency and demodulation settings.
         try:
             await self.radio.set_frequency(frequency_hz)
-            async for signal in self.radio.receive(float(duration_seconds)):
-                await websocket.send_json({
+            await self.radio.configure(mode=mode, audio_rate_hz=audio_rate_hz, bfo_hz=bfo_hz)
+        except Exception as e:
+            logger.warning("Receiver configuration failed: {}", e)
+            await websocket.send_json(
+                {
+                    "type": "error",
+                    "message": "Receiver configuration failed",
+                }
+            )
+            await websocket.close(code=1011)
+            return
+
+        loop = asyncio.get_running_loop()
+        deadline = loop.time() + max(float(duration_seconds), 0.0)
+        operator_id = getattr(claims, "sub", "UNKNOWN") or "UNKNOWN"
+        default_audio_rate = audio_rate_hz or 48_000
+
+        try:
+            async for sample in self.radio.receive(float(duration_seconds)):
+                now = loop.time()
+                if now >= deadline:
+                    break
+
+                # Signal message
+                signal_payload: dict[str, Any] = {
                     "type": "signal",
-                    "timestamp": signal.timestamp.isoformat(),
-                    "signal_strength": signal.strength_db,
-                    "decoded": signal.decoded_data,
-                })
-                await self._queue_for_hq(signal, payload.sub)
+                    "timestamp": sample.timestamp.isoformat(),
+                    "frequency_hz": sample.frequency_hz,
+                    "signal_strength_db": sample.strength_db,
+                    "decoded_text": sample.decoded_data,
+                    "mode": sample.mode,
+                }
+                await websocket.send_json(signal_payload)
+
+                # Optional audio message when raw demodulated audio is present.
+                if sample.raw_data:
+                    try:
+                        audio_b64 = base64.b64encode(sample.raw_data).decode("ascii")
+                    except Exception as e:
+                        logger.warning("Failed to encode audio frame: {}", e)
+                    else:
+                        await websocket.send_json(
+                            {
+                                "type": "audio",
+                                "sample_rate_hz": default_audio_rate,
+                                "audio_b64": audio_b64,
+                            }
+                        )
+
+                # Queue interesting signals for HQ upload when configured.
+                if sample.is_interesting:
+                    try:
+                        await self._queue_for_hq(sample, operator_id=operator_id)
+                    except Exception as e:
+                        logger.warning("Failed to queue signal for HQ upload: {}", e)
+
+                # Re-check deadline after work in the loop body.
+                if loop.time() >= deadline:
+                    break
+            # After streaming samples, send a terminal frame and close so clients
+            # (including tests) do not hang waiting for additional messages.
+            try:
+                await websocket.send_json({"type": "done"})
+            except Exception:
+                pass
+            try:
+                await websocket.close(code=1000)
+            except Exception:
+                pass
         except WebSocketDisconnect:
-            pass
+            # Client disconnected; nothing more to do.
+            return
         except Exception as e:
-            await websocket.send_json({"type": "error", "message": str(e)})
-        finally:
+            logger.warning("WebSocket stream error: {}", e)
+            try:
+                await websocket.send_json(
+                    {
+                        "type": "error",
+                        "message": "Internal receiver error during stream",
+                    }
+                )
+            except Exception:
+                pass
             try:
-                await websocket.close()
+                await websocket.close(code=1011)
             except Exception:
                 pass
 
 
+class HackRFDeviceManager:
+    """
+    Own a single pyhackrf2.HackRF instance within the receiver process.
+
+    All direct calls into libhackrf should happen underneath this manager,
+    coordinated via an asyncio.Lock to ensure single-owner semantics.
+ """ + + def __init__( + self, + device_index: int, + serial_number: str | None, + max_gain: int, + restricted_region: str, + ): + self._device_index = device_index + self._serial_number = serial_number + self._max_gain = max(0, min(47, max_gain)) + self._restricted_region = restricted_region + self._device: Any | None = None + self._lock = asyncio.Lock() + + @property + def max_gain(self) -> int: + return self._max_gain + + @property + def restricted_region(self) -> str: + return self._restricted_region + + def _open_device(self) -> Any: + """Lazily open the underlying HackRF device.""" + if self._device is not None: + return self._device + try: + from pyhackrf2 import HackRF + + if self._serial_number: + self._device = HackRF(serial_number=self._serial_number) + else: + self._device = HackRF(device_index=self._device_index) + return self._device + except ImportError as e: + raise RuntimeError( + "HackRF device manager requires pyhackrf2. " + "Install with: uv sync --extra hackrf (or pip install pyhackrf2)" + ) from e + except Exception as e: # pragma: no cover - hardware/driver dependent + raise RuntimeError(f"HackRF open failed: {e!r}") from e + + async def with_device(self, fn: Callable[[Any], Awaitable[Any]]) -> Any: + """ + Run an async function with exclusive access to the HackRF device. + + The callback receives the underlying pyhackrf2.HackRF instance. 
+ """ + async with self._lock: + dev = self._open_device() + return await fn(dev) + + async def close(self) -> None: + """Release the underlying HackRF device handle.""" + async with self._lock: + if self._device is not None: + try: + self._device.close() + except Exception as e: + logger.warning("HackRF device manager close failed: {}", repr(e)) + finally: + self._device = None + + +class HackRFBroker: + """Coordinate HackRF TX (and later RX) behind a single async interface.""" + + def __init__(self, device_manager: HackRFDeviceManager | None) -> None: + self._dm = device_manager + self._lock = asyncio.Lock() + self._stop_rx = asyncio.Event() + self._rx_active = asyncio.Event() + + @property + def lock(self) -> asyncio.Lock: + """Lock used to serialize TX operations. RX uses device_manager._lock when accessing the device.""" + return self._lock + + @property + def available(self) -> bool: + return self._dm is not None + + def request_tx(self) -> None: + """Signal that TX work is about to start; RX loops should wind down.""" + self._stop_rx.set() + + def clear_tx(self) -> None: + """Clear TX request flag after TX has completed.""" + self._stop_rx.clear() + + @property + def should_stop_rx(self) -> bool: + """True when RX loops should stop promptly to yield to TX.""" + return self._stop_rx.is_set() + + @property + def rx_active(self) -> asyncio.Event: + """Event that is set while an RX loop is active (backend-controlled).""" + return self._rx_active + + async def tx_tone( + self, + *, + frequency_hz: float, + duration_sec: float, + sample_rate: int = 2_000_000, + ) -> None: + if self._dm is None: + raise RuntimeError("HackRF TX is not configured on this receiver") + loop = asyncio.get_running_loop() + + async def _fn(dev: Any) -> None: + def _blocking_tx() -> None: + # Generate tone and run TX in thread pool to avoid blocking the event loop + # (NumPy ops for up to ~60M samples can take hundreds of ms). 
+ tone_hz = 1000.0 + num_samples = int(duration_sec * sample_rate) + t = np.arange(num_samples, dtype=np.float64) / sample_rate + i = (127 * 0.3 * np.cos(2 * np.pi * tone_hz * t)).astype(np.int8) + q = (127 * 0.3 * np.sin(2 * np.pi * tone_hz * t)).astype(np.int8) + iq = np.empty(2 * num_samples, dtype=np.int8) + iq[0::2] = i + iq[1::2] = q + buf = iq.tobytes() + try: + dev.center_freq = int(frequency_hz) + dev.sample_rate = sample_rate + try: + dev.txvga_gain = self._dm.max_gain + except AttributeError: + pass + except AttributeError: + # Older or stub objects may not expose these attributes. + pass + try: + stream_hackrf_iq_bytes(dev, buf, duration_sec) + except (AttributeError, TypeError) as e: + # Stub/test device or API mismatch — warn and skip (non-fatal for audit). + logger.warning("HackRF TX not available ({}); tone TX skipped", repr(e)) + except RuntimeError: + # Hardware error — propagate so the broker endpoint can return 503. + raise + + await loop.run_in_executor(None, _blocking_tx) + + async with self._lock: + await self._dm.with_device(_fn) + + async def tx_iq( + self, + *, + frequency_hz: float, + sample_rate: int, + iq_bytes: bytes, + occupied_bandwidth_hz: float | None = None, + ) -> None: + if self._dm is None: + raise RuntimeError("HackRF TX is not configured on this receiver") + loop = asyncio.get_running_loop() + iq = np.frombuffer(iq_bytes, dtype=np.int8) + + async def _fn(dev: Any) -> None: + def _blocking_tx() -> None: + try: + dev.center_freq = int(frequency_hz) + dev.sample_rate = sample_rate + try: + dev.txvga_gain = self._dm.max_gain + except AttributeError: + pass + except AttributeError: + # Older or stub objects may not expose these attributes. + pass + try: + buf = iq.tobytes() + duration_sec = len(buf) / (2.0 * sample_rate) + stream_hackrf_iq_bytes(dev, buf, duration_sec) + except (AttributeError, TypeError) as e: + # Stub/test device or API mismatch — warn and skip (non-fatal for audit). 
+ logger.warning("HackRF TX not available ({}); IQ TX skipped", repr(e)) + except RuntimeError: + # Hardware error — propagate so the broker endpoint can return 503. + raise + + await loop.run_in_executor(None, _blocking_tx) + + async with self._lock: + await self._dm.with_device(_fn) + + @asynccontextmanager async def lifespan(app: FastAPI): """Start receiver service.""" - service = ReceiverService.from_env() + # Optional HackRF TX broker and device manager: only when SDR_TYPE=hackrf. + broker: HackRFBroker | None + device_manager: HackRFDeviceManager | None = None + sdr_type = os.environ.get("SDR_TYPE", "rtlsdr").strip().lower() + if sdr_type != "hackrf": + broker = None + else: + try: + index = int(os.environ.get("HACKRF_INDEX", "0")) + serial = os.environ.get("HACKRF_SERIAL") or None + max_gain = int(os.environ.get("HACKRF_MAX_GAIN", "47")) + restricted_region = os.environ.get("RESTRICTED_BANDS_REGION", "FCC") + device_manager = HackRFDeviceManager( + device_index=index, + serial_number=serial, + max_gain=max_gain, + restricted_region=restricted_region, + ) + broker = HackRFBroker(device_manager=device_manager) + except Exception as e: + logger.warning("HackRF broker TX not available: {}", repr(e)) + broker = HackRFBroker(device_manager=None) + device_manager = None + service = ReceiverService.from_env(broker=broker, device_manager=device_manager) app.state.receiver = service + app.state.hackrf_broker = broker + # Expose device manager for compliance region lookup in TX endpoints. + app.state.hackrf_broker_device_manager = device_manager await service.start() yield - # Shutdown + # Shutdown: close radio backend and release HackRF USB handle so hot-restart can + # re-open the device without the process needing to fully exit. 
+    await service.radio.close()
+    if device_manager is not None:
+        await device_manager.close()
+
+
+app = FastAPI(title="RadioShaq Remote Receiver", lifespan=lifespan)
+
+
+def ensure_test_state(app: FastAPI) -> None:
+    """
+    Ensure app.state has receiver / hackrf_broker attributes for tests that
+    import the module-level `app` without running the lifespan context.
+
+    Must be called explicitly from test fixtures (e.g. remote_receiver conftest).
+    Production uses the lifespan context only; this is opt-in for tests.
+    """
+
+    if not hasattr(app.state, "receiver"):
+        class _DummyJWTAuth:
+            async def verify_token(self, token: str) -> Any:  # pragma: no cover - test shim
+                return object()
+
+        class _DummyRadio(SDRInterface):  # pragma: no cover - test shim
+            async def initialize(self) -> None:
+                return None
+            async def set_frequency(self, frequency_hz: float) -> None:
+                self._frequency = frequency_hz
-app = FastAPI(title="SHAKODS Remote Receiver", lifespan=lifespan)
+            async def configure(
+                self,
+                *,
+                mode: str | None = None,
+                audio_rate_hz: int | None = None,
+                bfo_hz: float | None = None,
+            ) -> None:
+                self._mode = mode
+                self._audio_rate_hz = audio_rate_hz
+                self._bfo_hz = bfo_hz
+
+            async def receive(self, duration_seconds: float):
+                if False:
+                    yield  # pragma: no cover - empty async iterator
+                return
+
+        app.state.receiver = ReceiverService(
+            station_id="TEST-RECEIVER",
+            jwt_auth=_DummyJWTAuth(),
+            radio=_DummyRadio(),
+            hq_client=None,
+        )
+
+    if not hasattr(app.state, "hackrf_broker"):
+        app.state.hackrf_broker = HackRFBroker(device_manager=None)
+
+    if not hasattr(app.state, "hackrf_broker_device_manager"):
+        app.state.hackrf_broker_device_manager = None
 
 
 @app.websocket("/ws/stream")
@@ -133,20 +532,301 @@ async def websocket_stream(websocket: WebSocket):
     token = websocket.query_params.get("token", "")
     frequency_hz = float(websocket.query_params.get("frequency_hz", "145000000"))
     duration_seconds = int(websocket.query_params.get("duration_seconds", "60"))
-    receiver: ReceiverService = websocket.app.state.receiver
+    mode = websocket.query_params.get("mode")
+    audio_rate_hz = websocket.query_params.get("audio_rate_hz")
+    bfo_hz = websocket.query_params.get("bfo_hz")
+    audio_rate = int(audio_rate_hz) if audio_rate_hz else None
+    bfo = float(bfo_hz) if bfo_hz else None
+    receiver: ReceiverService | None = getattr(websocket.app.state, "receiver", None)
+    if receiver is None:
+        await websocket.accept()
+        await websocket.send_json(
+            {"type": "error", "message": "Receiver not initialized"},
+        )
+        await websocket.close(code=1011)
+        return
+    # HackRF RX/TX scheduling is now managed inside the backend via the shared
+    # HackRFDeviceManager and HackRFBroker flags; no broad websocket lock.
     await receiver.stream_frequency(
         frequency_hz=frequency_hz,
         duration_seconds=duration_seconds,
         websocket=websocket,
         token=token,
+        mode=mode,
+        audio_rate_hz=audio_rate,
+        bfo_hz=bfo,
     )
 
 
+async def _require_broker_auth(request: Request) -> None:
+    """Simple auth for TX endpoints: reuse JWTReceiverAuth with bearer or query token."""
+    token = ""
+    auth_header = request.headers.get("Authorization")
+    if auth_header and auth_header.lower().startswith("bearer "):
+        token = auth_header.split(" ", 1)[1].strip()
+    if not token:
+        token = request.query_params.get("token", "")
+    receiver = getattr(request.app.state, "receiver", None)
+    if not token:
+        raise HTTPException(status_code=401, detail="Missing authorization token for HackRF TX")
+    if receiver is None:
+        raise HTTPException(status_code=503, detail="Receiver not initialized")
+    try:
+        await receiver.jwt_auth.verify_token(token)
+    except Exception:
+        raise HTTPException(status_code=401, detail="Invalid authorization token for HackRF TX")
+
+
+# Upper bound on tone duration to avoid OOM / event-loop blockage (e.g. 3600 s @ 2 MHz → ~57 GB).
+MAX_TONE_SEC = 30.0
+
+
+@app.post("/tx/tone")
+async def tx_tone(request: Request) -> dict[str, Any]:
+    """Transmit a short tone via the HackRF broker.
+ + Body: {\"frequency_hz\": float, \"duration_sec\": float, \"sample_rate\": int } + """ + await _require_broker_auth(request) + body = await request.json() + try: + frequency_hz = float(body.get("frequency_hz")) + duration_sec = float(body.get("duration_sec", 1.0)) + sample_rate = int(body.get("sample_rate", 2_000_000)) + except Exception as e: + raise HTTPException(status_code=400, detail=f"Invalid TX tone payload: {e}") + if sample_rate <= 0: + raise HTTPException( + status_code=400, + detail=f"sample_rate must be a positive integer, got {sample_rate}", + ) + if duration_sec <= 0 or duration_sec > MAX_TONE_SEC: + raise HTTPException( + status_code=400, + detail=f"duration_sec must be in (0, {MAX_TONE_SEC}], got {duration_sec}", + ) + broker: HackRFBroker | None = getattr(request.app.state, "hackrf_broker", None) + + # Determine restricted-region key from device manager when available; fall back to env. + restricted_region = os.environ.get("RESTRICTED_BANDS_REGION", "FCC") + try: + device_manager = getattr(request.app.state, "hackrf_broker_device_manager", None) + if device_manager is not None and getattr(device_manager, "restricted_region", None): + restricted_region = device_manager.restricted_region # type: ignore[assignment] + except Exception: + pass + + # Resolve band plans from compliance backend; fall back to built-in BAND_PLANS. 
+ try: + from radioshaq.compliance_plugin import get_backend + + backend = get_backend(restricted_region) + plans = backend.get_band_plans() if backend is not None else None + except Exception: + plans = None + band_plans = plans if plans else BAND_PLANS + + occupied_bw = 25_000.0 + if is_restricted(frequency_hz, region=restricted_region): + raise HTTPException( + status_code=403, + detail=f"Frequency {frequency_hz} Hz is in a restricted band", + ) + if not is_tx_spectrum_allowed( + center_hz=frequency_hz, + occupied_bandwidth_hz=occupied_bw, + restricted_region=restricted_region, + band_plan_source=band_plans, + allow_tx_only_amateur_bands=True, + ): + raise HTTPException( + status_code=403, + detail="TX not allowed on this frequency (band plan)", + ) + + # Only after compliance checks do we require a broker; this allows tests and + # deployments without HackRF hardware to still exercise the 403 paths. + # Reject when broker is absent or has no hardware so we don't call request_tx() + # and interrupt active RX for a TX that would fail anyway. 
+ if broker is None or not broker.available: + raise HTTPException(status_code=503, detail="HackRF TX broker not available") + + tx_succeeded = False + try: + broker.request_tx() + await broker.tx_tone( + frequency_hz=frequency_hz, + duration_sec=duration_sec, + sample_rate=sample_rate, + ) + tx_succeeded = True + except RuntimeError as e: + raise HTTPException(status_code=503, detail=str(e)) + except Exception as e: + logger.warning("HackRF TX tone error: {}", repr(e)) + raise HTTPException(status_code=500, detail="HackRF TX tone failed") + finally: + broker.clear_tx() + operator_id: str | None = None + receiver_for_audit = getattr(request.app.state, "receiver", None) + if receiver_for_audit is not None: + operator_id = receiver_for_audit.station_id + audit_log_path = _broker_tx_audit_log_path() + log_tx( + frequency_hz=frequency_hz, + duration_sec=duration_sec, + mode="tone", + rig_or_sdr="hackrf_broker", + operator_id=operator_id, + success=tx_succeeded, + audit_log_path=audit_log_path, + ) + return {"success": True, "notes": "HackRF tone transmitted via remote receiver"} + + +# Max decoded IQ size: int8 interleaved = 2 bytes/sample → duration_sec = bytes / (sample_rate * 2). +# At 2 MHz: 4 MB/s. Capped at ~16 s of IQ to limit peak memory (see below). +# RAM: full request body (base64 ~4/3 × this), decoded iq_bytes, and a copy during TX are all in memory; +# disk is not used. Peak per request ≈ 2.33× decoded size (raw JSON + decode + TX copy). +# content-length check is hint-only (spoofable); consider LimitUploadSize-style middleware for hard cap. +MAX_IQ_BYTES = 64 * 1024 * 1024 # 64 MB (~16 s @ 2 MHz int8 interleaved; ~150 MB peak per request) +# Approximate max JSON body size for that IQ (base64 ~4/3 × decoded + JSON overhead). +MAX_IQ_BODY_BYTES = int(MAX_IQ_BYTES * 4 / 3) + 1024 + + +@app.post("/tx/iq") +async def tx_iq(request: Request) -> dict[str, Any]: + """Transmit I/Q samples via the HackRF broker. 
+ + Body: {\"frequency_hz\": float, \"sample_rate\": int, \"iq_b64\": str, \"occupied_bandwidth_hz\": float | null } + """ + await _require_broker_auth(request) + content_length = request.headers.get("content-length") + if content_length is not None: + try: + if int(content_length) > MAX_IQ_BODY_BYTES: + raise HTTPException( + status_code=413, + detail="IQ payload too large", + ) + except ValueError: + pass # Invalid content-length; let body parse handle it + body = await request.json() + try: + frequency_hz = float(body.get("frequency_hz")) + sample_rate = int(body.get("sample_rate")) + if sample_rate <= 0: + raise ValueError(f"sample_rate must be a positive integer, got {sample_rate}") + iq_b64 = body.get("iq_b64") + if not isinstance(iq_b64, str): + raise ValueError("iq_b64 must be a base64-encoded string") + occupied_bandwidth_hz_raw = body.get("occupied_bandwidth_hz") + occupied_bandwidth_hz = ( + float(occupied_bandwidth_hz_raw) if occupied_bandwidth_hz_raw is not None else None + ) + iq_bytes = base64.b64decode(iq_b64.encode("ascii")) + if len(iq_bytes) > MAX_IQ_BYTES: + raise HTTPException(status_code=413, detail="IQ payload too large") + except HTTPException: + raise + except Exception as e: + raise HTTPException(status_code=400, detail=f"Invalid TX IQ payload: {e}") + broker: HackRFBroker | None = getattr(request.app.state, "hackrf_broker", None) + + # Resolve restricted-region key and band plans for compliance decisions. 
+ restricted_region = os.environ.get("RESTRICTED_BANDS_REGION", "FCC") + try: + device_manager = getattr(request.app.state, "hackrf_broker_device_manager", None) + if device_manager is not None and getattr(device_manager, "restricted_region", None): + restricted_region = device_manager.restricted_region # type: ignore[assignment] + except Exception: + pass + try: + from radioshaq.compliance_plugin import get_backend + + backend = get_backend(restricted_region) + plans = backend.get_band_plans() if backend is not None else None + except Exception: + plans = None + band_plans = plans if plans else BAND_PLANS + + # Compliance: either spectrum-based check when bandwidth supplied, or center-frequency check. + if occupied_bandwidth_hz is not None and occupied_bandwidth_hz > 0: + allowed = is_tx_spectrum_allowed( + center_hz=frequency_hz, + occupied_bandwidth_hz=occupied_bandwidth_hz, + restricted_region=restricted_region, + band_plan_source=band_plans, + allow_tx_only_amateur_bands=True, + ) + else: + allowed = is_tx_allowed( + freq_hz=frequency_hz, + restricted_region=restricted_region, + band_plan_source=band_plans, + allow_tx_only_amateur_bands=True, + ) + if not allowed: + raise HTTPException( + status_code=403, + detail="TX not allowed on this frequency/spectrum", + ) + + # Reject when broker is absent or has no hardware so we don't call request_tx() + # and interrupt active RX for a TX that would fail anyway. 
+ if broker is None or not broker.available: + raise HTTPException(status_code=503, detail="HackRF TX broker not available") + + duration_sec = len(iq_bytes) / (2.0 * sample_rate) if sample_rate > 0 else 0.0 + tx_succeeded = False + try: + broker.request_tx() + await broker.tx_iq( + frequency_hz=frequency_hz, + sample_rate=sample_rate, + iq_bytes=iq_bytes, + occupied_bandwidth_hz=occupied_bandwidth_hz, + ) + tx_succeeded = True + except RuntimeError as e: + raise HTTPException(status_code=503, detail=str(e)) + except Exception as e: + logger.warning("HackRF TX IQ error: {}", repr(e)) + raise HTTPException(status_code=500, detail="HackRF TX IQ failed") + finally: + broker.clear_tx() + operator_id: str | None = None + receiver_for_audit = getattr(request.app.state, "receiver", None) + if receiver_for_audit is not None: + operator_id = receiver_for_audit.station_id + audit_log_path = _broker_tx_audit_log_path() + log_tx( + frequency_hz=frequency_hz, + duration_sec=duration_sec, + mode="iq", + rig_or_sdr="hackrf_broker", + operator_id=operator_id, + occupied_bandwidth_hz=occupied_bandwidth_hz, + success=tx_succeeded, + audit_log_path=audit_log_path, + ) + return {"success": True, "notes": "HackRF IQ transmitted via remote receiver"} + + @app.get("/health") async def health() -> dict[str, str]: return {"status": "ok"} +@app.get("/tx/status") +async def tx_status(request: Request) -> dict[str, Any]: + """Report HackRF TX broker availability for this receiver process.""" + broker: HackRFBroker | None = getattr(request.app.state, "hackrf_broker", None) + if broker is None or not broker.available: + return {"available": False, "reason": "hackrf_tx_not_configured"} + return {"available": True, "reason": "ok"} + + def main() -> None: """Entry point for `radioshaq run-receiver` (uvicorn).""" import uvicorn diff --git a/radioshaq/radioshaq/scripts/__init__.py b/radioshaq/radioshaq/scripts/__init__.py index b8edc30..f83e641 100644 --- a/radioshaq/radioshaq/scripts/__init__.py 
+++ b/radioshaq/radioshaq/scripts/__init__.py
@@ -1 +1 @@
-"""SHAKODS utility scripts."""
+"""RadioShaq utility scripts."""
diff --git a/radioshaq/radioshaq/scripts/alembic_runner.py b/radioshaq/radioshaq/scripts/alembic_runner.py
index 1b9ab9c..a292234 100644
--- a/radioshaq/radioshaq/scripts/alembic_runner.py
+++ b/radioshaq/radioshaq/scripts/alembic_runner.py
@@ -9,7 +9,8 @@
 # Project root (radioshaq/)
 ROOT = Path(__file__).resolve().parent.parent.parent
-ALEMBIC_INI = ROOT / "infrastructure" / "local" / "alembic.ini"
+# Use root Alembic config by default
+ALEMBIC_INI = ROOT / "alembic.ini"
 
 
 def _run(args: list[str]) -> int:
@@ -19,17 +20,17 @@ def _run(args: list[str]) -> int:
 
 
 def upgrade() -> int:
-    """Run: alembic -c infrastructure/local/alembic.ini upgrade head."""
+    """Run: alembic -c alembic.ini upgrade head."""
     return _run(["upgrade", "head"])
 
 
 def upgrade_sql() -> int:
-    """Run: alembic -c infrastructure/local/alembic.ini upgrade head --sql."""
+    """Run: alembic -c alembic.ini upgrade head --sql."""
    return _run(["upgrade", "head", "--sql"])
 
 
 def current() -> int:
-    """Run: alembic -c infrastructure/local/alembic.ini current."""
+    """Run: alembic -c alembic.ini current."""
     return _run(["current"])
diff --git a/radioshaq/radioshaq/setup.py b/radioshaq/radioshaq/setup.py
index a32b8e9..12ca3f4 100644
--- a/radioshaq/radioshaq/setup.py
+++ b/radioshaq/radioshaq/setup.py
@@ -32,7 +32,7 @@
 DB_CHOICE_URL = "url"
 DB_CHOICE_SKIP = "skip"
 COMPOSE_PATH = "infrastructure/local/docker-compose.yml"
-ALEMBIC_INI = "infrastructure/local/alembic.ini"
+ALEMBIC_INI = "alembic.ini"
 POSTGRES_PORT_DEFAULT = 5434
 
 # Roles that can have per-role LLM overrides (orchestrator, judge, whitelist, daily_summary)
@@ -73,6 +73,29 @@ def detect_existing(
     return dotenv, config_yaml, radioshaq_config
 
 
+def _read_env_value(env_path: Path, key: str) -> Optional[str]:
+    """Read a single key's value from .env (first occurrence). Used to preserve secrets when merging."""
+    if not env_path.exists():
+        return None
+    try:
+        text = env_path.read_text(encoding="utf-8")
+    except OSError:
+        return None
+    for line in text.splitlines():
+        line = line.strip()
+        if line.startswith("#") or "=" not in line:
+            continue
+        k, _, v = line.partition("=")
+        if k.strip() == key:
+            val = v.strip()
+            if val.startswith('"') and val.endswith('"'):
+                val = val[1:-1]
+            elif val.startswith("'") and val.endswith("'"):
+                val = val[1:-1]
+            return val if val else None
+    return None
+
+
 def _parse_postgres_url(url: str) -> dict[str, str]:
     """Parse postgresql[+asyncpg]://user:pass@host:port/db into POSTGRES_* components."""
     url = url.replace("postgresql+asyncpg://", "postgresql://")
@@ -109,8 +132,14 @@ def write_env(
     llm_provider: Optional[str] = None,
     llm_api_key: Optional[str] = None,
     merge: bool = False,
+    twilio_account_sid: Optional[str] = None,
+    twilio_auth_token: Optional[str] = None,
+    twilio_from_number: Optional[str] = None,
+    twilio_whatsapp_from: Optional[str] = None,
+    tts_provider: Optional[str] = None,
+    elevenlabs_api_key: Optional[str] = None,
 ) -> None:
-    """Write or merge .env with POSTGRES_*, RADIOSHAQ_MODE, JWT, LLM, and optional RADIOSHAQ_*."""
+    """Write or merge .env with POSTGRES_*, RADIOSHAQ_MODE, JWT, LLM, Twilio, TTS, and optional RADIOSHAQ_*."""
     env_path = project_root / ENV_FILENAME
     url = db_url or DEFAULT_POSTGRES_URL.replace("+asyncpg", "")
     if "postgresql://" not in url and "postgresql+asyncpg://" in url:
@@ -120,8 +149,11 @@
     override_keys = {
         "POSTGRES_HOST", "POSTGRES_PORT", "POSTGRES_DB", "POSTGRES_USER", "POSTGRES_PASSWORD",
         "RADIOSHAQ_MODE", "RADIOSHAQ_JWT__SECRET_KEY", "RADIOSHAQ_LLM__PROVIDER",
-        "MISTRAL_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY",
-        "RADIOSHAQ_LLM__MISTRAL_API_KEY", "RADIOSHAQ_LLM__OPENAI_API_KEY", "RADIOSHAQ_LLM__ANTHROPIC_API_KEY",
+        "MISTRAL_API_KEY", "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY",
+        "RADIOSHAQ_LLM__MISTRAL_API_KEY",
"RADIOSHAQ_LLM__OPENAI_API_KEY", "RADIOSHAQ_LLM__ANTHROPIC_API_KEY", "RADIOSHAQ_LLM__GEMINI_API_KEY", + "RADIOSHAQ_TWILIO__ACCOUNT_SID", "RADIOSHAQ_TWILIO__AUTH_TOKEN", + "RADIOSHAQ_TWILIO__FROM_NUMBER", "RADIOSHAQ_TWILIO__WHATSAPP_FROM", + "RADIOSHAQ_TTS__PROVIDER", "ELEVENLABS_API_KEY", } lines: list[str] = [] if merge and env_path.exists(): @@ -157,10 +189,32 @@ def write_env( "mistral": "MISTRAL_API_KEY", "openai": "OPENAI_API_KEY", "anthropic": "ANTHROPIC_API_KEY", + "huggingface": "HF_TOKEN", + "gemini": "GEMINI_API_KEY", }.get(llm_provider.lower()) if key_var: lines.append(_env_line(key_var, llm_api_key)) + if twilio_account_sid is not None or twilio_auth_token is not None or twilio_from_number is not None or twilio_whatsapp_from is not None: + lines.append("") + lines.append("# Twilio (SMS / WhatsApp)") + if twilio_account_sid is not None: + lines.append(_env_line("RADIOSHAQ_TWILIO__ACCOUNT_SID", twilio_account_sid)) + if twilio_auth_token is not None: + lines.append(_env_line("RADIOSHAQ_TWILIO__AUTH_TOKEN", twilio_auth_token)) + if twilio_from_number is not None: + lines.append(_env_line("RADIOSHAQ_TWILIO__FROM_NUMBER", twilio_from_number)) + if twilio_whatsapp_from is not None: + lines.append(_env_line("RADIOSHAQ_TWILIO__WHATSAPP_FROM", twilio_whatsapp_from)) + + if tts_provider is not None or elevenlabs_api_key is not None: + lines.append("") + lines.append("# TTS") + if tts_provider is not None: + lines.append(_env_line("RADIOSHAQ_TTS__PROVIDER", tts_provider)) + if elevenlabs_api_key is not None: + lines.append(_env_line("ELEVENLABS_API_KEY", elevenlabs_api_key)) + env_path.parent.mkdir(parents=True, exist_ok=True) env_path.write_text("\n".join(lines) + "\n", encoding="utf-8", newline="\n") @@ -286,33 +340,50 @@ def _prompt_jwt_secret() -> str: return secret -def _prompt_llm() -> tuple[str, Optional[str], Optional[str], Optional[str]]: - """Prompt for LLM provider, model, optional custom API base, and optional API key. 
Returns (provider, api_key_or_none, model_or_none, custom_api_base_or_none).""" +def _prompt_llm() -> tuple[str, Optional[str], Optional[str], Optional[str], Optional[str]]: + """Prompt for LLM provider, model, optional API bases, and optional API key. + Returns (provider, api_key_or_none, model_or_none, custom_api_base_or_none, huggingface_api_base_or_none). + """ provider = typer.prompt( - "LLM provider (mistral / openai / anthropic / custom)", + "LLM provider (mistral / openai / anthropic / custom / huggingface / gemini)", default="mistral", show_default=True, ).strip().lower() or "mistral" - if provider not in ("mistral", "openai", "anthropic", "custom"): + if provider not in ("mistral", "openai", "anthropic", "custom", "huggingface", "gemini"): provider = "mistral" + model_default = "mistral-large-latest" + if provider == "custom": + model_default = "ollama/llama2" + elif provider == "huggingface": + model_default = "Qwen/Qwen2.5-7B-Instruct-1M" + elif provider == "gemini": + model_default = "gemini-2.5-flash" model: Optional[str] = typer.prompt( - "LLM model (e.g. mistral-large-latest, ollama/llama2)", - default="mistral-large-latest" if provider != "custom" else "ollama/llama2", + "LLM model (e.g. mistral-large-latest, ollama/llama2, Qwen/Qwen2.5-7B-Instruct-1M)", + default=model_default, show_default=True, ).strip() or None custom_base: Optional[str] = None + hf_base: Optional[str] = None if provider == "custom": custom_base = typer.prompt( "Custom API base URL (e.g. http://localhost:11434 for Ollama)", default="http://localhost:11434", show_default=True, ).strip() or None + elif provider == "huggingface": + typer.echo("Hugging Face: set HF_TOKEN or paste token when prompted. 
Token needs 'Inference Providers' permission.") + hf_base = typer.prompt( + "Hugging Face API base (optional; Enter for default https://router.huggingface.co/v1)", + default="", + show_default=False, + ).strip() or None key = typer.prompt( "LLM API key (optional; press Enter to skip and set later in .env)", default="", show_default=False, ).strip() or None - return provider, key, model, custom_base + return provider, key, model, custom_base, hf_base def _run_interactive_prompts_core( @@ -321,8 +392,8 @@ def _run_interactive_prompts_core( has_config: bool, force: bool, reconfigure: bool, -) -> tuple[Optional[Config], str, str, Optional[str], str, str, Optional[str], Optional[str], Optional[str], bool, bool]: - """Run core interactive prompts. Returns (base_config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, merge_env, merge_config).""" +) -> tuple[Optional[Config], str, str, Optional[str], str, str, Optional[str], Optional[str], Optional[str], Optional[str], bool, bool]: + """Run core interactive prompts. 
Returns (base_config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, huggingface_api_base, merge_env, merge_config).""" base_config: Optional[Config] = None merge_env = False merge_config = False @@ -342,7 +413,7 @@ def _run_interactive_prompts_core( mode = _prompt_mode() db_choice, db_choice_url = _prompt_database() jwt_secret = _prompt_jwt_secret() - llm_provider, llm_key, llm_model, custom_api_base = _prompt_llm() + llm_provider, llm_key, llm_model, custom_api_base, huggingface_api_base = _prompt_llm() if db_choice == DB_CHOICE_SKIP: db_url: Optional[str] = None @@ -351,7 +422,7 @@ def _run_interactive_prompts_core( else: db_url = db_choice_url - return base_config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, merge_env, merge_config + return base_config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, huggingface_api_base, merge_env, merge_config def _run_quick_prompts() -> tuple[str, str, Optional[str]]: @@ -431,6 +502,27 @@ def _prompt_field_hq(mode_val: str) -> tuple[Optional[str], Optional[str], Optio return station_id, hq_base_url, hq_auth_token, hq_host, hq_port +def _prompt_compliance_region( + default_restricted: str = "FCC", + default_band_plan: Optional[str] = None, +) -> tuple[str, Optional[str]]: + """Prompt for compliance region/country (restricted bands + band plan). Returns (restricted_bands_region, band_plan_region).""" + typer.echo( + "Compliance region: where you operate (restricted bands + band plan). " + "Examples: FCC (US), CA (Canada), CEPT or FR (EU), AU (Australia), ZA (South Africa), NZ/JP/IN, or country code (AR, MX, NG…). " + "See docs/compliance-regulatory.md." 
+ ) + restricted = typer.prompt( + "Restricted bands region (country/region code)", + default=default_restricted or "FCC", + ).strip() or "FCC" + override = typer.prompt( + "Band plan override (ITU_R1, ITU_R3, or leave blank to use region default)", + default=default_band_plan or "", + ).strip() or None + return restricted, override + + def _prompt_station_callsign_trigger() -> tuple[Optional[str], list[str]]: """Prompt for station callsign and trigger phrases. Returns (station_callsign, trigger_phrases).""" station_callsign = typer.prompt( @@ -459,10 +551,10 @@ if not typer.confirm(f"Override LLM for role '{role}'?", default=False): continue provider = typer.prompt( - f" [{role}] LLM provider (mistral / openai / anthropic / custom)", + f" [{role}] LLM provider (mistral / openai / anthropic / custom / huggingface / gemini)", default="mistral", ).strip().lower() or "mistral" - if provider not in ("mistral", "openai", "anthropic", "custom"): + if provider not in ("mistral", "openai", "anthropic", "custom", "huggingface", "gemini"): provider = "mistral" model = typer.prompt( f" [{role}] Model (e.g. mistral-large-latest, ollama/llama2)", @@ -476,12 +568,54 @@ ).strip() if custom_base: entry["custom_api_base"] = custom_base + elif provider == "huggingface": + hf_base = typer.prompt( + f" [{role}] Hugging Face API base (optional; Enter for default)", + default="", + ).strip() + if hf_base: + entry["huggingface_api_base"] = hf_base overrides[role] = entry return overrides -def _run_reconfigure_prompts(project_root: Path, existing_config: Config) -> tuple[Config, str, str, Optional[str], str, str, Optional[str], Optional[str], Optional[str]]: - """Reconfigure: prompt which sections to change.
Returns (config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base).""" +def _prompt_twilio() -> tuple[Optional[str], Optional[str], Optional[str], Optional[str]]: + """Prompt for Twilio (SMS/WhatsApp). Returns (account_sid, auth_token, from_number, whatsapp_from).""" + if not typer.confirm("Configure Twilio (SMS / WhatsApp)?", default=False): + return None, None, None, None + typer.echo("See docs/twilio-sms-whatsapp.md. Use E.164 for phone numbers.") + account_sid = typer.prompt("Twilio Account SID (optional; leave blank to set later in .env)", default="").strip() or None + auth_token = typer.prompt("Twilio Auth Token (optional; leave blank to set later in .env)", default="", show_default=False).strip() or None + from_number = typer.prompt("SMS From number (E.164, optional)", default="").strip() or None + whatsapp_from = typer.prompt("WhatsApp From number (E.164, optional; must be WhatsApp-enabled in Twilio)", default="").strip() or None + return account_sid, auth_token, from_number, whatsapp_from + + +def _prompt_tts() -> tuple[str | None, Optional[str]]: + """Prompt for TTS provider and optional API key. 
Returns (None, None) when user declines.""" + if not typer.confirm("Configure TTS (for outbound radio/relay voice)?", default=False): + return None, None + provider = typer.prompt( + "TTS provider (elevenlabs / kokoro)", + default="elevenlabs", + show_default=True, + ).strip().lower() or "elevenlabs" + if provider not in ("elevenlabs", "kokoro"): + provider = "elevenlabs" + elevenlabs_key: Optional[str] = None + if provider == "elevenlabs": + elevenlabs_key = typer.prompt( + "ElevenLabs API key (optional; leave blank to set ELEVENLABS_API_KEY in .env later)", + default="", + show_default=False, + ).strip() or None + else: + typer.echo("Kokoro: run 'uv sync --extra tts_kokoro' for local TTS.") + return provider, elevenlabs_key + + +def _run_reconfigure_prompts(project_root: Path, existing_config: Config) -> tuple[Config, str, str, Optional[str], str, str, Optional[str], Optional[str], Optional[str], Optional[str], Optional[str]]: + """Reconfigure: prompt which sections to change. Returns (config, mode, db_choice, db_url, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, huggingface_api_base, elevenlabs_key).""" config = existing_config mode_val = config.mode.value db_choice = DB_CHOICE_URL @@ -493,11 +627,13 @@ def _run_reconfigure_prompts(project_root: Path, existing_config: Config) -> tup llm_key: Optional[str] = None llm_model_val: Optional[str] = getattr(config.llm, "model", None) custom_api_base_val: Optional[str] = getattr(config.llm, "custom_api_base", None) + huggingface_api_base_val: Optional[str] = getattr(config.llm, "huggingface_api_base", None) + elevenlabs_key_reconfigure: Optional[str] = None - sections = ["mode", "database", "jwt", "llm", "memory", "overrides", "done"] + sections = ["mode", "database", "jwt", "llm", "memory", "radio", "twilio", "tts", "overrides", "done"] while True: choice = typer.prompt( - "What to change? (mode / database / jwt / llm / memory / overrides / done)", + "What to change? 
(mode / database / jwt / llm / memory / radio / twilio / tts / overrides / done)", default="done", ).strip().lower() or "done" if choice == "done": @@ -513,17 +649,42 @@ def _run_reconfigure_prompts(project_root: Path, existing_config: Config) -> tup elif choice == "jwt": jwt_secret = _prompt_jwt_secret() elif choice == "llm": - llm_provider, llm_key, llm_model_val, custom_api_base_val = _prompt_llm() + llm_provider, llm_key, llm_model_val, custom_api_base_val, huggingface_api_base_val = _prompt_llm() elif choice == "memory": memory_enabled, hindsight_url = _prompt_memory() config.memory.enabled = memory_enabled if hindsight_url: config.memory.hindsight_base_url = hindsight_url + elif choice == "radio": + current_restricted = getattr(config.radio, "restricted_bands_region", "FCC") or "FCC" + current_band_plan = getattr(config.radio, "band_plan_region", None) or "" + restricted_region, band_plan_region = _prompt_compliance_region( + default_restricted=current_restricted, + default_band_plan=current_band_plan, + ) + config.radio.restricted_bands_region = restricted_region + config.radio.band_plan_region = band_plan_region if band_plan_region else None + elif choice == "twilio": + sid, token, from_num, whatsapp = _prompt_twilio() + if sid is not None: + config.twilio.account_sid = sid + if token is not None: + config.twilio.auth_token = token + if from_num is not None: + config.twilio.from_number = from_num + if whatsapp is not None: + config.twilio.whatsapp_from = whatsapp + elif choice == "tts": + tts_provider, elevenlabs_key_tts = _prompt_tts() + if tts_provider is not None: + config.tts.provider = tts_provider + if elevenlabs_key_tts is not None: + elevenlabs_key_reconfigure = elevenlabs_key_tts elif choice == "overrides": overrides = _prompt_llm_overrides() config.llm_overrides = overrides if overrides else None - return config, mode_val, db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model_val, custom_api_base_val + return config, mode_val, 
db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model_val, custom_api_base_val, huggingface_api_base_val, elevenlabs_key_reconfigure def run_setup( @@ -540,10 +701,13 @@ llm_provider: Optional[str] = None, llm_model: Optional[str] = None, custom_api_base: Optional[str] = None, + huggingface_api_base: Optional[str] = None, hindsight_url: Optional[str] = None, memory_enabled: Optional[bool] = None, radio_reply_tx_enabled: Optional[bool] = None, radio_reply_use_tts: Optional[bool] = None, + restricted_bands_region: Optional[str] = None, + band_plan_region: Optional[str] = None, llm_overrides: Optional[str] = None, ) -> int: """Run setup: non-interactive writes .env + config.yaml; interactive will prompt (Phase 2+). @@ -583,12 +747,14 @@ config.audio.trigger_phrases = [p.strip() for p in trigger_phrases if p and str(p).strip()] if config.audio.trigger_phrases: config.audio.audio_activation_phrase = config.audio.trigger_phrases[0] - if llm_provider and llm_provider.strip().lower() in ("mistral", "openai", "anthropic", "custom"): + if llm_provider and llm_provider.strip().lower() in ("mistral", "openai", "anthropic", "custom", "huggingface", "gemini"): config.llm.provider = LLMProvider(llm_provider.strip().lower()) if llm_model and llm_model.strip(): config.llm.model = llm_model.strip() if custom_api_base and custom_api_base.strip(): config.llm.custom_api_base = custom_api_base.strip() + if huggingface_api_base and huggingface_api_base.strip(): + config.llm.huggingface_api_base = huggingface_api_base.strip() if memory_enabled is not None: config.memory.enabled = memory_enabled if hindsight_url and hindsight_url.strip(): @@ -597,6 +763,10 @@ config.radio.radio_reply_tx_enabled = radio_reply_tx_enabled if radio_reply_use_tts is not None: config.radio.radio_reply_use_tts = radio_reply_use_tts + if restricted_bands_region and restricted_bands_region.strip(): + config.radio.restricted_bands_region =
restricted_bands_region.strip() + if band_plan_region is not None and str(band_plan_region).strip(): + config.radio.band_plan_region = str(band_plan_region).strip() if llm_overrides and llm_overrides.strip(): try: parsed = json.loads(llm_overrides.strip()) @@ -632,6 +802,7 @@ def run_setup( shutil.copy(project_root / RADIOSHAQ_CONFIG_DIR / CONFIG_FILENAME, project_root / CONFIG_FILENAME) has_config = True + reconfigure_elevenlabs_key: Optional[str] = None if quick: mode_val, db_choice, db_url_val = _run_quick_prompts() jwt_secret = DEFAULT_JWT_SECRET @@ -647,16 +818,31 @@ def run_setup( existing = load_config(project_root / CONFIG_FILENAME) except Exception: existing = Config() - base_config, mode_val, db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model_val, custom_api_base_val = _run_reconfigure_prompts(project_root, existing) + base_config, mode_val, db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model_val, custom_api_base_val, huggingface_api_base_val, elevenlabs_key_reconfigure = _run_reconfigure_prompts(project_root, existing) merge_env = True merge_config = True llm_model = llm_model_val custom_api_base = custom_api_base_val + huggingface_api_base = huggingface_api_base_val + reconfigure_elevenlabs_key = elevenlabs_key_reconfigure + # When user skipped TTS section, preserve existing ELEVENLABS_API_KEY from .env so merge does not strip it + if reconfigure_elevenlabs_key is None and merge_env: + env_path = project_root / ENV_FILENAME + existing_key = _read_env_value(env_path, "ELEVENLABS_API_KEY") + if existing_key: + reconfigure_elevenlabs_key = existing_key else: - base_config, mode_val, db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, merge_env, merge_config = _run_interactive_prompts_core( + base_config, mode_val, db_choice, db_url_val, jwt_secret, llm_provider, llm_key, llm_model, custom_api_base, huggingface_api_base, merge_env, merge_config = _run_interactive_prompts_core( project_root, 
has_dotenv, has_config, force, reconfigure ) + twilio_sid: Optional[str] = None + twilio_token: Optional[str] = None + twilio_from: Optional[str] = None + twilio_whatsapp: Optional[str] = None + tts_provider = "elevenlabs" + elevenlabs_key: Optional[str] = reconfigure_elevenlabs_key + config = base_config if (base_config and merge_config) else Config() config.mode = Mode(mode_val) if db_url_val: @@ -668,6 +854,8 @@ def run_setup( config.llm.model = llm_model.strip() if custom_api_base and str(custom_api_base).strip(): config.llm.custom_api_base = custom_api_base.strip() + if huggingface_api_base and str(huggingface_api_base).strip(): + config.llm.huggingface_api_base = huggingface_api_base.strip() # Phase 6: radio, audio, memory, field/HQ (full interactive only) if not quick: @@ -709,15 +897,42 @@ def run_setup( if trigger_phrases: config.audio.trigger_phrases = trigger_phrases config.audio.audio_activation_phrase = trigger_phrases[0] + restricted_region, band_plan_region = _prompt_compliance_region() + config.radio.restricted_bands_region = restricted_region + if band_plan_region: + config.radio.band_plan_region = band_plan_region overrides = _prompt_llm_overrides() if overrides: config.llm_overrides = overrides + sid, token, from_num, whatsapp = _prompt_twilio() + twilio_sid, twilio_token, twilio_from, twilio_whatsapp = sid, token, from_num, whatsapp + if twilio_sid is not None: + config.twilio.account_sid = twilio_sid + if twilio_token is not None: + config.twilio.auth_token = twilio_token + if twilio_from is not None: + config.twilio.from_number = twilio_from + if twilio_whatsapp is not None: + config.twilio.whatsapp_from = twilio_whatsapp + tts_provider, elevenlabs_key = _prompt_tts() + if tts_provider is not None: + config.tts.provider = tts_provider + + # Capture Twilio secrets from config before clearing (reconfigure path sets them on config only) + twilio_sid = twilio_sid or getattr(config.twilio, "account_sid", None) + twilio_token = twilio_token or 
getattr(config.twilio, "auth_token", None) + twilio_from = twilio_from or getattr(config.twilio, "from_number", None) + twilio_whatsapp = twilio_whatsapp or getattr(config.twilio, "whatsapp_from", None) # Save config file without secrets (they go in .env only) config.jwt.secret_key = "(set via RADIOSHAQ_JWT__SECRET_KEY)" config.llm.mistral_api_key = None config.llm.openai_api_key = None config.llm.anthropic_api_key = None + config.llm.custom_api_key = None + config.llm.huggingface_api_key = None + config.llm.gemini_api_key = None + config.twilio.auth_token = None config_path = project_root / CONFIG_FILENAME try: @@ -730,6 +945,12 @@ def run_setup( llm_provider=llm_provider, llm_api_key=llm_key, merge=merge_env, + twilio_account_sid=twilio_sid, + twilio_auth_token=twilio_token, + twilio_from_number=twilio_from, + twilio_whatsapp_from=twilio_whatsapp, + tts_provider=str(config.tts.provider) if getattr(config.tts, "provider", None) else None, + elevenlabs_api_key=elevenlabs_key, ) except OSError as e: typer.echo(f"Cannot write config or .env: {e}", err=True) @@ -741,20 +962,20 @@ def run_setup( do_docker = quick or typer.confirm("Start Docker Postgres and run migrations now?", default=True) if do_docker: if not _start_docker_postgres(project_root): - typer.echo("Setup wrote config but Docker failed. Start Postgres manually and run: alembic -c infrastructure/local/alembic.ini upgrade head", err=True) + typer.echo("Setup wrote config but Docker failed. Start Postgres manually and run: alembic -c alembic.ini upgrade head", err=True) return 1 typer.echo("Waiting for Postgres on port 5434...") if not _wait_for_port("127.0.0.1", POSTGRES_PORT_DEFAULT): typer.echo("Postgres did not become ready. Check: docker compose -f infrastructure/local/docker-compose.yml logs postgres", err=True) return 1 if not _run_alembic_upgrade(project_root): - typer.echo("Migrations failed. 
Run manually: alembic -c infrastructure/local/alembic.ini upgrade head", err=True) + typer.echo("Migrations failed. Run manually: alembic -c alembic.ini upgrade head", err=True) return 1 typer.echo("Migrations complete.") migrations_done = True if db_url_val and not migrations_done and (not quick) and typer.confirm("Run migrations now?", default=True): if not _run_alembic_upgrade(project_root): - typer.echo("Migrations failed. Run manually: alembic -c infrastructure/local/alembic.ini upgrade head", err=True) + typer.echo("Migrations failed. Run manually: alembic -c alembic.ini upgrade head", err=True) else: typer.echo("Migrations complete.") @@ -780,5 +1001,5 @@ def run_setup( typer.echo("Setup complete.") typer.echo("Start dependencies + API: radioshaq launch docker (or radioshaq launch docker --hindsight)") typer.echo("Then start API: radioshaq run-api (or: radioshaq launch pm2 / radioshaq launch pm2 --hindsight)") - typer.echo("See docs/quick-start.md and docs/configuration.md") + typer.echo("For all configurable options see .env.example and docs/configuration.md") return 0 diff --git a/radioshaq/radioshaq/specialized/gis_agent.py b/radioshaq/radioshaq/specialized/gis_agent.py index ee4f531..f4a6403 100644 --- a/radioshaq/radioshaq/specialized/gis_agent.py +++ b/radioshaq/radioshaq/specialized/gis_agent.py @@ -32,6 +32,7 @@ class GISAgent(SpecializedAgent): capabilities = [ "operators_nearby", "operator_location", + "set_operator_location", "propagation_prediction", ] @@ -44,13 +45,15 @@ async def execute( task: dict[str, Any], upstream_callback: Any = None, ) -> dict[str, Any]: - """Execute GIS task: operators_nearby, get_location, propagation_prediction.""" + """Execute GIS task: operators_nearby, get_location, set_location, propagation_prediction.""" action = task.get("action", "operators_nearby") if action == "operators_nearby": return await self._operators_nearby(task, upstream_callback) if action == "get_location": return await self._get_location(task, 
upstream_callback) + if action == "set_location": + return await self._set_location(task, upstream_callback) if action == "propagation_prediction": return await self._propagation_prediction(task, upstream_callback) raise ValueError(f"Unknown GIS action: {action}") @@ -58,9 +61,19 @@ async def execute( async def _operators_nearby( self, task: dict[str, Any], upstream_callback: Any ) -> dict[str, Any]: - """Find operators within radius of a point.""" - lat = float(task.get("latitude", 0)) - lon = float(task.get("longitude", 0)) + """Find operators within radius of a point. Falls back to stored location when center coords omitted.""" + lat_raw = task.get("latitude") + lon_raw = task.get("longitude") + center_provided = lat_raw is not None and lon_raw is not None + if not center_provided and self.db: + callsign = (task.get("callsign") or "").strip().upper() + if callsign: + stored = await self.db.get_latest_location_decoded(callsign) + if stored: + lat_raw = lat_raw if lat_raw is not None else stored["latitude"] + lon_raw = lon_raw if lon_raw is not None else stored["longitude"] + lat = float(lat_raw) if lat_raw is not None else 0.0 + lon = float(lon_raw) if lon_raw is not None else 0.0 radius_meters = float(task.get("radius_meters", 50000)) max_results = int(task.get("max_results", 50)) recent_hours = int(task.get("recent_hours", 24)) @@ -89,7 +102,7 @@ async def _operators_nearby( longitude=lon, radius_meters=radius_meters, max_results=max_results, - recent_only=True, + recent_only=recent_hours > 0, recent_hours=recent_hours, ) await self.emit_result(upstream_callback, {"operators": operators}) @@ -102,14 +115,14 @@ async def _operators_nearby( "count": len(operators), } except Exception as e: - logger.exception("GIS find_operators_nearby failed: %s", e) + logger.exception("GIS find_operators_nearby failed: {}", e) await self.emit_error(upstream_callback, str(e)) raise async def _get_location( self, task: dict[str, Any], upstream_callback: Any ) -> dict[str, Any]: - 
"""Get latest location for a callsign.""" + """Get latest location for a callsign (decoded lat/lon, JSON-serializable).""" callsign = (task.get("callsign") or "").strip().upper() if not callsign: return {"success": False, "error": "callsign is required"} @@ -125,7 +138,7 @@ async def _get_location( } try: - location = await self.db.get_latest_location(callsign) + location = await self.db.get_latest_location_decoded(callsign) await self.emit_result(upstream_callback, {"location": location}) return { "success": True, @@ -133,19 +146,74 @@ async def _get_location( "location": location, } except Exception as e: - logger.exception("GIS get_latest_location failed: %s", e) + logger.exception("GIS get_latest_location failed: {}", e) + await self.emit_error(upstream_callback, str(e)) + raise + + async def _set_location( + self, task: dict[str, Any], upstream_callback: Any + ) -> dict[str, Any]: + """Store operator location (source=user_disclosed) for reuse in propagation/nearby.""" + callsign = (task.get("callsign") or "").strip().upper() + if not callsign: + return {"success": False, "error": "callsign is required"} + try: + lat = float(task.get("latitude")) + lon = float(task.get("longitude")) + except (TypeError, ValueError): + return {"success": False, "error": "latitude and longitude are required (numeric)"} + + await self.emit_progress(upstream_callback, "storing", callsign=callsign, latitude=lat, longitude=lon) + + if not self.db: + return {"success": False, "error": "Database not configured"} + + try: + loc = await self.db.store_operator_location( + callsign=callsign, + latitude=lat, + longitude=lon, + altitude_meters=task.get("altitude_meters"), + accuracy_meters=task.get("accuracy_meters"), + source="user_disclosed", + ) + await self.emit_result(upstream_callback, {"location": loc}) + return { + "success": True, + "id": loc["id"], + "callsign": loc["callsign"], + "latitude": loc["latitude"], + "longitude": loc["longitude"], + "source": loc["source"], + 
"timestamp": loc["timestamp"], + } + except Exception as e: + logger.exception("GIS set_location failed: {}", e) await self.emit_error(upstream_callback, str(e)) raise async def _propagation_prediction( self, task: dict[str, Any], upstream_callback: Any ) -> dict[str, Any]: - """Simple propagation: distance between two points and band suggestion.""" - lat1 = float(task.get("latitude_origin", 0)) - lon1 = float(task.get("longitude_origin", 0)) + """Simple propagation: distance between two points and band suggestion. Uses stored location as origin only when origin not provided.""" + # Use explicit sentinel: only fall back when origin keys are missing (not when 0.0, which is valid) + lat1_raw = task.get("latitude_origin") + lon1_raw = task.get("longitude_origin") + origin_provided = lat1_raw is not None and lon1_raw is not None + lat1 = float(lat1_raw) if lat1_raw is not None else 0.0 + lon1 = float(lon1_raw) if lon1_raw is not None else 0.0 lat2 = float(task.get("latitude_destination", 0)) lon2 = float(task.get("longitude_destination", 0)) + # Fallback: use stored operator location as origin only when caller did not provide origin coords + if not origin_provided and self.db: + callsign = (task.get("callsign") or "").strip().upper() + if callsign: + stored = await self.db.get_latest_location_decoded(callsign) + if stored: + lat1 = stored["latitude"] + lon1 = stored["longitude"] + await self.emit_progress( upstream_callback, "computing", diff --git a/radioshaq/radioshaq/specialized/gis_tools.py b/radioshaq/radioshaq/specialized/gis_tools.py new file mode 100644 index 0000000..e9e38ba --- /dev/null +++ b/radioshaq/radioshaq/specialized/gis_tools.py @@ -0,0 +1,211 @@ +"""GIS tools for orchestrator (LLM-callable): set/get operator location, operators nearby.""" + +from __future__ import annotations + +import json +from typing import Any + +LAT_MIN, LAT_MAX = -90.0, 90.0 +LON_MIN, LON_MAX = -180.0, 180.0 + + +class SetOperatorLocationTool: + """Store the operator's current 
location (latitude, longitude) for later use in propagation and nearby queries."""
+
+    name = "set_operator_location"
+    description = "Store the operator's current location (latitude, longitude) for later use in propagation and nearby queries."
+
+    def __init__(self, db: Any = None) -> None:
+        self.db = db
+
+    def to_schema(self) -> dict[str, Any]:
+        return {
+            "type": "function",
+            "function": {
+                "name": self.name,
+                "description": self.description,
+                "parameters": {
+                    "type": "object",
+                    "properties": {
+                        "callsign": {"type": "string", "description": "Operator callsign"},
+                        "latitude": {"type": "number", "description": "Latitude (WGS 84)"},
+                        "longitude": {"type": "number", "description": "Longitude (WGS 84)"},
+                        "altitude_meters": {"type": "number", "description": "Optional altitude in meters"},
+                        "accuracy_meters": {"type": "number", "description": "Optional accuracy estimate in meters"},
+                    },
+                    "required": ["callsign", "latitude", "longitude"],
+                },
+            },
+        }
+
+    def validate_params(self, params: dict[str, Any]) -> list[str]:
+        errors = []
+        if not (params.get("callsign") or "").strip():
+            errors.append("callsign is required")
+        lat = params.get("latitude")
+        lon = params.get("longitude")
+        if lat is None:
+            errors.append("latitude is required")
+        elif not isinstance(lat, (int, float)) or lat < LAT_MIN or lat > LAT_MAX:
+            errors.append("latitude must be between -90 and 90")
+        if lon is None:
+            errors.append("longitude is required")
+        elif not isinstance(lon, (int, float)) or lon < LON_MIN or lon > LON_MAX:
+            errors.append("longitude must be between -180 and 180")
+        return errors
+
+    async def execute(
+        self,
+        callsign: str,
+        latitude: float,
+        longitude: float,
+        altitude_meters: float | None = None,
+        accuracy_meters: float | None = None,
+        **kwargs: Any,
+    ) -> str:
+        if self.db is None:
+            return json.dumps({"error": "Database not available"})
+        cs = str(callsign).strip().upper()
+        if not cs:
+            return json.dumps({"error": "callsign is required"})
+        try:
+            loc = await self.db.store_operator_location(
+                callsign=cs,
+                latitude=float(latitude),
+                longitude=float(longitude),
+                altitude_meters=altitude_meters,
+                accuracy_meters=accuracy_meters,
+                source="user_disclosed",
+            )
+            return json.dumps({
+                "id": loc["id"],
+                "callsign": loc["callsign"],
+                "latitude": loc["latitude"],
+                "longitude": loc["longitude"],
+                "source": loc["source"],
+                "timestamp": loc["timestamp"],
+            })
+        except Exception as e:
+            return json.dumps({"error": str(e)})
+
+
+class GetOperatorLocationTool:
+    """Get the latest stored location for a callsign."""
+
+    name = "get_operator_location"
+    description = "Get the latest stored location for a callsign (latitude, longitude, source, timestamp)."
+
+    def __init__(self, db: Any = None) -> None:
+        self.db = db
+
+    def to_schema(self) -> dict[str, Any]:
+        return {
+            "type": "function",
+            "function": {
+                "name": self.name,
+                "description": self.description,
+                "parameters": {
+                    "type": "object",
+                    "properties": {
+                        "callsign": {"type": "string", "description": "Operator callsign"},
+                    },
+                    "required": ["callsign"],
+                },
+            },
+        }
+
+    def validate_params(self, params: dict[str, Any]) -> list[str]:
+        if not (params.get("callsign") or "").strip():
+            return ["callsign is required"]
+        return []
+
+    async def execute(self, callsign: str, **kwargs: Any) -> str:
+        if self.db is None:
+            return json.dumps({"error": "Database not available"})
+        cs = str(callsign).strip().upper()
+        if not cs:
+            return json.dumps({"error": "callsign is required"})
+        loc = await self.db.get_latest_location_decoded(cs)
+        if loc is None:
+            return json.dumps({"callsign": cs, "location": None, "message": "No location stored for this callsign"})
+        return json.dumps({
+            "callsign": loc["callsign"],
+            "latitude": loc["latitude"],
+            "longitude": loc["longitude"],
+            "source": loc["source"],
+            "timestamp": loc["timestamp"],
+        })
+
+
+class OperatorsNearbyTool:
+    """Find operators within a radius of a point (latitude, longitude)."""
+
+    name = "operators_nearby"
+    description = "Find operators within a radius of a point (latitude, longitude). Returns list with distance_meters."
+
+    def __init__(self, db: Any = None) -> None:
+        self.db = db
+
+    def to_schema(self) -> dict[str, Any]:
+        return {
+            "type": "function",
+            "function": {
+                "name": self.name,
+                "description": self.description,
+                "parameters": {
+                    "type": "object",
+                    "properties": {
+                        "latitude": {"type": "number", "description": "Center latitude"},
+                        "longitude": {"type": "number", "description": "Center longitude"},
+                        "radius_meters": {"type": "number", "description": "Search radius in meters", "default": 50000},
+                        "recent_hours": {"type": "integer", "description": "Only include locations from last N hours", "default": 24},
+                        "max_results": {"type": "integer", "description": "Maximum number of results", "default": 50},
+                    },
+                    "required": ["latitude", "longitude"],
+                },
+            },
+        }
+
+    def validate_params(self, params: dict[str, Any]) -> list[str]:
+        errors = []
+        lat = params.get("latitude")
+        lon = params.get("longitude")
+        if lat is None:
+            errors.append("latitude is required")
+        elif not isinstance(lat, (int, float)) or lat < LAT_MIN or lat > LAT_MAX:
+            errors.append("latitude must be between -90 and 90")
+        if lon is None:
+            errors.append("longitude is required")
+        elif not isinstance(lon, (int, float)) or lon < LON_MIN or lon > LON_MAX:
+            errors.append("longitude must be between -180 and 180")
+        return errors
+
+    async def execute(
+        self,
+        latitude: float,
+        longitude: float,
+        radius_meters: float = 50000,
+        recent_hours: int = 24,
+        max_results: int = 50,
+        **kwargs: Any,
+    ) -> str:
+        if self.db is None:
+            return json.dumps({"operators": [], "notes": "Database not available"})
+        try:
+            operators = await self.db.find_operators_nearby(
+                latitude=float(latitude),
+                longitude=float(longitude),
+                radius_meters=float(radius_meters),
+                max_results=int(max_results),
+                recent_only=int(recent_hours) > 0,
+                recent_hours=int(recent_hours),
+            )
+            return json.dumps({
+                "latitude": latitude,
+                "longitude": longitude,
+                "radius_meters": radius_meters,
+                "operators": operators,
+                "count": len(operators),
+            })
+        except Exception as e:
+            return json.dumps({"error": str(e), "operators": []})
diff --git a/radioshaq/radioshaq/specialized/radio_rx.py b/radioshaq/radioshaq/specialized/radio_rx.py
index fec4a1f..4fcb435 100644
--- a/radioshaq/radioshaq/specialized/radio_rx.py
+++ b/radioshaq/radioshaq/specialized/radio_rx.py
@@ -8,6 +8,7 @@
 from loguru import logger
 
 from radioshaq.specialized.base import SpecializedAgent
+from radioshaq.radio.modes import external_modem_for, normalize_mode
 
 
 class RadioReceptionAgent(SpecializedAgent):
@@ -70,7 +71,7 @@ async def monitor_frequency(
         if self.rig_manager:
             await self.rig_manager.set_frequency(frequency)
-            await self.rig_manager.set_mode(mode)
+            await self.rig_manager.set_mode(str(normalize_mode(mode)))
 
         try:
             from radioshaq.radio.injection import get_injection_queue
@@ -79,7 +80,8 @@
             injection_queue = None
 
         while (loop.time() - start) < duration_seconds:
-            if self.digital_modes and mode in ("PSK31", "FT8", "RTTY", "CW"):
+            modem = external_modem_for(mode)
+            if self.digital_modes and modem and modem not in ("AX25", "APRS"):
                 try:
                     text = await asyncio.wait_for(
                         self.digital_modes.receive_text(timeout=1.0),
diff --git a/radioshaq/radioshaq/specialized/radio_rx_audio.py b/radioshaq/radioshaq/specialized/radio_rx_audio.py
index 0bd4d61..94bf29f 100644
--- a/radioshaq/radioshaq/specialized/radio_rx_audio.py
+++ b/radioshaq/radioshaq/specialized/radio_rx_audio.py
@@ -205,7 +205,7 @@ async def _notify_change(self, pending: PendingResponse) -> None:
             try:
                 await callback(pending)
             except Exception as e:
-                logger.warning("Confirmation callback error: %s", e)
+                logger.warning("Confirmation callback error: {}", e)
 
 
 class RadioAudioReceptionAgent(SpecializedAgent):
@@ -257,6 +257,28 @@ def __init__(
 
         if self.stream_processor:
             self.stream_processor.set_segment_callback(self._on_segment_ready)
 
+        self._metrics_callback: Callable[[dict[str, Any]], None] | None = None
+
+    def set_metrics_callback(self, callback: Callable[[dict[str, Any]], None] | None) -> None:
+        """Set callback for live VAD/metrics (vad_active, snr_db, state). Used by API to feed websocket."""
+        self._metrics_callback = callback
+        if self.stream_processor:
+            if callback is not None:
+                def forward(vad_active: bool, snr_db: float | None, state: str) -> None:
+                    callback({
+                        "type": "metrics",
+                        "vad_active": vad_active,
+                        "snr_db": snr_db,
+                        "state": state,
+                    })
+                self.stream_processor.set_metrics_callback(forward)
+            else:
+                self.stream_processor.set_metrics_callback(None)
+        elif callback is not None:
+            logger.warning(
+                "set_metrics_callback called but stream_processor is None; "
+                "callback will not be forwarded until stream_processor is set."
+            )
 
     async def execute(
         self,
@@ -370,7 +392,7 @@ async def _on_segment_ready(self, segment: Any) -> None:
             return
 
         # Optionally store voice transcript (band, source=voice_listener) for GET /transcripts and relay
-        if getattr(self._radio_config, "voice_store_transcript", False) and self._transcript_storage and getattr(self._transcript_storage, "_db", None):
+        if getattr(self._radio_config, "voice_store_transcript", False) and self._transcript_storage and getattr(self._transcript_storage, "db", None):
             min_len = getattr(self._radio_config, "voice_store_min_length", 0) or 0
             keywords = getattr(self._radio_config, "voice_store_keywords", None) or []
             stripped = transcript.strip()
@@ -393,7 +415,7 @@ async def _on_segment_ready(self, segment: Any) -> None:
                         metadata={"band": self._current_band or "unknown", "source": "voice_listener"},
                     )
                 except Exception as e:
-                    logger.warning("Voice transcript store failed: %s", e)
+                    logger.warning("Voice transcript store failed: {}", e)
 
         # Publish to MessageBus so orchestrator can process (default capture path)
         if getattr(self.config, "voice_publish_to_bus", True) and self._message_bus and hasattr(self._message_bus, "publish_inbound"):
@@ -412,7 +434,7 @@ async def _on_segment_ready(self, segment: Any) -> None:
                 if not ok:
                     logger.debug("Voice segment dropped (bus full)")
             except Exception as e:
-                logger.warning("Voice publish_inbound failed: %s", e)
+                logger.warning("Voice publish_inbound failed: {}", e)
 
         response_text = await self._generate_response_text(transcript)
         if self.config.response_mode == ResponseMode.LISTEN_ONLY:
@@ -525,20 +547,18 @@ async def _transcribe_segment(self, segment: Any) -> str | None:
                 sf.write(f.name, segment.audio, segment.sample_rate)
                 temp_path = f.name
             try:
-                if self.config.asr_model == "voxtral":
-                    from radioshaq.audio.asr import transcribe_audio_voxtral
-                    out = transcribe_audio_voxtral(
-                        temp_path, language=self.config.asr_language
-                    )
-                else:
-                    model = self._get_whisper_model()
-                    result = model.transcribe(temp_path)
-                    out = result.get("text", "")
+                from radioshaq.audio.asr_plugin import transcribe_audio
+                out = await asyncio.to_thread(
+                    transcribe_audio,
+                    temp_path,
+                    self.config.asr_model,
+                    language=self.config.asr_language,
+                )
                 return (out or "").strip() or None
             finally:
                 Path(temp_path).unlink(missing_ok=True)
         except Exception as e:
-            logger.exception("ASR failed: %s", e)
+            logger.exception("ASR failed: {}", e)
             return None
 
     async def _generate_response_text(self, incoming_message: str) -> str:
@@ -571,7 +591,7 @@ async def _send_response(
             result = await self.response_agent.execute(task)
             return result.get("success", False)
         except Exception as e:
-            logger.exception("Response send failed: %s", e)
+            logger.exception("Response send failed: {}", e)
             return False
 
     async def _action_transcribe_file(
@@ -584,15 +604,13 @@ async def _action_transcribe_file(
             return {"error": "audio_path required"}
         await self.emit_progress(upstream_callback, "transcribing", audio_path=audio_path)
         try:
-            if self.config.asr_model == "voxtral":
-                from radioshaq.audio.asr import transcribe_audio_voxtral
-                transcript = transcribe_audio_voxtral(
-                    audio_path, language=self.config.asr_language
-                )
-            else:
-                model = self._get_whisper_model()
-                result = model.transcribe(audio_path)
-                transcript = result.get("text", "").strip()
+            from radioshaq.audio.asr_plugin import transcribe_audio
+            transcript = await asyncio.to_thread(
+                transcribe_audio,
+                audio_path,
+                self.config.asr_model,
+                language=self.config.asr_language,
+            )
             await self.emit_result(
                 upstream_callback,
                 {"type": "transcription", "transcript": transcript, "audio_path": audio_path},
@@ -603,7 +621,7 @@
                 "model": self.config.asr_model,
             }
         except Exception as e:
-            logger.exception("ASR failed: %s", e)
+            logger.exception("ASR failed: {}", e)
             await self.emit_error(upstream_callback, str(e))
             return {"error": str(e), "audio_path": audio_path}
diff --git a/radioshaq/radioshaq/specialized/radio_tools.py b/radioshaq/radioshaq/specialized/radio_tools.py
index f840966..c6261d9 100644
--- a/radioshaq/radioshaq/specialized/radio_tools.py
+++ b/radioshaq/radioshaq/specialized/radio_tools.py
@@ -109,5 +109,5 @@ async def execute(
             )
             return f"Error: {result.get('notes', result.get('error', 'Unknown failure'))}"
         except Exception as e:
-            logger.exception("send_audio_over_radio failed: %s", e)
+            logger.exception("send_audio_over_radio failed: {}", e)
             return f"Error: {e}"
\ No newline at end of file
diff --git a/radioshaq/radioshaq/specialized/radio_tx.py b/radioshaq/radioshaq/specialized/radio_tx.py
index a638ba3..e53602a 100644
--- a/radioshaq/radioshaq/specialized/radio_tx.py
+++ b/radioshaq/radioshaq/specialized/radio_tx.py
@@ -9,9 +9,12 @@
 
 from loguru import logger
 
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
 from radioshaq.middleware.upstream import UpstreamEvent
-from radioshaq.radio.compliance import is_tx_allowed, log_tx
-from radioshaq.radio.bands import BAND_PLANS
+from radioshaq.radio.compliance import is_tx_allowed, is_tx_spectrum_allowed, log_tx
+from radioshaq.radio.modes import normalize_mode
+from radioshaq.radio.analog_mod import am_modulate, cw_tone_iq, ssb_modulate
+from radioshaq.radio.fm import nfm_modulate
 from radioshaq.specialized.base import SpecializedAgent
@@ -73,11 +76,16 @@ async def execute(
         # Compliance: do not TX on restricted or out-of-band frequencies
         radio_cfg = getattr(self.config, "radio", None) if self.config else None
         if frequency and radio_cfg and getattr(radio_cfg, "tx_allowed_bands_only", True):
+            restricted_region = getattr(radio_cfg, "restricted_bands_region", "FCC")
+            band_plan_source = get_band_plan_source_for_config(
+                restricted_region,
+                getattr(radio_cfg, "band_plan_region", None),
+            )
             if not is_tx_allowed(
                 frequency,
-                band_plan_source=BAND_PLANS,
+                band_plan_source=band_plan_source,
                 allow_tx_only_amateur_bands=True,
-                restricted_region=getattr(radio_cfg, "restricted_bands_region", "FCC"),
+                restricted_region=restricted_region,
             ):
                 err = {
                     "success": False,
@@ -114,7 +122,7 @@ async def execute(
             await self.emit_result(upstream_callback, result)
             return result
         except Exception as e:
-            logger.exception("Radio TX failed: %s", e)
+            logger.exception("Radio TX failed: {}", e)
             await self.emit_error(upstream_callback, str(e))
             raise
@@ -134,11 +142,87 @@ async def _transmit_voice(
             False,
         )
         if use_sdr and self.sdr_transmitter:
-            # SDR path: transmit tone (voice-over-SDR would need modulation later)
+            # SDR path: transmit NFM if we have an audio file, else a short tone.
             try:
-                await self.sdr_transmitter.transmit_tone(
-                    frequency_hz, duration_sec=0.5, sample_rate=2_000_000
-                )
+                mode_name = normalize_mode(mode)
+                if audio_path:
+                    try:
+                        import soundfile as sf
+                    except Exception as e:
+                        raise RuntimeError(
+                            "SDR voice TX from file requires optional deps: uv sync --extra voice_tx"
+                        ) from e
+                    tx_rate = 2_000_000
+                    loop = asyncio.get_running_loop()
+                    # Run blocking file I/O in executor to avoid stalling the event loop.
+                    data, sr = await loop.run_in_executor(
+                        None,
+                        lambda: sf.read(str(audio_path), dtype="float32", always_2d=False),
+                    )
+                    # Modulate up to the 2 Msps RF sample rate (HackRF-friendly); run CPU-heavy scipy
+                    # work in an executor to avoid blocking the event loop.
+                    if mode_name.value == "NFM":
+                        iq = await loop.run_in_executor(
+                            None, lambda: nfm_modulate(data, int(sr), tx_rate, deviation_hz=2_500.0)
+                        )
+                    elif mode_name.value == "AM":
+                        iq = await loop.run_in_executor(
+                            None, lambda: am_modulate(data, int(sr), tx_rate)
+                        )
+                    elif mode_name.value in ("USB", "LSB"):
+                        iq = await loop.run_in_executor(
+                            None, lambda: ssb_modulate(data, int(sr), tx_rate, sideband=mode_name.value)
+                        )
+                    elif mode_name.value == "CW":
+                        # CW audio TX is typically keying; as a minimal baseline, emit a short carrier.
+                        iq = await loop.run_in_executor(
+                            None, lambda: cw_tone_iq(duration_sec=0.5, rf_rate_hz=tx_rate)
+                        )
+                    else:
+                        iq = await loop.run_in_executor(
+                            None, lambda: nfm_modulate(data, int(sr), tx_rate, deviation_hz=2_500.0)
+                        )
+                    # Compliance for SDR TX should consider occupied bandwidth (not just center).
+                    bw = {
+                        "NFM": 12_500.0,
+                        "AM": 10_000.0,
+                        "USB": 3_000.0,
+                        "LSB": 3_000.0,
+                        "CW": 500.0,
+                    }.get(mode_name.value, 12_500.0)
+                    radio_cfg = getattr(self.config, "radio", None) if self.config else None
+                    if radio_cfg and getattr(radio_cfg, "tx_allowed_bands_only", True):
+                        restricted_region = getattr(radio_cfg, "restricted_bands_region", "FCC")
+                        band_plan_source = get_band_plan_source_for_config(
+                            restricted_region,
+                            getattr(radio_cfg, "band_plan_region", None),
+                        )
+                        if not is_tx_spectrum_allowed(
+                            frequency_hz,
+                            bw,
+                            band_plan_source=band_plan_source,
+                            allow_tx_only_amateur_bands=True,
+                            restricted_region=restricted_region,
+                        ):
+                            return {
+                                "success": False,
+                                "frequency": frequency_hz,
+                                "mode": mode,
+                                "transmission_type": "voice",
+                                "message_sent": message[:100],
+                                "timestamp": datetime.now(timezone.utc).isoformat(),
+                                "notes": f"TX spectrum not allowed for BW={bw} Hz (mode={mode_name.value})",
+                            }
+
+                    await self.sdr_transmitter.transmit_iq(
+                        frequency_hz,
+                        iq,
+                        sample_rate=tx_rate,
+                        occupied_bandwidth_hz=bw,
+                    )
+                else:
+                    await self.sdr_transmitter.transmit_tone(
+                        frequency_hz, duration_sec=0.5, sample_rate=2_000_000
+                    )
                 return {
                     "success": True,
                     "frequency": frequency_hz,
@@ -146,7 +230,7 @@
                     "transmission_type": "voice",
                     "message_sent": message[:100],
                     "timestamp": datetime.now(timezone.utc).isoformat(),
-                    "notes": "SDR tone (HackRF)",
+                    "notes": f"SDR {mode_name.value} voice (HackRF)" if audio_path else "SDR tone (HackRF)",
                 }
             except ValueError as e:
                 return {
@@ -167,6 +251,7 @@
                 "message_sent": message[:100],
                 "timestamp": datetime.now(timezone.utc).isoformat(),
                 "notes": "Rig manager not configured",
+                "error": "Rig manager not configured; enable radio.enabled for CAT control or configure SDR TX (radio.sdr_tx_enabled=true).",
             }
 
         play_path: str | Path | None = None
@@ -188,14 +273,29 @@ async def _transmit_voice(
                 "notes": f"Audio file not found: {play_path}",
             }
         elif (((use_tts is True) or (use_tts is None and voice_use_tts)) and message):
+            play_path: str | None = None
             try:
-                from radioshaq.audio.tts import text_to_speech_elevenlabs
                 import tempfile
-                with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
-                    text_to_speech_elevenlabs(message, output_path=f.name)
+                from radioshaq.audio.tts_plugin import synthesize_speech
+                tts_cfg = getattr(self.config, "tts", None) if self.config else None
+                provider = getattr(tts_cfg, "provider", "elevenlabs") if tts_cfg else "elevenlabs"
+                suffix = ".wav" if provider == "kokoro" else ".mp3"
+                with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as f:
                     play_path = f.name
+                kwargs: dict[str, Any] = {}
+                if tts_cfg and provider == "elevenlabs":
+                    kwargs["voice"] = getattr(tts_cfg, "elevenlabs_voice_id", None)
+                    kwargs["model_id"] = getattr(tts_cfg, "elevenlabs_model_id", None)
+                    kwargs["output_format"] = getattr(tts_cfg, "elevenlabs_output_format", None)
+                elif tts_cfg and provider == "kokoro":
+                    kwargs["voice"] = getattr(tts_cfg, "kokoro_voice", None)
+                    kwargs["lang_code"] = getattr(tts_cfg, "kokoro_lang_code", None)
+                    kwargs["speed"] = getattr(tts_cfg, "kokoro_speed", None)
+                synthesize_speech(message, provider, output_path=play_path, **kwargs)
             except Exception as e:
-                logger.warning("TTS failed for voice TX: %s", e)
+                if play_path:
+                    Path(play_path).unlink(missing_ok=True)
+                logger.warning("TTS failed for voice TX: {}", e)
                 return {
                     "success": False,
                     "frequency": frequency_hz,
diff --git a/radioshaq/radioshaq/specialized/relay_tools.py b/radioshaq/radioshaq/specialized/relay_tools.py
index 76d7582..359b40e 100644
--- a/radioshaq/radioshaq/specialized/relay_tools.py
+++ b/radioshaq/radioshaq/specialized/relay_tools.py
@@ -5,8 +5,11 @@
 import re
 from typing import Any
 
+from radioshaq.compliance_plugin import get_band_plan_source_for_config
+from radioshaq.constants import E164_PATTERN
 from radioshaq.radio.bands import BAND_PLANS
 from radioshaq.relay.service import relay_message_between_bands_service
+from radioshaq.utils.phone import normalize_e164
 
 # Optional ISO datetime for deliver_at (lenient)
 DELIVER_AT_PATTERN = re.compile(
@@ -19,10 +22,11 @@
 class RelayMessageTool:
 
     name = "relay_message_between_bands"
     description = (
-        "Relay a message from one band to another. Stores the message so the destination callsign can poll for it "
-        "(GET /transcripts?callsign=&destination_only=true&band=). "
+        "Relay a message from one band to another, or to SMS/WhatsApp. "
+        "For radio: stores the message so the destination callsign can poll (GET /transcripts?callsign=&destination_only=true&band=). "
+        "For sms/whatsapp: set target_channel to 'sms' or 'whatsapp' and destination_phone (E.164); message is delivered via Twilio. "
        "Does not broadcast or transmit unless site config enables it. "
-        "Use when the user or a radio contact asks to pass a message to another band or callsign."
+        "Use when the user or a radio contact asks to pass a message to another band, callsign, or phone."
     )
@@ -32,12 +36,14 @@ def __init__(
         storage: Any = None,
         injection_queue: Any = None,
         get_radio_tx: Any = None,
         config: Any = None,
         callsign_repository: Any = None,
+        message_bus: Any = None,
     ) -> None:
         self._storage = storage
         self._injection_queue = injection_queue
         self._get_radio_tx = get_radio_tx
         self._config = config
         self._callsign_repository = callsign_repository
+        self._message_bus = message_bus
 
     def to_schema(self) -> dict[str, Any]:
         return {
@@ -58,7 +64,7 @@ def to_schema(self) -> dict[str, Any]:
                         },
                         "target_band": {
                             "type": "string",
-                            "description": "Band to deliver the message to (e.g. 2m, 40m).",
+                            "description": "Band to deliver the message to (e.g. 2m, 40m). Required for radio; omit or use placeholder for sms/whatsapp.",
                         },
                         "source_callsign": {
                             "type": "string",
@@ -81,8 +87,22 @@ def to_schema(self) -> dict[str, Any]:
                             "type": "string",
                             "description": "ISO datetime for scheduled delivery (optional).",
                         },
+                        "target_channel": {
+                            "type": "string",
+                            "description": "Delivery channel: 'radio' (default), 'sms', or 'whatsapp'. If sms/whatsapp, destination_phone is required.",
+                            "default": "radio",
+                        },
+                        "destination_phone": {
+                            "type": "string",
+                            "description": "E.164 phone number for SMS/WhatsApp delivery when target_channel is sms or whatsapp.",
+                        },
+                        "emergency": {
+                            "type": "boolean",
+                            "description": "If true and target_channel is sms/whatsapp, message is queued for human approval (emergency contact flow). Only allowed when emergency_contact is enabled for this region.",
+                            "default": False,
+                        },
                     },
-                    "required": ["message", "source_band", "target_band"],
+                    "required": ["message", "source_band"],
                 },
             },
         }
@@ -93,13 +113,38 @@ def validate_params(self, params: dict[str, Any]) -> list[str]:
             errors.append("message is required")
         if not params.get("source_band") or not isinstance(params.get("source_band"), str):
             errors.append("source_band is required")
-        if not params.get("target_band") or not isinstance(params.get("target_band"), str):
-            errors.append("target_band is required")
         source_band = (params.get("source_band") or "").strip()
         target_band = (params.get("target_band") or "").strip()
-        if source_band and source_band not in BAND_PLANS:
+        target_channel = (params.get("target_channel") or "radio").strip().lower()
+        if target_channel == "radio" and (not target_band or not isinstance(params.get("target_band"), str)):
+            errors.append("target_band is required when target_channel is radio")
+        destination_phone = (params.get("destination_phone") or "").strip()
+        if target_channel not in ("radio", "sms", "whatsapp"):
+            errors.append("target_channel must be radio, sms, or whatsapp")
+        if target_channel in ("sms", "whatsapp") and not destination_phone:
+            errors.append("destination_phone is required when target_channel is sms or whatsapp")
+        if target_channel in ("sms", "whatsapp") and destination_phone:
+            normalised = normalize_e164(destination_phone)
+            if not E164_PATTERN.match(normalised):
+                errors.append(
+                    f"destination_phone must be E.164 (e.g. +14155552671); got: {destination_phone!r}"
+                )
+        if params.get("emergency") is True and target_channel not in ("sms", "whatsapp"):
+            errors.append("emergency only applies when target_channel is sms or whatsapp")
+        config = self._config
+        radio = getattr(config, "radio", None) if config else None
+        if not radio:
+            radio = config
+        if radio:
+            band_plans = get_band_plan_source_for_config(
+                getattr(radio, "restricted_bands_region", "FCC"),
+                getattr(radio, "band_plan_region", None),
+            )
+        else:
+            band_plans = BAND_PLANS
+        if source_band and source_band not in band_plans:
             errors.append(f"Unknown source_band: {source_band}; use e.g. 40m, 2m, 20m")
-        if target_band and target_band not in BAND_PLANS:
+        if target_channel == "radio" and target_band and target_band not in band_plans:
             errors.append(f"Unknown target_band: {target_band}; use e.g. 40m, 2m, 20m")
         if params.get("source_frequency_hz") is not None and not isinstance(
             params.get("source_frequency_hz"), (int, float)
@@ -125,9 +170,12 @@ async def execute(
         source_frequency_hz: float | None = None,
         target_frequency_hz: float | None = None,
         deliver_at: str | None = None,
+        target_channel: str = "radio",
+        destination_phone: str | None = None,
+        emergency: bool = False,
         **kwargs: Any,
     ) -> str:
-        if self._storage is None or not getattr(self._storage, "_db", None):
+        if self._storage is None or getattr(self._storage, "db", None) is None:
             return "Error: Relay not available (no storage)."
         config = self._config
@@ -168,10 +216,16 @@ async def execute(
         if callable(self._get_radio_tx):
             tx_agent = self._get_radio_tx()
 
+        # Normalize destination_phone once, after validation, so stored metadata and downstream
+        # delivery use consistent E.164 formatting.
+        normalized_destination_phone: str | None = None
+        if destination_phone and (target_channel or "radio").strip().lower() in ("sms", "whatsapp"):
+            normalized_destination_phone = normalize_e164((destination_phone or "").strip()) or None
+
         result = await relay_message_between_bands_service(
             message=message,
             source_band=source_band.strip(),
-            target_band=target_band.strip(),
+            target_band=target_band.strip() if (target_channel or "radio") == "radio" else (target_channel or "radio"),
             source_frequency_hz=source_frequency_hz,
             target_frequency_hz=target_frequency_hz,
             source_callsign=source_callsign or "UNKNOWN",
@@ -180,13 +234,23 @@ async def execute(
             storage=self._storage,
             injection_queue=self._injection_queue,
             radio_tx_agent=tx_agent,
-            config=radio_cfg,
+            config=config,
             store_only_relayed=getattr(radio_cfg, "relay_store_only_relayed", False),
+            target_channel=(target_channel or "radio").strip().lower(),
+            destination_phone=normalized_destination_phone if normalized_destination_phone is not None else (destination_phone or "").strip() or None,
+            emergency=emergency,
+            message_bus=self._message_bus,
         )
         if not result.get("ok"):
             return f"Error: {result.get('error', 'relay failed')}"
 
+        if result.get("queued_for_approval"):
+            return (
+                f"Emergency relay queued for human approval (event_id={result.get('event_id')}). "
+                "An operator must approve via POST /emergency/events/{id}/approve before the message is sent."
+            )
+
         if result.get("relay") == "no_storage":
             return (
                 "Relay accepted (no DB to store). "
@@ -197,6 +261,12 @@
         rid = result.get("relayed_transcript_id")
         dest = result.get("target_band")
         dest_cs = destination_callsign or "recipient"
+        tch = (target_channel or "radio").strip().lower()
+        if tch in ("sms", "whatsapp"):
+            return (
+                f"Relayed to {tch}. Source transcript ID: {sid}, relayed ID: {rid}. "
+                f"Message will be delivered via {tch} (relay_delivery worker + outbound handler)."
+            )
         return (
             f"Relayed. Source transcript ID: {sid}, relayed ID: {rid}. "
             f"Recipient can poll GET /transcripts?callsign={dest_cs}&destination_only=true&band={dest}."
diff --git a/radioshaq/radioshaq/specialized/scheduler_agent.py b/radioshaq/radioshaq/specialized/scheduler_agent.py
index 4c639e9..93b027c 100644
--- a/radioshaq/radioshaq/specialized/scheduler_agent.py
+++ b/radioshaq/radioshaq/specialized/scheduler_agent.py
@@ -106,7 +106,7 @@ async def _schedule_call(
                 "status": "pending",
             }
         except Exception as e:
-            logger.exception("Scheduler store_coordination_event failed: %s", e)
+            logger.exception("Scheduler store_coordination_event failed: {}", e)
             await self.emit_error(upstream_callback, str(e))
             raise
diff --git a/radioshaq/radioshaq/specialized/sms_agent.py b/radioshaq/radioshaq/specialized/sms_agent.py
index 65ad538..6d79a61 100644
--- a/radioshaq/radioshaq/specialized/sms_agent.py
+++ b/radioshaq/radioshaq/specialized/sms_agent.py
@@ -1,12 +1,15 @@
-"""SMS specialized agent (Twilio integration)."""
+"""SMS specialized agent (Twilio integration). Twilio expects E.164 for phone numbers."""
 
 from __future__ import annotations
 
+import asyncio
 from typing import Any
 
 from loguru import logger
 
+from radioshaq.constants import E164_PATTERN
 from radioshaq.specialized.base import SpecializedAgent
+from radioshaq.utils.phone import normalize_e164
 
 
 class SMSAgent(SpecializedAgent):
@@ -47,25 +50,42 @@ async def execute(
 
     async def _send(
         self, task: dict[str, Any], upstream_callback: Any
     ) -> dict[str, Any]:
-        """Send SMS via Twilio."""
-        to = (task.get("to") or "").strip()
+        """Send SMS via Twilio. Phone numbers are normalized to E.164."""
+        to = normalize_e164(task.get("to") or "")
         body = task.get("message", "") or task.get("body", "")
         await self.emit_progress(upstream_callback, "sending", to=to)
         if not to:
             return {"success": False, "error": "to (phone number) is required"}
+        if not E164_PATTERN.match(to):
+            return {
+                "success": False,
+                "error": "to must be E.164 (10–15 digits)",
+                "to": to,
+                "reason": "invalid_e164",
+            }
         if not self.twilio_client or not self.from_number:
             return {
                 "success": False,
                 "to": to,
                 "notes": "Twilio client or from_number not configured",
+                "reason": "twilio_not_configured",
+            }
+        from_e164 = normalize_e164(self.from_number)
+        if not from_e164:
+            return {
+                "success": False,
+                "to": to,
+                "notes": "from_number normalizes to empty string; check Twilio sender config",
+                "reason": "invalid_from",
             }
         try:
-            msg = self.twilio_client.messages.create(
+            msg = await asyncio.to_thread(
+                self.twilio_client.messages.create,
                 body=body,
-                from_=self.from_number,
+                from_=from_e164,
                 to=to,
             )
             result = {
@@ -77,6 +97,6 @@ async def _send(
             await self.emit_result(upstream_callback, result)
             return result
         except Exception as e:
-            logger.exception("SMS send failed: %s", e)
+            logger.exception("SMS send failed: {}", e)
             await self.emit_error(upstream_callback, str(e))
             raise
diff --git a/radioshaq/radioshaq/specialized/whatsapp_agent.py b/radioshaq/radioshaq/specialized/whatsapp_agent.py
index c5334d1..7566676 100644
--- a/radioshaq/radioshaq/specialized/whatsapp_agent.py
+++ b/radioshaq/radioshaq/specialized/whatsapp_agent.py
@@ -1,28 +1,33 @@
-"""WhatsApp specialized agent (adapt from nanobot)."""
+"""WhatsApp specialized agent (Twilio WhatsApp Business API)."""
 
 from __future__ import annotations
 
+import asyncio
 from typing import Any
 
+from loguru import logger
+
 from radioshaq.specialized.base import SpecializedAgent
+from radioshaq.utils.phone import normalize_e164
 
 
 class WhatsAppAgent(SpecializedAgent):
     """
-    Specialized agent for WhatsApp message send/receive.
-    Intended to wrap nanobot WhatsApp channel logic when integrated.
+    Specialized agent for WhatsApp message send/receive via Twilio.
+    Uses same Twilio client as SMS with from_/to as whatsapp:+E.164.
     """
 
     name = "whatsapp"
-    description = "Sends and receives messages via WhatsApp"
+    description = "Sends and receives messages via WhatsApp (Twilio)"
     capabilities = [
         "whatsapp_send",
         "whatsapp_receive",
     ]
 
-    def __init__(self, client: Any = None):
-        """Optional: nanobot WhatsApp client or similar when integrated."""
+    def __init__(self, client: Any = None, from_number: str | None = None):
+        """Twilio REST client and WhatsApp sender number (E.164); both required for send."""
         self.client = client
+        self.from_number = from_number
 
     async def execute(
         self,
@@ -45,34 +50,59 @@ async def execute(
 
     async def _send_message(
         self, task: dict[str, Any], upstream_callback: Any
     ) -> dict[str, Any]:
-        """Send a WhatsApp message to a chat/phone."""
+        """Send a WhatsApp message to a chat/phone (E.164)."""
         to = task.get("to") or task.get("chat_id") or ""
         message = task.get("message", "")
         await self.emit_progress(upstream_callback, "sending", to=to)
-        if not self.client:
+        if not self.client or not self.from_number:
             return {
                 "success": False,
                 "to": to,
-                "message_sent": message[:100],
-                "notes": "WhatsApp client not configured; integrate nanobot for full support.",
+                "message_sent": (message or "")[:100],
+                "notes": "Twilio WhatsApp not configured (client or whatsapp_from missing).",
             }
         try:
-            # Placeholder: when nanobot is integrated, call client.send(...)
             result = await self._do_send(to, message)
             await self.emit_result(upstream_callback, result)
             return result
         except Exception as e:
+            logger.exception("WhatsApp send failed: {}", e)
             await self.emit_error(upstream_callback, str(e))
             raise
 
     async def _do_send(self, to: str, message: str) -> dict[str, Any]:
-        """Override in integration to use nanobot client."""
-        return {
-            "success": True,
-            "to": to,
-            "message_sent": message[:100],
-            "notes": "No WhatsApp client configured",
-        }
+        """Send via Twilio WhatsApp: from_ and to use whatsapp:+E.164."""
+        to_e164 = normalize_e164(to)
+        from_e164 = normalize_e164(self.from_number)
+        if not to_e164:
+            return {
+                "success": False,
+                "to": to,
+                "notes": "to (phone number) is required",
+            }
+        if not from_e164:
+            return {
+                "success": False,
+                "to": to,
+                "notes": "from_number normalizes to empty string; check Twilio WhatsApp sender config",
+            }
+        try:
+            msg = await asyncio.to_thread(
+                self.client.messages.create,
+                body=message or "",
+                from_="whatsapp:" + from_e164,
+                to="whatsapp:" + to_e164,
+            )
+            return {
+                "success": True,
+                "to": to_e164,
+                "sid": msg.sid,
+                "status": getattr(msg, "status", None),
+                "message_sent": (message or "")[:100],
+            }
+        except Exception as e:
+            logger.exception("WhatsApp _do_send failed: {}", e)
+            raise
diff --git a/radioshaq/radioshaq/specialized/whitelist_agent.py b/radioshaq/radioshaq/specialized/whitelist_agent.py
index 9930405..62557d7 100644
--- a/radioshaq/radioshaq/specialized/whitelist_agent.py
+++ b/radioshaq/radioshaq/specialized/whitelist_agent.py
@@ -93,7 +93,7 @@ async def execute(
             )
             content = (response.content or "").strip()
         except Exception as e:
-            logger.exception("Whitelist LLM call failed: %s", e)
+            logger.exception("Whitelist LLM call failed: {}", e)
             return {
                 "approved": False,
                 "reason": "Evaluation failed.",
@@ -129,7 +129,7 @@ async def execute(
             if raw_callsign and isinstance(raw_callsign, str):
                 callsign_from_llm = raw_callsign.strip().upper()
         except (json.JSONDecodeError, TypeError) as e:
-            logger.warning("Whitelist LLM response not valid JSON: %s", e)
+            logger.warning("Whitelist LLM response not valid JSON: {}", e)
             reason = "Evaluation could not be parsed."
 
         callsign_to_register = stated_callsign or callsign_from_llm
@@ -144,7 +144,7 @@ async def execute(
                 await self.repository.register(callsign_to_register, source="whitelist")
                 callsign_registered = callsign_to_register
             except Exception as e:
-                logger.warning("Whitelist register failed: %s", e)
+                logger.warning("Whitelist register failed: {}", e)
                 reason = f"{reason} Registration failed."
                 callsign_registered = None
diff --git a/radioshaq/radioshaq/utils/__init__.py b/radioshaq/radioshaq/utils/__init__.py
new file mode 100644
index 0000000..13f6535
--- /dev/null
+++ b/radioshaq/radioshaq/utils/__init__.py
@@ -0,0 +1 @@
+"""Utility modules for RadioShaq."""
diff --git a/radioshaq/radioshaq/utils/phone.py b/radioshaq/radioshaq/utils/phone.py
new file mode 100644
index 0000000..644887f
--- /dev/null
+++ b/radioshaq/radioshaq/utils/phone.py
@@ -0,0 +1,11 @@
+"""Phone number normalization (E.164)."""
+
+from __future__ import annotations
+
+import re
+
+
+def normalize_e164(phone: str) -> str:
+    """Normalize to E.164: optional +, digits only (10–15 chars)."""
+    digits = re.sub(r"\D", "", (phone or "").strip())
+    return "+" + digits if digits else ""
diff --git a/radioshaq/radioshaq/vendor/nanobot/bus/__init__.py b/radioshaq/radioshaq/vendor/nanobot/bus/__init__.py
index 08e7cde..67c0f23 100644
--- a/radioshaq/radioshaq/vendor/nanobot/bus/__init__.py
+++ b/radioshaq/radioshaq/vendor/nanobot/bus/__init__.py
@@ -1,4 +1,4 @@
-"""Message bus system for SHAKODS (vendored from nanobot)."""
+"""Message bus system for RadioShaq (vendored from nanobot)."""
 
 from radioshaq.vendor.nanobot.bus.events import InboundMessage, OutboundMessage
 from radioshaq.vendor.nanobot.bus.queue import MessageBus
diff --git a/radioshaq/radioshaq/vendor/vibe/middleware.py b/radioshaq/radioshaq/vendor/vibe/middleware.py
index 856586e..c090b47 100644
--- a/radioshaq/radioshaq/vendor/vibe/middleware.py
+++ b/radioshaq/radioshaq/vendor/vibe/middleware.py
@@ -1,4 +1,4 @@
-"""Middleware system for SHAKODS (adapted from vibe).
+"""Middleware system for RadioShaq (adapted from vibe).
 
 Provides a middleware pipeline for processing conversations
 through the REACT loop with support for context management, limits, and
diff --git a/radioshaq/radioshaq/web_ui/assets/index-BsmUas16.js b/radioshaq/radioshaq/web_ui/assets/index-BsmUas16.js
deleted file mode 100644
index 286a01b..0000000
--- a/radioshaq/radioshaq/web_ui/assets/index-BsmUas16.js
+++ /dev/null
@@ -1,51 +0,0 @@
-(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const o of i.addedNodes)o.tagName==="LINK"&&o.rel==="modulepreload"&&r(o)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();function kf(e){return e&&e.__esModule&&Object.prototype.hasOwnProperty.call(e,"default")?e.default:e}var Ha={exports:{}},zl={},Va={exports:{}},I={};/**
- * @license React
- * react.production.min.js
- *
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
t;if(m===n&&++s===l&&(u=o),m===i&&++h===r&&(a=o),(w=d.nextSibling)!==null)break;d=m,m=d.parentNode}d=w}n=u===-1||a===-1?null:{start:u,end:a}}else n=null}n=n||{start:0,end:0}}else n=null;for(Gi={focusedElem:e,selectionRange:n},fl=!1,N=t;N!==null;)if(t=N,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,N=e;else for(;N!==null;){t=N;try{var v=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(v!==null){var x=v.memoizedProps,k=v.memoizedState,f=t.stateNode,c=f.getSnapshotBeforeUpdate(t.elementType===t.type?x:De(t.type,x),k);f.__reactInternalSnapshotBeforeUpdate=c}break;case 3:var p=t.stateNode.containerInfo;p.nodeType===1?p.textContent="":p.nodeType===9&&p.documentElement&&p.removeChild(p.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(E(163))}}catch(S){J(t,t.return,S)}if(e=t.sibling,e!==null){e.return=t.return,N=e;break}N=t.return}return v=xa,xa=!1,v}function Zn(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&co(t,n,i)}l=l.next}while(l!==r)}}function Bl(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function fo(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function $c(e){var t=e.alternate;t!==null&&(e.alternate=null,$c(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[Ye],delete t[cr],delete t[Zi],delete t[dp],delete t[pp])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Ac(e){return e.tag===5||e.tag===3||e.tag===4}function ka(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Ac(e.return))return 
null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function po(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=hl));else if(r!==4&&(e=e.child,e!==null))for(po(e,t,n),e=e.sibling;e!==null;)po(e,t,n),e=e.sibling}function ho(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(ho(e,t,n),e=e.sibling;e!==null;)ho(e,t,n),e=e.sibling}var ie=null,Me=!1;function pt(e,t,n){for(n=n.child;n!==null;)Uc(e,t,n),n=n.sibling}function Uc(e,t,n){if(Ge&&typeof Ge.onCommitFiberUnmount=="function")try{Ge.onCommitFiberUnmount(Ol,n)}catch{}switch(n.tag){case 5:ce||sn(n,t);case 6:var r=ie,l=Me;ie=null,pt(e,t,n),ie=r,Me=l,ie!==null&&(Me?(e=ie,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):ie.removeChild(n.stateNode));break;case 18:ie!==null&&(Me?(e=ie,n=n.stateNode,e.nodeType===8?di(e.parentNode,n):e.nodeType===1&&di(e,n),ir(e)):di(ie,n.stateNode));break;case 4:r=ie,l=Me,ie=n.stateNode.containerInfo,Me=!0,pt(e,t,n),ie=r,Me=l;break;case 0:case 11:case 14:case 15:if(!ce&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,o=i.destroy;i=i.tag,o!==void 0&&(i&2||i&4)&&co(n,t,o),l=l.next}while(l!==r)}pt(e,t,n);break;case 1:if(!ce&&(sn(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(u){J(n,t,u)}pt(e,t,n);break;case 21:pt(e,t,n);break;case 22:n.mode&1?(ce=(r=ce)||n.memoizedState!==null,pt(e,t,n),ce=r):pt(e,t,n);break;default:pt(e,t,n)}}function Ea(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var 
n=e.stateNode;n===null&&(n=e.stateNode=new Np),t.forEach(function(r){var l=Ap.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Fe(e,t){var n=t.deletions;if(n!==null)for(var r=0;rl&&(l=o),r&=~i}if(r=l,r=q()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*zp(r/1960))-r,10e?16:e,wt===null)var r=!1;else{if(e=wt,wt=null,jl=0,F&6)throw Error(E(331));var l=F;for(F|=4,N=e.current;N!==null;){var i=N,o=i.child;if(N.flags&16){var u=i.deletions;if(u!==null){for(var a=0;aq()-uu?At(e,0):ou|=n),Se(e,t)}function Gc(e,t){t===0&&(e.mode&1?(t=Tr,Tr<<=1,!(Tr&130023424)&&(Tr=4194304)):t=1);var n=pe();e=at(e,t),e!==null&&(wr(e,t,n),Se(e,n))}function $p(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),Gc(e,n)}function Ap(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(E(314))}r!==null&&r.delete(t),Gc(e,n)}var Xc;Xc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||ve.current)ye=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return ye=!1,_p(e,t,n);ye=!!(e.flags&131072)}else ye=!1,V&&t.flags&1048576&&bs(t,vl,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;br(e,t),e=t.pendingProps;var l=vn(t,fe.current);mn(t,n),l=eu(null,t,r,e,l,n);var i=tu();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,we(r)?(i=!0,gl(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,Xo(t),l.updater=Ul,t.stateNode=l,l._reactInternals=t,ro(t,r,e,n),t=oo(null,t,r,!0,i,n)):(t.tag=0,V&&i&&Wo(t),de(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(br(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=Bp(r),e=De(r,e),l){case 0:t=io(null,t,r,e,n);break e;case 1:t=va(null,t,r,e,n);break e;case 11:t=ga(null,t,r,e,n);break e;case 14:t=ya(null,t,r,De(r.type,e),n);break e}throw Error(E(306,r,""))}return t;case 0:return 
r=t.type,l=t.pendingProps,l=t.elementType===r?l:De(r,l),io(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:De(r,l),va(e,t,r,l,n);case 3:e:{if(zc(t),e===null)throw Error(E(387));r=t.pendingProps,i=t.memoizedState,l=i.element,ic(e,t),xl(t,r,null,n);var o=t.memoizedState;if(r=o.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=kn(Error(E(423)),t),t=wa(e,t,r,n,l);break e}else if(r!==l){l=kn(Error(E(424)),t),t=wa(e,t,r,n,l);break e}else for(ke=Et(t.stateNode.containerInfo.firstChild),Ee=t,V=!0,$e=null,n=rc(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(wn(),r===l){t=st(e,t,n);break e}de(e,t,r,n)}t=t.child}return t;case 5:return oc(t),e===null&&eo(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,o=l.children,Xi(r,l)?o=null:i!==null&&Xi(r,i)&&(t.flags|=32),Tc(e,t),de(e,t,o,n),t.child;case 6:return e===null&&eo(t),null;case 13:return Oc(e,t,n);case 4:return Jo(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=Sn(t,null,r,n):de(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:De(r,l),ga(e,t,r,l,n);case 7:return de(e,t,t.pendingProps,n),t.child;case 8:return de(e,t,t.pendingProps.children,n),t.child;case 12:return de(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,o=l.value,B(wl,r._currentValue),r._currentValue=o,i!==null)if(Be(i.value,o)){if(i.children===l.children&&!ve.current){t=st(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var u=i.dependencies;if(u!==null){o=i.child;for(var a=u.firstContext;a!==null;){if(a.context===r){if(i.tag===1){a=it(-1,n&-n),a.tag=2;var s=i.updateQueue;if(s!==null){s=s.shared;var 
h=s.pending;h===null?a.next=a:(a.next=h.next,h.next=a),s.pending=a}}i.lanes|=n,a=i.alternate,a!==null&&(a.lanes|=n),to(i.return,n,t),u.lanes|=n;break}a=a.next}}else if(i.tag===10)o=i.type===t.type?null:i.child;else if(i.tag===18){if(o=i.return,o===null)throw Error(E(341));o.lanes|=n,u=o.alternate,u!==null&&(u.lanes|=n),to(o,n,t),o=i.sibling}else o=i.child;if(o!==null)o.return=i;else for(o=i;o!==null;){if(o===t){o=null;break}if(i=o.sibling,i!==null){i.return=o.return,o=i;break}o=o.return}i=o}de(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,mn(t,n),l=ze(l),r=r(l),t.flags|=1,de(e,t,r,n),t.child;case 14:return r=t.type,l=De(r,t.pendingProps),l=De(r.type,l),ya(e,t,r,l,n);case 15:return Nc(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:De(r,l),br(e,t),t.tag=1,we(r)?(e=!0,gl(t)):e=!1,mn(t,n),_c(t,r,l),ro(t,r,l,n),oo(null,t,r,!0,e,n);case 19:return Ic(e,t,n);case 22:return Lc(e,t,n)}throw Error(E(156,t.tag))};function Jc(e,t){return Es(e,t)}function Up(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Le(e,t,n,r){return new Up(e,t,n,r)}function fu(e){return e=e.prototype,!(!e||!e.isReactComponent)}function Bp(e){if(typeof e=="function")return fu(e)?1:0;if(e!=null){if(e=e.$$typeof,e===Lo)return 11;if(e===To)return 14}return 2}function jt(e,t){var n=e.alternate;return 
n===null?(n=Le(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function nl(e,t,n,r,l,i){var o=2;if(r=e,typeof e=="function")fu(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case bt:return Ut(n.children,l,i,t);case No:o=8,l|=8;break;case ji:return e=Le(12,n,t,l|2),e.elementType=ji,e.lanes=i,e;case Pi:return e=Le(13,n,t,l),e.elementType=Pi,e.lanes=i,e;case Ni:return e=Le(19,n,t,l),e.elementType=Ni,e.lanes=i,e;case os:return Hl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ls:o=10;break e;case is:o=9;break e;case Lo:o=11;break e;case To:o=14;break e;case ht:o=16,r=null;break e}throw Error(E(130,e==null?e:typeof e,""))}return t=Le(o,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Ut(e,t,n,r){return e=Le(7,e,r,t),e.lanes=n,e}function Hl(e,t,n,r){return e=Le(22,e,r,t),e.elementType=os,e.lanes=n,e.stateNode={isHidden:!1},e}function Si(e,t,n){return e=Le(6,e,null,t),e.lanes=n,e}function xi(e,t,n){return t=Le(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function 
Wp(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=ti(0),this.expirationTimes=ti(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=ti(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function du(e,t,n,r,l,i,o,u,a){return e=new Wp(e,t,n,u,a),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Le(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},Xo(i),e}function Hp(e,t,n){var r=3"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(ef)}catch(e){console.error(e)}}ef(),es.exports=Re;var Gp=es.exports,Ta=Gp;Ri.createRoot=Ta.createRoot,Ri.hydrateRoot=Ta.hydrateRoot;/** - * react-router v7.13.1 - * - * Copyright (c) Remix Software Inc. - * - * This source code is licensed under the MIT license found in the - * LICENSE.md file in the root directory of this source tree. 
- * - * @license MIT - */var za="popstate";function Oa(e){return typeof e=="object"&&e!=null&&"pathname"in e&&"search"in e&&"hash"in e&&"state"in e&&"key"in e}function Xp(e={}){function t(r,l){var s;let i=(s=l.state)==null?void 0:s.masked,{pathname:o,search:u,hash:a}=i||r.location;return wo("",{pathname:o,search:u,hash:a},l.state&&l.state.usr||null,l.state&&l.state.key||"default",i?{pathname:r.location.pathname,search:r.location.search,hash:r.location.hash}:void 0)}function n(r,l){return typeof l=="string"?l:yr(l)}return Zp(t,n,null,e)}function G(e,t){if(e===!1||e===null||typeof e>"u")throw new Error(t)}function Ze(e,t){if(!e){typeof console<"u"&&console.warn(t);try{throw new Error(t)}catch{}}}function Jp(){return Math.random().toString(36).substring(2,10)}function Ia(e,t){return{usr:e.state,key:e.key,idx:t,masked:e.unstable_mask?{pathname:e.pathname,search:e.search,hash:e.hash}:void 0}}function wo(e,t,n=null,r,l){return{pathname:typeof e=="string"?e:e.pathname,search:"",hash:"",...typeof t=="string"?jn(t):t,state:n,key:t&&t.key||r||Jp(),unstable_mask:l}}function yr({pathname:e="/",search:t="",hash:n=""}){return t&&t!=="?"&&(e+=t.charAt(0)==="?"?t:"?"+t),n&&n!=="#"&&(e+=n.charAt(0)==="#"?n:"#"+n),e}function jn(e){let t={};if(e){let n=e.indexOf("#");n>=0&&(t.hash=e.substring(n),e=e.substring(0,n));let r=e.indexOf("?");r>=0&&(t.search=e.substring(r),e=e.substring(0,r)),e&&(t.pathname=e)}return t}function Zp(e,t,n,r={}){let{window:l=document.defaultView,v5Compat:i=!1}=r,o=l.history,u="POP",a=null,s=h();s==null&&(s=0,o.replaceState({...o.state,idx:s},""));function h(){return(o.state||{idx:null}).idx}function d(){u="POP";let k=h(),f=k==null?null:k-s;s=k,a&&a({action:u,location:x.location,delta:f})}function m(k,f){u="PUSH";let c=Oa(k)?k:wo(x.location,k,f);s=h()+1;let p=Ia(c,s),S=x.createHref(c.unstable_mask||c);try{o.pushState(p,"",S)}catch(C){if(C instanceof DOMException&&C.name==="DataCloneError")throw 
C;l.location.assign(S)}i&&a&&a({action:u,location:x.location,delta:1})}function w(k,f){u="REPLACE";let c=Oa(k)?k:wo(x.location,k,f);s=h();let p=Ia(c,s),S=x.createHref(c.unstable_mask||c);o.replaceState(p,"",S),i&&a&&a({action:u,location:x.location,delta:0})}function v(k){return qp(k)}let x={get action(){return u},get location(){return e(l,o)},listen(k){if(a)throw new Error("A history only accepts one active listener");return l.addEventListener(za,d),a=k,()=>{l.removeEventListener(za,d),a=null}},createHref(k){return t(l,k)},createURL:v,encodeLocation(k){let f=v(k);return{pathname:f.pathname,search:f.search,hash:f.hash}},push:m,replace:w,go(k){return o.go(k)}};return x}function qp(e,t=!1){let n="http://localhost";typeof window<"u"&&(n=window.location.origin!=="null"?window.location.origin:window.location.href),G(n,"No window.location.(origin|href) available to create URL");let r=typeof e=="string"?e:yr(e);return r=r.replace(/ $/,"%20"),!t&&r.startsWith("//")&&(r=n+r),new URL(r,n)}function tf(e,t,n="/"){return bp(e,t,n,!1)}function bp(e,t,n,r){let l=typeof t=="string"?jn(t):t,i=ct(l.pathname||"/",n);if(i==null)return null;let o=nf(e);eh(o);let u=null;for(let a=0;u==null&&a{let h={relativePath:s===void 0?o.path||"":s,caseSensitive:o.caseSensitive===!0,childrenIndex:u,route:o};if(h.relativePath.startsWith("/")){if(!h.relativePath.startsWith(r)&&a)return;G(h.relativePath.startsWith(r),`Absolute route path "${h.relativePath}" nested under path "${r}" is not valid. An absolute child route path must start with the combined path of all its parent routes.`),h.relativePath=h.relativePath.slice(r.length)}let d=Je([r,h.relativePath]),m=n.concat(h);o.children&&o.children.length>0&&(G(o.index!==!0,`Index routes must not have child routes. 
Please remove all child routes from route path "${d}".`),nf(o.children,t,m,d,a)),!(o.path==null&&!o.index)&&t.push({path:d,score:uh(d,o.index),routesMeta:m})};return e.forEach((o,u)=>{var a;if(o.path===""||!((a=o.path)!=null&&a.includes("?")))i(o,u);else for(let s of rf(o.path))i(o,u,!0,s)}),t}function rf(e){let t=e.split("/");if(t.length===0)return[];let[n,...r]=t,l=n.endsWith("?"),i=n.replace(/\?$/,"");if(r.length===0)return l?[i,""]:[i];let o=rf(r.join("/")),u=[];return u.push(...o.map(a=>a===""?i:[i,a].join("/"))),l&&u.push(...o),u.map(a=>e.startsWith("/")&&a===""?"/":a)}function eh(e){e.sort((t,n)=>t.score!==n.score?n.score-t.score:ah(t.routesMeta.map(r=>r.childrenIndex),n.routesMeta.map(r=>r.childrenIndex)))}var th=/^:[\w-]+$/,nh=3,rh=2,lh=1,ih=10,oh=-2,Fa=e=>e==="*";function uh(e,t){let n=e.split("/"),r=n.length;return n.some(Fa)&&(r+=oh),t&&(r+=rh),n.filter(l=>!Fa(l)).reduce((l,i)=>l+(th.test(i)?nh:i===""?lh:ih),r)}function ah(e,t){return e.length===t.length&&e.slice(0,-1).every((r,l)=>r===t[l])?e[e.length-1]-t[t.length-1]:0}function sh(e,t,n=!1){let{routesMeta:r}=e,l={},i="/",o=[];for(let u=0;u{if(h==="*"){let v=u[m]||"";o=i.slice(0,i.length-v.length).replace(/(.)\/+$/,"$1")}const w=u[m];return d&&!w?s[h]=void 0:s[h]=(w||"").replace(/%2F/g,"/"),s},{}),pathname:i,pathnameBase:o,pattern:e}}function ch(e,t=!1,n=!0){Ze(e==="*"||!e.endsWith("*")||e.endsWith("/*"),`Route path "${e}" will be treated as if it were "${e.replace(/\*$/,"/*")}" because the \`*\` character must always follow a \`/\` in the pattern. 
To get rid of this warning, please change the route path to "${e.replace(/\*$/,"/*")}".`);let r=[],l="^"+e.replace(/\/*\*?$/,"").replace(/^\/*/,"/").replace(/[\\.*+^${}|()[\]]/g,"\\$&").replace(/\/:([\w-]+)(\?)?/g,(o,u,a,s,h)=>{if(r.push({paramName:u,isOptional:a!=null}),a){let d=h.charAt(s+o.length);return d&&d!=="/"?"/([^\\/]*)":"(?:/([^\\/]*))?"}return"/([^\\/]+)"}).replace(/\/([\w-]+)\?(\/|$)/g,"(/$1)?$2");return e.endsWith("*")?(r.push({paramName:"*"}),l+=e==="*"||e==="/*"?"(.*)$":"(?:\\/(.+)|\\/*)$"):n?l+="\\/*$":e!==""&&e!=="/"&&(l+="(?:(?=\\/|$))"),[new RegExp(l,t?void 0:"i"),r]}function fh(e){try{return e.split("/").map(t=>decodeURIComponent(t).replace(/\//g,"%2F")).join("/")}catch(t){return Ze(!1,`The URL path "${e}" could not be decoded because it is a malformed URL segment. This is probably due to a bad percent encoding (${t}).`),e}}function ct(e,t){if(t==="/")return e;if(!e.toLowerCase().startsWith(t.toLowerCase()))return null;let n=t.endsWith("/")?t.length-1:t.length,r=e.charAt(n);return r&&r!=="/"?null:e.slice(n)||"/"}var dh=/^(?:[a-z][a-z0-9+.-]*:|\/\/)/i;function ph(e,t="/"){let{pathname:n,search:r="",hash:l=""}=typeof e=="string"?jn(e):e,i;return n?(n=n.replace(/\/\/+/g,"/"),n.startsWith("/")?i=Da(n.substring(1),"/"):i=Da(n,t)):i=t,{pathname:i,search:gh(r),hash:yh(l)}}function Da(e,t){let n=t.replace(/\/+$/,"").split("/");return e.split("/").forEach(l=>{l===".."?n.length>1&&n.pop():l!=="."&&n.push(l)}),n.length>1?n.join("/"):"/"}function ki(e,t,n,r){return`Cannot include a '${e}' character in a manually specified \`to.${t}\` field [${JSON.stringify(r)}]. Please separate it out to the \`to.${n}\` field. 
Alternatively you may provide the full path as a string in and the router will parse it for you.`}function hh(e){return e.filter((t,n)=>n===0||t.route.path&&t.route.path.length>0)}function lf(e){let t=hh(e);return t.map((n,r)=>r===t.length-1?n.pathname:n.pathnameBase)}function gu(e,t,n,r=!1){let l;typeof e=="string"?l=jn(e):(l={...e},G(!l.pathname||!l.pathname.includes("?"),ki("?","pathname","search",l)),G(!l.pathname||!l.pathname.includes("#"),ki("#","pathname","hash",l)),G(!l.search||!l.search.includes("#"),ki("#","search","hash",l)));let i=e===""||l.pathname==="",o=i?"/":l.pathname,u;if(o==null)u=n;else{let d=t.length-1;if(!r&&o.startsWith("..")){let m=o.split("/");for(;m[0]==="..";)m.shift(),d-=1;l.pathname=m.join("/")}u=d>=0?t[d]:"/"}let a=ph(l,u),s=o&&o!=="/"&&o.endsWith("/"),h=(i||o===".")&&n.endsWith("/");return!a.pathname.endsWith("/")&&(s||h)&&(a.pathname+="/"),a}var Je=e=>e.join("/").replace(/\/\/+/g,"/"),mh=e=>e.replace(/\/+$/,"").replace(/^\/*/,"/"),gh=e=>!e||e==="?"?"":e.startsWith("?")?e:"?"+e,yh=e=>!e||e==="#"?"":e.startsWith("#")?e:"#"+e,vh=class{constructor(e,t,n,r=!1){this.status=e,this.statusText=t||"",this.internal=r,n instanceof Error?(this.data=n.toString(),this.error=n):this.data=n}};function wh(e){return e!=null&&typeof e.status=="number"&&typeof e.statusText=="string"&&typeof e.internal=="boolean"&&"data"in e}function Sh(e){return e.map(t=>t.route.path).filter(Boolean).join("/").replace(/\/\/*/g,"/")||"/"}var of=typeof window<"u"&&typeof window.document<"u"&&typeof window.document.createElement<"u";function uf(e,t){let n=e;if(typeof n!="string"||!dh.test(n))return{absoluteURL:void 0,isExternal:!1,to:n};let r=n,l=!1;if(of)try{let i=new URL(window.location.href),o=n.startsWith("//")?new URL(i.protocol+n):new URL(n),u=ct(o.pathname,t);o.origin===i.origin&&u!=null?n=u+o.search+o.hash:l=!0}catch{Ze(!1,` contains an invalid URL which will probably break when clicked - please update to a valid URL 
path.`)}return{absoluteURL:r,isExternal:l,to:n}}Object.getOwnPropertyNames(Object.prototype).sort().join("\0");var af=["POST","PUT","PATCH","DELETE"];new Set(af);var xh=["GET",...af];new Set(xh);var Pn=y.createContext(null);Pn.displayName="DataRouter";var Gl=y.createContext(null);Gl.displayName="DataRouterState";var kh=y.createContext(!1),sf=y.createContext({isTransitioning:!1});sf.displayName="ViewTransition";var Eh=y.createContext(new Map);Eh.displayName="Fetchers";var Ch=y.createContext(null);Ch.displayName="Await";var Ie=y.createContext(null);Ie.displayName="Navigation";var Er=y.createContext(null);Er.displayName="Location";var qe=y.createContext({outlet:null,matches:[],isDataRoute:!1});qe.displayName="Route";var yu=y.createContext(null);yu.displayName="RouteError";var cf="REACT_ROUTER_ERROR",Rh="REDIRECT",_h="ROUTE_ERROR_RESPONSE";function jh(e){if(e.startsWith(`${cf}:${Rh}:{`))try{let t=JSON.parse(e.slice(28));if(typeof t=="object"&&t&&typeof t.status=="number"&&typeof t.statusText=="string"&&typeof t.location=="string"&&typeof t.reloadDocument=="boolean"&&typeof t.replace=="boolean")return t}catch{}}function Ph(e){if(e.startsWith(`${cf}:${_h}:{`))try{let t=JSON.parse(e.slice(40));if(typeof t=="object"&&t&&typeof t.status=="number"&&typeof t.statusText=="string")return new vh(t.status,t.statusText,t.data)}catch{}}function Nh(e,{relative:t}={}){G(Cr(),"useHref() may be used only in the context of a component.");let{basename:n,navigator:r}=y.useContext(Ie),{hash:l,pathname:i,search:o}=Rr(e,{relative:t}),u=i;return n!=="/"&&(u=i==="/"?n:Je([n,i])),r.createHref({pathname:u,search:o,hash:l})}function Cr(){return y.useContext(Er)!=null}function dt(){return G(Cr(),"useLocation() may be used only in the context of a component."),y.useContext(Er).location}var ff="You should call navigate() in a React.useEffect(), not when your component is first rendered.";function df(e){y.useContext(Ie).static||y.useLayoutEffect(e)}function 
Lh(){let{isDataRoute:e}=y.useContext(qe);return e?Qh():Th()}function Th(){G(Cr(),"useNavigate() may be used only in the context of a component.");let e=y.useContext(Pn),{basename:t,navigator:n}=y.useContext(Ie),{matches:r}=y.useContext(qe),{pathname:l}=dt(),i=JSON.stringify(lf(r)),o=y.useRef(!1);return df(()=>{o.current=!0}),y.useCallback((a,s={})=>{if(Ze(o.current,ff),!o.current)return;if(typeof a=="number"){n.go(a);return}let h=gu(a,JSON.parse(i),l,s.relative==="path");e==null&&t!=="/"&&(h.pathname=h.pathname==="/"?t:Je([t,h.pathname])),(s.replace?n.replace:n.push)(h,s.state,s)},[t,n,i,l,e])}var zh=y.createContext(null);function Oh(e){let t=y.useContext(qe).outlet;return y.useMemo(()=>t&&y.createElement(zh.Provider,{value:e},t),[t,e])}function Rr(e,{relative:t}={}){let{matches:n}=y.useContext(qe),{pathname:r}=dt(),l=JSON.stringify(lf(n));return y.useMemo(()=>gu(e,JSON.parse(l),r,t==="path"),[e,l,r,t])}function Ih(e,t){return pf(e,t)}function pf(e,t,n){var k;G(Cr(),"useRoutes() may be used only in the context of a component.");let{navigator:r}=y.useContext(Ie),{matches:l}=y.useContext(qe),i=l[l.length-1],o=i?i.params:{},u=i?i.pathname:"/",a=i?i.pathnameBase:"/",s=i&&i.route;{let f=s&&s.path||"";mf(u,!s||f.endsWith("*")||f.endsWith("*?"),`You rendered descendant (or called \`useRoutes()\`) at "${u}" (under ) but the parent route path has no trailing "*". This means if you navigate deeper, the parent won't match anymore and therefore the child routes will never render. - -Please change the parent to .`)}let h=dt(),d;if(t){let f=typeof t=="string"?jn(t):t;G(a==="/"||((k=f.pathname)==null?void 0:k.startsWith(a)),`When overriding the location using \`\` or \`useRoutes(routes, location)\`, the location pathname must begin with the portion of the URL pathname that was matched by all parent routes. 
diff --git a/radioshaq/web-interface/src/components/Layout.tsx b/radioshaq/web-interface/src/components/Layout.tsx
index 57ddc27..b815c05 100644
--- a/radioshaq/web-interface/src/components/Layout.tsx
+++ b/radioshaq/web-interface/src/components/Layout.tsx
@@ -1,23 +1,32 @@
 import { Link, Outlet, useLocation } from 'react-router-dom';
+import { useTranslation } from 'react-i18next';
 import { ApiStatus } from './ApiStatus';
+import { EmergencyNotifier } from './EmergencyNotifier';
+import { SUPPORTED_LANGUAGES, type SupportedLanguageCode } from '../i18n';

-const nav = [
-  { to: '/', label: 'Audio' },
-  { to: '/callsigns', label: 'Callsigns' },
-  { to: '/messages', label: 'Messages' },
-  { to: '/transcripts', label: 'Transcripts' },
-  { to: '/radio', label: 'Radio' },
-  { to: '/settings', label: 'Settings' },
+const navPaths = [
+  { to: '/', key: 'audio' as const },
+  { to: '/emergency', key: 'emergency' as const },
+  { to: '/callsigns', key: 'callsigns' as const },
+  { to: '/messages', key: 'messages' as const },
+  { to: '/transcripts', key: 'transcripts' as const },
+  { to: '/radio', key: 'radio' as const },
+  { to: '/map', key: 'map' as const },
+  { to: '/settings', key: 'settings' as const },
 ];

 export function Layout() {
   const location = useLocation();
+  const { t, i18n } = useTranslation();
+  const currentLang = i18n.language as SupportedLanguageCode;
+
   return (
+
diff --git a/radioshaq/web-interface/src/components/LicenseGate.tsx b/radioshaq/web-interface/src/components/LicenseGate.tsx
index 69d6668..771a321 100644
--- a/radioshaq/web-interface/src/components/LicenseGate.tsx
+++ b/radioshaq/web-interface/src/components/LicenseGate.tsx
@@ -1,8 +1,11 @@
+import { Trans, useTranslation } from 'react-i18next';
+
 type LicenseGateProps = {
   onAccept: () => void;
 };

 export function LicenseGate({ onAccept }: LicenseGateProps) {
+  const { t } = useTranslation();
   return (
-

License Acceptance Required

+

{t('license.title')}

- RadioShaq is licensed under GPL-2.0-only. You must - accept this license before using the official web interface. + }} />

- Review the full license text in{' '} + {t('license.review')}{' '} - I Accept GPL-2.0-only + {t('license.accept')}

diff --git a/radioshaq/web-interface/src/components/audio/VADVisualizer.tsx b/radioshaq/web-interface/src/components/audio/VADVisualizer.tsx
index 1a959b9..c3d1f9b 100644
--- a/radioshaq/web-interface/src/components/audio/VADVisualizer.tsx
+++ b/radioshaq/web-interface/src/components/audio/VADVisualizer.tsx
@@ -1,4 +1,5 @@
 import { useEffect, useRef, useState } from 'react';
+import { useTranslation } from 'react-i18next';
 import { connectMetricsWebSocket } from '../../services/radioshaqApi';
 import type { AudioMetrics } from '../../types/audio';

@@ -11,6 +12,7 @@ interface VADVisualizerProps {
 }

 export function VADVisualizer({ sessionId }: VADVisualizerProps) {
+  const { t } = useTranslation();
   const [metrics, setMetrics] = useState<AudioMetrics | null>(null);
   const [connected, setConnected] = useState(false);
   const reconnectDelay = useRef(RECONNECT_DELAY_MS);
@@ -57,16 +59,23 @@ export function VADVisualizer({ sessionId }: VADVisualizerProps) {
     };
   }, [sessionId]);

+  const isPlaceholder = metrics?.placeholder === true || (metrics?.type === 'heartbeat' && metrics?.state === 'idle' && metrics?.snr_db == null);
+
   return (
- WebSocket: {connected ? 'connected' : 'disconnected'} + {t('audio.vadStatusWebSocket')}: {connected ? t('audio.vadConnected') : t('audio.vadDisconnected')}
- {metrics && ( + {metrics && !isPlaceholder && (
- VAD: {metrics.vad_active ? 'active' : 'idle'} - {metrics.snr_db != null && SNR: {metrics.snr_db.toFixed(1)} dB} - {metrics.state && State: {metrics.state}} + {t('audio.vadLabel')}: {metrics.vad_active ? t('audio.vadActive') : t('audio.vadIdle')} + {metrics.snr_db != null && {t('audio.snrLabel')}: {metrics.snr_db.toFixed(1)} dB} + {metrics.state && {t('audio.stateLabel')}: {metrics.state}} +
+ )} + {metrics && isPlaceholder && ( +
+ {t('audio.vadPlaceholder')}
)}
diff --git a/radioshaq/web-interface/src/components/maps/FieldMapPanel.tsx b/radioshaq/web-interface/src/components/maps/FieldMapPanel.tsx
new file mode 100644
index 0000000..f56bfad
--- /dev/null
+++ b/radioshaq/web-interface/src/components/maps/FieldMapPanel.tsx
@@ -0,0 +1,196 @@
+import React, { useCallback, useEffect, useState } from 'react';
+import { useTranslation } from 'react-i18next';
+import {
+  getOperatorLocation,
+  getOperatorsNearby,
+  setOperatorLocation,
+  type OperatorLocation,
+} from '../../services/radioshaqApi';
+import { escapeHtml } from '../../utils/escapeHtml';
+import { OperatorMap, type OperatorMapMarker } from './OperatorMap';
+import { getDefaultMapCenter } from '../../maps/mapSourceConfig';
+
+const DEFAULT_CENTER = getDefaultMapCenter();
+const FIELD_RADIUS_METERS = 100000;
+
+function operatorToMarker(op: OperatorLocation, index: number): OperatorMapMarker {
+  const dist =
+    op.distance_meters != null
+      ? `${(op.distance_meters / 1000).toFixed(1)} km`
+      : '';
+  const lastSeen = op.last_seen_at ?? op.timestamp ?? '—';
+  return {
+    id: `op-${op.id ?? index}-${op.callsign}`,
+    position: { lat: op.latitude, lng: op.longitude },
+    label: op.callsign,
+    infoHtml: `
+
+ ${escapeHtml(op.callsign)} + ${dist ? `
${escapeHtml(dist)}` : ''} +
Last seen: ${escapeHtml(String(lastSeen))} +
+ `, + }; +} + +export interface FieldMapPanelProps { + /** Station callsign to center on and to update location for. If not set, user can type it. */ + stationCallsign?: string | null; + /** Height of the map area */ + height?: number | string; +} + +export function FieldMapPanel({ stationCallsign: propCallsign, height = 360 }: FieldMapPanelProps) { + const { t } = useTranslation(); + const [stationCallsign, setStationCallsign] = useState(propCallsign ?? ''); + const [center, setCenter] = useState(DEFAULT_CENTER); + const [markers, setMarkers] = useState([]); + const [loading, setLoading] = useState(false); + const [error, setError] = useState(null); + const [updateLat, setUpdateLat] = useState(''); + const [updateLng, setUpdateLng] = useState(''); + const [updating, setUpdating] = useState(false); + const [updateSuccess, setUpdateSuccess] = useState(false); + + const effectiveCallsign = (propCallsign ?? stationCallsign).trim().toUpperCase() || null; + + const fetchForCallsign = useCallback( + async (callsign: string) => { + if (!callsign) return; + setLoading(true); + setError(null); + try { + const loc = await getOperatorLocation(callsign); + setCenter({ lat: loc.latitude, lng: loc.longitude }); + const res = await getOperatorsNearby({ + latitude: loc.latitude, + longitude: loc.longitude, + radius_meters: FIELD_RADIUS_METERS, + recent_hours: 168, + max_results: 100, + }); + const stationMarker: OperatorMapMarker = { + id: `station-${loc.callsign}`, + position: { lat: loc.latitude, lng: loc.longitude }, + label: loc.callsign, + infoHtml: `
${escapeHtml(loc.callsign)} (this station)
`, + }; + const others = res.operators + .filter((o) => (o as OperatorLocation).callsign !== callsign && o.latitude != null && o.longitude != null) + .map((o, i) => operatorToMarker(o as OperatorLocation, i)); + setMarkers([stationMarker, ...others]); + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failedToLoad')); + setMarkers([]); + } finally { + setLoading(false); + } + }, + [t] + ); + + useEffect(() => { + if (effectiveCallsign) fetchForCallsign(effectiveCallsign); + else setMarkers([]); + }, [effectiveCallsign, fetchForCallsign]); + + const handleUpdateLocation = async (e: React.FormEvent) => { + e.preventDefault(); + const cs = effectiveCallsign ?? (propCallsign ?? stationCallsign).trim().toUpperCase(); + if (!cs) { + setError('Enter a callsign to update location.'); + return; + } + const lat = parseFloat(updateLat); + const lng = parseFloat(updateLng); + if (Number.isNaN(lat) || lat < -90 || lat > 90) { + setError('Latitude must be between -90 and 90.'); + return; + } + if (Number.isNaN(lng) || lng < -180 || lng > 180) { + setError('Longitude must be between -180 and 180.'); + return; + } + setUpdating(true); + setError(null); + setUpdateSuccess(false); + try { + await setOperatorLocation({ callsign: cs, latitude: lat, longitude: lng }); + setUpdateSuccess(true); + setCenter({ lat, lng }); + await fetchForCallsign(cs); + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failed')); + } finally { + setUpdating(false); + } + }; + + return ( +
+

{t('map.fieldMapTitle')}

+ {error && ( +

+ {error} +

+ )} + {updateSuccess && ( +

{t('map.locationUpdated')}

+ )} + + {!propCallsign && ( +
+ + setStationCallsign(e.target.value)} + placeholder="e.g. K5ABC" + maxLength={10} + style={{ padding: '0.4rem', width: 120 }} + /> +
+ )} + + + {loading &&

{t('common.loading')}

} + + + + + + +
+  );
+}
diff --git a/radioshaq/web-interface/src/components/maps/OperatorMap.tsx b/radioshaq/web-interface/src/components/maps/OperatorMap.tsx
new file mode 100644
index 0000000..f71ed11
--- /dev/null
+++ b/radioshaq/web-interface/src/components/maps/OperatorMap.tsx
@@ -0,0 +1,225 @@
+import { useEffect, useRef, useCallback, useState } from 'react';
+
+import { getMapProvider } from '../../maps/mapSourceConfig';
+import { loadGoogleMaps } from '../../maps/googleMapsLoader';
+import { OperatorMapLeaflet } from './OperatorMapLeaflet';
+
+export interface OperatorMapMarker {
+  id: string;
+  position: { lat: number; lng: number };
+  label?: string;
+  subtitle?: string;
+  infoHtml?: string;
+  iconUrl?: string;
+  color?: string;
+}
+
+export interface OperatorMapProps {
+  center: { lat: number; lng: number };
+  zoom: number;
+  markers: OperatorMapMarker[];
+  height?: number | string;
+  className?: string;
+  /** When provider is OSM, optional tile source id. */
+  tileSourceId?: string;
+}
+
+const DEFAULT_HEIGHT = 480;
+
+function getGoogleMarkerIcon(
+  google: typeof globalThis.google,
+  marker: OperatorMapMarker
+): string | google.maps.Symbol | undefined {
+  if (marker.iconUrl) {
+    return marker.iconUrl;
+  }
+  if (!marker.color) {
+    return undefined;
+  }
+  return {
+    path: google.maps.SymbolPath.CIRCLE,
+    scale: 8,
+    fillColor: marker.color,
+    fillOpacity: 0.9,
+    strokeColor: '#fff',
+    strokeWeight: 1.5,
+  };
+}
+
+/**
+ * Unified map: branches on getMapProvider(). Renders Google or Leaflet (OSM) implementation.
+ */
+export function OperatorMap(props: OperatorMapProps) {
+  const provider = getMapProvider();
+  if (provider === 'osm') {
+    return <OperatorMapLeaflet {...props} />;
+  }
+  return <OperatorMapGoogle {...props} />;
+}
+
+/**
+ * Google Maps implementation. Requires VITE_GOOGLE_MAPS_API_KEY.
+ */
+function OperatorMapGoogle({
+  center,
+  zoom,
+  markers,
+  height = DEFAULT_HEIGHT,
+  className = '',
+}: OperatorMapProps) {
+  const containerRef = useRef<HTMLDivElement | null>(null);
+  const mapRef = useRef<google.maps.Map | null>(null);
+  const markersRef = useRef<google.maps.Marker[]>([]);
+  const infoWindowRef = useRef<google.maps.InfoWindow | null>(null);
+  const centerRef = useRef(center);
+  const zoomRef = useRef(zoom);
+  const [mapError, setMapError] = useState<string | null>(null);
+  const [mapReady, setMapReady] = useState(false);
+
+  const clearMarkers = useCallback(() => {
+    markersRef.current.forEach((m) => m.setMap(null));
+    markersRef.current = [];
+    if (infoWindowRef.current) {
+      infoWindowRef.current.close();
+    }
+  }, []);
+
+  useEffect(() => {
+    centerRef.current = center;
+    zoomRef.current = zoom;
+  }, [center, zoom]);
+
+  useEffect(() => {
+    if (!containerRef.current) return;
+    setMapError(null);
+    setMapReady(false);
+    let cancelled = false;
+    loadGoogleMaps()
+      .then((google) => {
+        if (cancelled || !containerRef.current) return;
+        try {
+          const latestCenter = centerRef.current;
+          const latestZoom = zoomRef.current;
+          const map = new google.maps.Map(containerRef.current, {
+            center: { lat: latestCenter.lat, lng: latestCenter.lng },
+            zoom: latestZoom,
+            mapTypeControl: true,
+            fullscreenControl: true,
+            streetViewControl: true,
+            zoomControl: true,
+          });
+          mapRef.current = map;
+          infoWindowRef.current = new google.maps.InfoWindow();
+          if (!cancelled) setMapReady(true);
+        } catch (err) {
+          if (!cancelled) {
+            const message = err instanceof Error ? err.message : String(err);
+            setMapError(message || 'Map failed to load.');
+          }
+        }
+      })
+      .catch((err) => {
+        if (!cancelled) {
+          const message = err?.message ?? String(err);
+          setMapError(message || 'Map failed to load.');
+        }
+      });
+    return () => {
+      cancelled = true;
+      clearMarkers();
+      mapRef.current = null;
+      infoWindowRef.current = null;
+    };
+  }, []);
+
+  useEffect(() => {
+    const map = mapRef.current;
+    if (!map || !mapReady) return;
+    map.setCenter({ lat: center.lat, lng: center.lng });
+    map.setZoom(zoom);
+  }, [center.lat, center.lng, zoom, mapReady]);
+
+  useEffect(() => {
+    const map = mapRef.current;
+    if (!map || !mapReady) return;
+    clearMarkers();
+    loadGoogleMaps().then((google) => {
+      if (!mapRef.current || !infoWindowRef.current) return;
+      markers.forEach((m) => {
+        const marker = new google.maps.Marker({
+          position: { lat: m.position.lat, lng: m.position.lng },
+          map: mapRef.current!,
+          title: m.label ?? m.id,
+          label: m.label ? { text: m.label, color: '#000' } : undefined,
+          icon: getGoogleMarkerIcon(google, m),
+        });
+        if (m.infoHtml) {
+          marker.addListener('click', () => {
+            infoWindowRef.current?.setContent(m.infoHtml!);
+            infoWindowRef.current?.open(mapRef.current!, marker);
+          });
+        }
+        markersRef.current.push(marker);
+      });
+    });
+  }, [markers, clearMarkers, mapReady]);
+
+  const heightStyle = typeof height === 'number' ? `${height}px` : height;
+
+  return (
+ {!mapReady && !mapError && ( +
+ Loading map… +
+ )} + {mapError && ( +
+ Map unavailable +

{mapError}

+

+ Set VITE_GOOGLE_MAPS_API_KEY in web-interface/.env and restart the dev server (npm run dev). +

+
+ )} +
+
+  );
+}
diff --git a/radioshaq/web-interface/src/components/maps/OperatorMapLeaflet.tsx b/radioshaq/web-interface/src/components/maps/OperatorMapLeaflet.tsx
new file mode 100644
index 0000000..b1d77fc
--- /dev/null
+++ b/radioshaq/web-interface/src/components/maps/OperatorMapLeaflet.tsx
@@ -0,0 +1,88 @@
+import { useEffect, useMemo } from 'react';
+import { MapContainer, TileLayer, Marker, Popup, useMap } from 'react-leaflet';
+import L from 'leaflet';
+import DOMPurify from 'dompurify';
+import markerIcon from 'leaflet/dist/images/marker-icon.png';
+import markerIcon2x from 'leaflet/dist/images/marker-icon-2x.png';
+import markerShadow from 'leaflet/dist/images/marker-shadow.png';
+import { getTileLayerProps } from '../../maps/mapSourceConfig';
+import type { OperatorMapProps } from './OperatorMap';
+
+const DEFAULT_HEIGHT = 480;
+
+function ChangeView({ center, zoom }: { center: { lat: number; lng: number }; zoom: number }) {
+  const map = useMap();
+  useEffect(() => {
+    map.setView([center.lat, center.lng], zoom);
+  }, [map, center.lat, center.lng, zoom]);
+  return null;
+}
+
+const defaultIcon = L.icon({
+  iconUrl: markerIcon,
+  iconRetinaUrl: markerIcon2x,
+  shadowUrl: markerShadow,
+  iconSize: [25, 41],
+  iconAnchor: [12, 41],
+  popupAnchor: [1, -34],
+  shadowSize: [41, 41],
+});
+
+/**
+ * Leaflet-based OperatorMap for OSM provider. Uses tile layer from mapSourceConfig.
+ */
+export function OperatorMapLeaflet({
+  center,
+  zoom,
+  markers,
+  height = DEFAULT_HEIGHT,
+  className = '',
+  tileSourceId,
+}: OperatorMapProps) {
+  const tileProps = useMemo(() => getTileLayerProps(tileSourceId), [tileSourceId]);
+  const heightStyle = typeof height === 'number' ? `${height}px` : height;
+
+  return (
+ + + + {markers.map((m) => ( + + + {m.infoHtml ? ( +
+ ) : ( + {m.label ?? m.id} + )} + + + ))} + +
+  );
+}
diff --git a/radioshaq/web-interface/src/components/maps/TranscriptMapModal.tsx b/radioshaq/web-interface/src/components/maps/TranscriptMapModal.tsx
new file mode 100644
index 0000000..755b468
--- /dev/null
+++ b/radioshaq/web-interface/src/components/maps/TranscriptMapModal.tsx
@@ -0,0 +1,151 @@
+import { useCallback, useEffect, useState } from 'react';
+import { useTranslation } from 'react-i18next';
+import { getOperatorLocation, type OperatorLocation } from '../../services/radioshaqApi';
+import { escapeHtml } from '../../utils/escapeHtml';
+import { OperatorMap, type OperatorMapMarker } from './OperatorMap';
+import { getDefaultMapCenter } from '../../maps/mapSourceConfig';
+import type { TranscriptItem } from '../../services/radioshaqApi';
+
+function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
+  const R = 6371;
+  const dLat = ((lat2 - lat1) * Math.PI) / 180;
+  const dLon = ((lon2 - lon1) * Math.PI) / 180;
+  const a =
+    Math.sin(dLat / 2) * Math.sin(dLat / 2) +
+    Math.cos((lat1 * Math.PI) / 180) *
+      Math.cos((lat2 * Math.PI) / 180) *
+      Math.sin(dLon / 2) *
+      Math.sin(dLon / 2);
+  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
+  return R * c;
+}
+
+export interface TranscriptMapModalProps {
+  transcript: TranscriptItem | null;
+  onClose: () => void;
+}
+
+export function TranscriptMapModal({ transcript, onClose }: TranscriptMapModalProps) {
+  const { t } = useTranslation();
+  const [markers, setMarkers] = useState<OperatorMapMarker[]>([]);
+  const [center, setCenter] = useState(getDefaultMapCenter);
+  const [zoom, setZoom] = useState(4);
+  const [loading, setLoading] = useState(false);
+  const [error, setError] = useState<string | null>(null);
+  const [distanceKm, setDistanceKm] = useState<number | null>(null);
+
+  const fetchLocations = useCallback(async (src: string, dest: string | undefined) => {
+    setLoading(true);
+    setError(null);
+    setMarkers([]);
+    setDistanceKm(null);
+    try {
+      const callsigns = dest && dest !== src ? [src, dest] : [src];
+      const locations: { callsign: string; loc: OperatorLocation }[] = [];
+      for (const cs of callsigns) {
+        try {
+          const loc = await getOperatorLocation(cs);
+          locations.push({ callsign: cs, loc });
+        } catch {
+          // skip if location not found for this callsign
+        }
+      }
+      if (locations.length === 0) {
+        setError(t('map.noLocationsForCallsigns'));
+        return;
+      }
+      const ms: OperatorMapMarker[] = locations.map(({ callsign, loc }, i) => ({
+        id: `tx-${callsign}-${i}`,
+        position: { lat: loc.latitude, lng: loc.longitude },
+        label: callsign,
+        infoHtml: `
${escapeHtml(callsign)}
`, + })); + setMarkers(ms); + + const lat1 = locations[0].loc.latitude; + const lon1 = locations[0].loc.longitude; + if (locations.length === 2) { + const lat2 = locations[1].loc.latitude; + const lon2 = locations[1].loc.longitude; + setDistanceKm(haversineKm(lat1, lon1, lat2, lon2)); + setCenter({ + lat: (lat1 + lat2) / 2, + lng: (lon1 + lon2) / 2, + }); + setZoom(5); + } else { + setCenter({ lat: lat1, lng: lon1 }); + setZoom(8); + } + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failedToLoad')); + } finally { + setLoading(false); + } + }, [t]); + + useEffect(() => { + if (!transcript) return; + const src = (transcript.source_callsign ?? '').trim().toUpperCase(); + const dest = (transcript.destination_callsign ?? '').trim().toUpperCase() || undefined; + if (!src) { + setError(t('map.noSourceCallsign')); + setLoading(false); + return; + } + fetchLocations(src, dest); + }, [transcript, fetchLocations]); + + if (!transcript) return null; + + return ( +
+
e.stopPropagation()} + > +
+

{t('map.viewOnMap')}

+ +
+ {loading &&

{t('common.loading')}

} + {error &&

{error}

} + {distanceKm != null && ( +

+ {t('map.distanceKm', { km: distanceKm.toFixed(1) })} +

+ )} + {!loading && markers.length > 0 && ( +
+ +
+ )} +
+
+  );
+}
diff --git a/radioshaq/web-interface/src/features/audio/AudioConfigPage.tsx b/radioshaq/web-interface/src/features/audio/AudioConfigPage.tsx
index 7956e76..a3e68c7 100644
--- a/radioshaq/web-interface/src/features/audio/AudioConfigPage.tsx
+++ b/radioshaq/web-interface/src/features/audio/AudioConfigPage.tsx
@@ -1,4 +1,5 @@
 import { useEffect, useState } from 'react';
+import { useTranslation } from 'react-i18next';
 import { AudioActivationMode, ResponseMode } from '../../types/audio';
 import { ResponseModeSelector } from '../../components/audio/ResponseModeSelector';
 import { ConfirmationQueue } from '../../components/audio/ConfirmationQueue';
@@ -9,10 +10,19 @@ import {
   listAudioDevices,
   listPendingResponses,
 } from '../../services/radioshaqApi';
-import type { AudioConfig, PendingResponse } from '../../types/audio';
+import type { AudioConfigResponse } from '../../services/radioshaqApi';
+import type { PendingResponse } from '../../types/audio';
+
+const ASR_LANGUAGE_OPTIONS = [
+  { value: 'auto', labelKey: 'audio.asrLanguageAuto' },
+  { value: 'en', labelKey: 'audio.languageEn' },
+  { value: 'fr', labelKey: 'audio.languageFr' },
+  { value: 'es', labelKey: 'audio.languageEs' },
+] as const;

 export function AudioConfigPage() {
-  const [config, setConfig] = useState<AudioConfig | null>(null);
+  const { t } = useTranslation();
+  const [config, setConfig] = useState<AudioConfigResponse | null>(null);
   const [pending, setPending] = useState<PendingResponse[]>([]);
   const [loading, setLoading] = useState(true);
   const [error, setError] = useState<string | null>(null);
@@ -22,7 +32,7 @@
       const c = await getAudioConfig();
       setConfig(c);
     } catch (e) {
-      setError(e instanceof Error ? e.message : 'Failed to load config');
+      setError(e instanceof Error ? e.message : t('common.failedToLoad'));
     }
   };
@@ -57,7 +67,7 @@
       const updated = await updateAudioConfig({ response_mode: mode });
       setConfig(updated);
     } catch (e) {
-      setError(e instanceof Error ? e.message : 'Failed to update');
+      setError(e instanceof Error ? e.message : t('common.failedToUpdate'));
     }
   };
@@ -71,7 +81,17 @@
       const updated = await updateAudioConfig(patch);
       setConfig(updated);
     } catch (e) {
-      setError(e instanceof Error ? e.message : 'Failed to update');
+      setError(e instanceof Error ? e.message : t('common.failedToUpdate'));
+    }
+  };
+
+  const handleAsrLanguageChange = async (asr_language: string) => {
+    if (!config) return;
+    try {
+      const updated = await updateAudioConfig({ asr_language });
+      setConfig(updated);
+    } catch (e) {
+      setError(e instanceof Error ? e.message : t('common.failedToUpdate'));
     }
   };
@@ -80,16 +100,42 @@
     listAudioDevices().then(setDevices).catch(() => setDevices(null));
   }, []);

-  if (loading) return

Loading…

; - if (error) return

Error: {error}

; + if (loading) return

{t('common.loading')}

; + if (error) return

{t('common.error')}: {error}

; if (!config) return null; + const showRestartNotice = config?._meta?.config_applies_after === 'restart'; + return (
-

RadioShaq Audio

+

{t('audio.title')}

+ {showRestartNotice && ( +

+ {t('common.configRestartNotice')} +

+ )} + +
+

{t('audio.asrLanguage')}

+

+ +

+
-

Response mode

+

{t('audio.responseMode')}

-

Audio activation

+

{t('audio.audioActivation')}

{config.audio_activation_enabled && ( <>

@@ -152,19 +198,19 @@ export function AudioConfigPage() { {devices && (
-

Audio devices

-

Inputs: {devices.input_devices?.length ?? 0} — Outputs: {devices.output_devices?.length ?? 0}

+

{t('audio.audioDevices')}

+

{t('audio.inputs')}: {devices.input_devices?.length ?? 0} — {t('audio.outputs')}: {devices.output_devices?.length ?? 0}

)}
-

VAD / metrics

+

{t('audio.vadMetrics')}

{(config.response_mode === 'confirm_first' || config.response_mode === 'confirm_timeout') && (
-

Confirmation queue

+

{t('audio.confirmationQueue')}

([]); const [loading, setLoading] = useState(true); const [error, setError] = useState(null); @@ -15,6 +19,12 @@ export function CallsignsPage() { const [submitting, setSubmitting] = useState(false); const [audioFile, setAudioFile] = useState(null); const [audioCallsign, setAudioCallsign] = useState(''); + const [setLocationCallsign, setSetLocationCallsign] = useState(null); + const [setLocationLat, setSetLocationLat] = useState(''); + const [setLocationLng, setSetLocationLng] = useState(''); + const [setLocationSubmitting, setSetLocationSubmitting] = useState(false); + const [setLocationError, setSetLocationError] = useState(null); + const [setLocationSuccess, setSetLocationSuccess] = useState(false); const load = async (silent = false) => { if (!silent) setLoading(true); @@ -23,7 +33,7 @@ export function CallsignsPage() { const res = await listCallsigns(); setList(res.registered ?? []); } catch (e) { - if (!silent) setError(e instanceof Error ? e.message : 'Failed to load'); + if (!silent) setError(e instanceof Error ? e.message : t('callsigns.failedToLoad')); } finally { if (!silent) setLoading(false); } @@ -50,7 +60,7 @@ export function CallsignsPage() { setAddCallsign(''); await load(); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed to add'); + setError(e instanceof Error ? e.message : t('callsigns.failedToAdd')); } finally { setSubmitting(false); } @@ -64,14 +74,58 @@ export function CallsignsPage() { await unregisterCallsign(cs); await load(); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed to remove'); + setError(e instanceof Error ? e.message : t('callsigns.failedToRemove')); + } + }; + + const handleSetLocation = async (e: React.FormEvent) => { + e.preventDefault(); + const cs = setLocationCallsign?.trim().toUpperCase(); + if (!cs) return; + const lat = parseFloat(setLocationLat); + const lng = parseFloat(setLocationLng); + if (Number.isNaN(lat) || lat < -90 || lat > 90) { + setSetLocationError(t('map.latInvalid') ?? 
'Latitude must be between -90 and 90.'); + return; + } + if (Number.isNaN(lng) || lng < -180 || lng > 180) { + setSetLocationError(t('map.lngInvalid') ?? 'Longitude must be between -180 and 180.'); + return; + } + setSetLocationSubmitting(true); + setSetLocationError(null); + setSetLocationSuccess(false); + try { + await setOperatorLocation({ callsign: cs, latitude: lat, longitude: lng }); + setSetLocationSuccess(true); + setTimeout(() => { + setSetLocationCallsign(null); + setSetLocationLat(''); + setSetLocationLng(''); + setSetLocationSuccess(false); + }, 1500); + } catch (err) { + setSetLocationError(err instanceof Error ? err.message : t('common.failed')); + } finally { + setSetLocationSubmitting(false); + } + }; + + const openSetLocation = (callsign: string) => { + const cs = typeof callsign === 'string' ? callsign : (callsign as CallsignEntry).callsign; + if (cs) { + setSetLocationCallsign(String(cs)); + setSetLocationLat(''); + setSetLocationLng(''); + setSetLocationError(null); + setSetLocationSuccess(false); } }; const handleRegisterFromAudio = async (e: React.FormEvent) => { e.preventDefault(); if (!audioFile) { - setError('Select an audio file'); + setError(t('callsigns.selectAudio')); return; } setSubmitting(true); @@ -82,21 +136,21 @@ export function CallsignsPage() { setAudioCallsign(''); await load(); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed to register from audio'); + setError(e instanceof Error ? e.message : t('callsigns.failedRegisterAudio')); } finally { setSubmitting(false); } }; - if (loading) return

<div>Loading…</div>;
+  if (loading) return <div>{t('common.loading')}</div>;

  return (
-      <h2>Callsigns (whitelist)</h2>
+      <h2>{t('callsigns.whitelistTitle')}</h2>
       {error && <div>{error}</div>}
-      <h3>Add callsign</h3>
+      <h3>{t('callsigns.addCallsign')}</h3>

-      <h3>Register from audio</h3>
+      <h3>{t('callsigns.registerFromAudio')}</h3>

setAudioFile(e.target.files?.[0] ?? null)} - aria-label="Audio file" + aria-label={t('callsigns.selectAudio')} /> setAudioCallsign(e.target.value)} - placeholder="Optional: confirm callsign (else from ASR)" + placeholder={t('callsigns.confirmCallsignPlaceholder')} maxLength={10} style={{ padding: '0.4rem' }} />
-      <h3>Registered ({list.length})</h3>
+      <h3>{t('callsigns.registeredCount', { count: list.length })}</h3>

-        Auto-refresh every 20s
+        {t('callsigns.autoRefresh20')}

       {list.length === 0 ? (
-        <div>No callsigns registered.</div>
+        <div>{t('callsigns.noCallsigns')}</div>
       ) : (
    {list.map((entry, i) => { @@ -154,8 +208,11 @@ export function CallsignsPage() { return (
  • {String(cs)} +
  • ); @@ -163,6 +220,93 @@ export function CallsignsPage() {
)}
+ + {setLocationCallsign && ( +
setSetLocationCallsign(null)} + > +
e.stopPropagation()} + > +

<h3>{t('map.setLocation') ?? 'Set location'} – {setLocationCallsign}</h3>

+ {setLocationError && ( +

<div>{setLocationError}</div>

+ )} + {setLocationSuccess && ( +

<div>{t('map.locationUpdated')}</div>

+ )} +
+ + + {setLocationLat && setLocationLng && !Number.isNaN(parseFloat(setLocationLat)) && !Number.isNaN(parseFloat(setLocationLng)) && ( +
+ +
+ )} +
+ + +
+
+
+
+ )}
); } diff --git a/radioshaq/web-interface/src/features/emergency/EmergencyPage.tsx b/radioshaq/web-interface/src/features/emergency/EmergencyPage.tsx new file mode 100644 index 0000000..9372839 --- /dev/null +++ b/radioshaq/web-interface/src/features/emergency/EmergencyPage.tsx @@ -0,0 +1,248 @@ +import { useCallback, useEffect, useState } from 'react'; +import { useTranslation } from 'react-i18next'; +import { + getEmergencyPendingCount, + listEmergencyEvents, + listEmergencyEventsWithLocation, + approveEmergencyEvent, + rejectEmergencyEvent, + type EmergencyEvent as EmergencyEventType, + type EmergencyEventLocation, +} from '../../services/radioshaqApi'; +import { OperatorMap, type OperatorMapMarker } from '../../components/maps/OperatorMap'; +import { getDefaultMapCenter } from '../../maps/mapSourceConfig'; +import { escapeHtml } from '../../utils/escapeHtml'; + +const POLL_INTERVAL_MS = 12_000; + +function emergencyToMarker(ev: EmergencyEventLocation): OperatorMapMarker { + return { + id: `ev-${ev.id}`, + position: { lat: ev.latitude, lng: ev.longitude }, + label: ev.initiator_callsign ?? `#${ev.id}`, + color: ev.status === 'pending' ? '#c62828' : ev.status === 'approved' ? '#2e7d32' : '#666', + infoHtml: ` +
+ ${escapeHtml(ev.initiator_callsign ?? '')} → ${escapeHtml(ev.target_callsign ?? '—')} +
${escapeHtml(ev.status ?? '')} · ${escapeHtml(ev.created_at ?? '')} +
+ `, + }; +} + +export function EmergencyPage() { + const { t } = useTranslation(); + const [events, setEvents] = useState([]); + const [pendingCount, setPendingCount] = useState(0); + const [loading, setLoading] = useState(true); + const [error, setError] = useState(null); + const [actionLoadingSet, setActionLoadingSet] = useState>(new Set()); + const [notes, setNotes] = useState>({}); + const [emergencyMapMarkers, setEmergencyMapMarkers] = useState([]); + const [emergencyMapCenter, setEmergencyMapCenter] = useState(getDefaultMapCenter); + const [selectedEventId, setSelectedEventId] = useState(null); + + const load = useCallback(async () => { + try { + const [countRes, listRes] = await Promise.all([ + getEmergencyPendingCount(), + listEmergencyEvents('pending'), + ]); + const count = countRes.count; + setPendingCount(count); + setEvents(listRes.events ?? []); + setError(null); + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failedToLoad')); + setEvents([]); + setPendingCount(0); + } finally { + setLoading(false); + } + }, [t]); + + useEffect(() => { + load(); + const interval = setInterval(load, POLL_INTERVAL_MS); + return () => clearInterval(interval); + }, [load]); + + useEffect(() => { + listEmergencyEventsWithLocation({ limit: 50 }) + .then((r) => setEmergencyMapMarkers(r.events.map(emergencyToMarker))) + .catch(() => setEmergencyMapMarkers([])); + }, []); + + const focusMapOnEvent = (eventId: number) => { + const marker = emergencyMapMarkers.find((m) => m.id === `ev-${eventId}`); + if (marker) { + setSelectedEventId(eventId); + setEmergencyMapCenter(marker.position); + } + }; + + const handleApprove = async (eventId: number) => { + setActionLoadingSet((prev) => { + const next = new Set(prev); + next.add(eventId); + return next; + }); + try { + await approveEmergencyEvent(eventId, notes[eventId]); + setNotes((prev) => ({ ...prev, [eventId]: '' })); + await load(); + } catch (e) { + setError(e instanceof Error ? 
e.message : t('common.failedToUpdate')); + } finally { + setActionLoadingSet((prev) => { + const next = new Set(prev); + next.delete(eventId); + return next; + }); + } + }; + + const handleReject = async (eventId: number) => { + setActionLoadingSet((prev) => { + const next = new Set(prev); + next.add(eventId); + return next; + }); + try { + await rejectEmergencyEvent(eventId, notes[eventId]); + setNotes((prev) => ({ ...prev, [eventId]: '' })); + await load(); + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failedToUpdate')); + } finally { + setActionLoadingSet((prev) => { + const next = new Set(prev); + next.delete(eventId); + return next; + }); + } + }; + + const requestNotificationPermission = async () => { + if ('Notification' in window && Notification.permission === 'default') { + void Notification.requestPermission(); + } + }; + + if (loading && events.length === 0) { + return

<div>{t('common.loading')}</div>

; + } + + return ( +
+
+

<h2>{t('emergency.title')}</h2>

+ + {t('emergency.pendingCount', { count: pendingCount })} + + + +
+ + {error && ( +
+ {error} +
+ )} + +

<p>{t('emergency.intro')}</p>

+ + {emergencyMapMarkers.length > 0 && ( +
+

<h3>{t('emergency.mapTitle') ?? 'Events on map'}</h3>

+ +
+ )} + + {events.length === 0 ? ( +

<div>{t('emergency.noPending')}</div>

+ ) : ( +
    + {events.map((ev) => { + const id = ev.id ?? 0; + const extra = ev.extra_data ?? {}; + const phone = extra.emergency_contact_phone ?? '—'; + const channel = extra.emergency_contact_channel ?? '—'; + const message = extra.message ?? ev.notes ?? '—'; + const loadingEv = actionLoadingSet.has(id); + + return ( +
  • +
    + #{id} {ev.initiator_callsign ?? '—'} → {ev.target_callsign ?? '—'} + {emergencyMapMarkers.some((m) => m.id === `ev-${id}`) && ( + + )} +
    +
    + {t('emergency.contact')}: {phone} ({channel}) +
    +
    + {message} +
    +
    + setNotes((p) => ({ ...p, [id]: e.target.value }))} + style={{ padding: '0.35rem 0.5rem', width: 200, maxWidth: '100%' }} + /> + + +
    +
  • + ); + })} +
+ )} + +

+ {t('emergency.autoRefresh')} +

+
+ ); +} diff --git a/radioshaq/web-interface/src/features/emergency/emergencyAlerts.ts b/radioshaq/web-interface/src/features/emergency/emergencyAlerts.ts new file mode 100644 index 0000000..d00f84e --- /dev/null +++ b/radioshaq/web-interface/src/features/emergency/emergencyAlerts.ts @@ -0,0 +1,75 @@ +/** Module-level AudioContext created on user gesture; kept alive for alert sounds. */ +let _ctx: AudioContext | null = null; + +/** + * Create (and resume) the AudioContext during a user gesture (e.g. "Enable alerts" click). + * Must be called before playEmergencyAlertSound() or the sound will not play (browser autoplay policy). + */ +export function initAudioContext(): void { + if (_ctx) return; + try { + const Ctx = window.AudioContext || (window as unknown as { webkitAudioContext: typeof AudioContext }).webkitAudioContext; + if (!Ctx) return; + _ctx = new Ctx(); + } catch { + /* ignore */ + } +} + +/** + * Request browser notification permission. Call during a user gesture (e.g. "Enable alerts" click). + * Until permission is granted, showEmergencyBrowserNotification is a no-op. + */ +export async function requestNotificationPermission(): Promise { + if (typeof window === 'undefined' || !('Notification' in window)) return; + if (Notification.permission === 'default') { + await Notification.requestPermission(); + } +} + +/** + * Play the emergency alert tone. Uses the AudioContext from initAudioContext(). + * If initAudioContext() was never called (no user gesture), the sound is silently skipped. 
+ */ +export async function playEmergencyAlertSound(): Promise { + const ctx = _ctx; + if (!ctx) return; + try { + if (ctx.state === 'suspended') { + await ctx.resume(); + } + const osc = ctx.createOscillator(); + const gain = ctx.createGain(); + osc.connect(gain); + gain.connect(ctx.destination); + osc.frequency.value = 880; + osc.type = 'sine'; + gain.gain.setValueAtTime(0.2, ctx.currentTime); + gain.gain.exponentialRampToValueAtTime(0.01, ctx.currentTime + 0.3); + osc.start(ctx.currentTime); + osc.stop(ctx.currentTime + 0.3); + const osc2 = ctx.createOscillator(); + const gain2 = ctx.createGain(); + osc2.connect(gain2); + gain2.connect(ctx.destination); + osc2.frequency.value = 880; + osc2.type = 'sine'; + gain2.gain.setValueAtTime(0.2, ctx.currentTime + 0.4); + gain2.gain.exponentialRampToValueAtTime(0.01, ctx.currentTime + 0.7); + osc2.start(ctx.currentTime + 0.4); + osc2.stop(ctx.currentTime + 0.7); + } catch { + /* ignore */ + } +} + +export function showEmergencyBrowserNotification(count: number): void { + if (typeof window === 'undefined' || !('Notification' in window)) return; + if (Notification.permission === 'granted') { + new Notification('RadioShaq – Emergency', { + body: count === 1 ? '1 pending emergency message requires your action.' 
: `${count} pending emergency messages require your action.`, + tag: 'radioshaq-emergency', + requireInteraction: true, + }); + } +} diff --git a/radioshaq/web-interface/src/features/map/MapPage.tsx b/radioshaq/web-interface/src/features/map/MapPage.tsx new file mode 100644 index 0000000..e0145bc --- /dev/null +++ b/radioshaq/web-interface/src/features/map/MapPage.tsx @@ -0,0 +1,256 @@ +import React, { useCallback, useEffect, useState } from 'react'; +import { useTranslation } from 'react-i18next'; +import { + getOperatorLocation, + getOperatorsNearby, + listEmergencyEventsWithLocation, + type OperatorLocation, + type EmergencyEventLocation, +} from '../../services/radioshaqApi'; +import { OperatorMap, type OperatorMapMarker } from '../../components/maps/OperatorMap'; +import { + getMapProvider, + setMapProvider, + getDefaultMapCenter, + getDefaultMapRadiusMeters, + getMapSources, + getActiveMapSourceId, + type MapProvider, +} from '../../maps/mapSourceConfig'; +import { isGoogleMapsConfigured } from '../../maps/googleMapsLoader'; +import { escapeHtml } from '../../utils/escapeHtml'; + +const RADII_KM = [10, 50, 200, 1000] as const; + +function radiusKmFromMeters(m: number): number { + const km = m / 1000; + return RADII_KM.find((r) => r >= km) ?? 50; +} + +function operatorToMarker(op: OperatorLocation, index: number): OperatorMapMarker { + const dist = + op.distance_meters != null + ? `${(op.distance_meters / 1000).toFixed(1)} km` + : ''; + const lastSeen = op.last_seen_at ?? op.timestamp ?? '—'; + return { + id: `op-${op.id ?? index}-${op.callsign}`, + position: { lat: op.latitude, lng: op.longitude }, + label: op.callsign, + infoHtml: ` +
+ ${escapeHtml(op.callsign)} + ${dist ? `
${escapeHtml(dist)}` : ''} +
Last seen: ${escapeHtml(String(lastSeen))} +
+ `, + }; +} + +function emergencyToMarker(ev: EmergencyEventLocation): OperatorMapMarker { + return { + id: `ev-${ev.id}`, + position: { lat: ev.latitude, lng: ev.longitude }, + label: ev.initiator_callsign ?? `#${ev.id}`, + color: ev.status === 'pending' ? '#c62828' : ev.status === 'approved' ? '#2e7d32' : '#666', + infoHtml: ` +
+ ${escapeHtml(ev.initiator_callsign ?? '')} → ${escapeHtml(ev.target_callsign ?? '—')} +
${escapeHtml(ev.status ?? '')} · ${escapeHtml(ev.created_at ?? '')} +
+ `, + }; +} + +export function MapPage() { + const { t } = useTranslation(); + const defaultCenter = getDefaultMapCenter(); + const defaultRadiusM = getDefaultMapRadiusMeters(); + const [provider, setProviderState] = useState(getMapProvider); + const [center, setCenter] = useState(defaultCenter); + const [radiusKm, setRadiusKm] = useState(radiusKmFromMeters(defaultRadiusM)); + const [markers, setMarkers] = useState([]); + const [emergencyMarkers, setEmergencyMarkers] = useState([]); + const [loading, setLoading] = useState(true); + const [error, setError] = useState(null); + const [callsignSearch, setCallsignSearch] = useState(''); + const [searching, setSearching] = useState(false); + const [tileSourceId, setTileSourceId] = useState(getActiveMapSourceId); + + const handleProviderChange = (p: MapProvider) => { + setMapProvider(p); + setProviderState(p); + }; + + const mapSources = getMapSources(); + const showTileSwitcher = provider === 'osm' && mapSources.length > 1; + const googleConfigured = isGoogleMapsConfigured(); + const showGoogleWarning = provider === 'google' && !googleConfigured; + + const fetchNearby = useCallback(async (lat: number, lng: number, radiusMeters: number) => { + setLoading(true); + setError(null); + try { + const res = await getOperatorsNearby({ + latitude: lat, + longitude: lng, + radius_meters: radiusMeters, + recent_hours: 168, + max_results: 200, + }); + setCenter({ lat: res.latitude, lng: res.longitude }); + setMarkers( + res.operators + .filter((o) => o.latitude != null && o.longitude != null) + .map((o, i) => operatorToMarker(o as OperatorLocation, i)) + ); + } catch (e) { + setError(e instanceof Error ? 
e.message : t('common.failedToLoad')); + setMarkers([]); + } finally { + setLoading(false); + } + }, [t]); + + useEffect(() => { + fetchNearby(center.lat, center.lng, radiusKm * 1000); + }, [radiusKm, fetchNearby]); + + useEffect(() => { + listEmergencyEventsWithLocation({ limit: 50 }) + .then((r) => setEmergencyMarkers(r.events.map(emergencyToMarker))) + .catch(() => setEmergencyMarkers([])); + }, []); + + const handleCenterOnCallsign = async (e: React.FormEvent) => { + e.preventDefault(); + const cs = callsignSearch.trim().toUpperCase(); + if (!cs) return; + setSearching(true); + setError(null); + try { + const loc = await getOperatorLocation(cs); + setCenter({ lat: loc.latitude, lng: loc.longitude }); + await fetchNearby(loc.latitude, loc.longitude, radiusKm * 1000); + } catch (e) { + setError(e instanceof Error ? e.message : t('common.failedToLoad')); + } finally { + setSearching(false); + } + }; + + const handleRadiusChange = (km: number) => { + setRadiusKm(km); + }; + + return ( +
+

<h2>{t('map.title')}</h2>

+ +
+ {t('map.provider')}: + + {showTileSwitcher && ( + <> + {t('map.tileSource')}: + + + )} +
+ + {showGoogleWarning && ( +

+ {t('map.notConfigured')} {t('map.switchToOsm')} +

+ )} + {error && ( +

+ {error} +

+ )} + +
+
+ setCallsignSearch(e.target.value)} + placeholder={t('map.callsignPlaceholder')} + maxLength={10} + style={{ padding: '0.4rem', width: 100 }} + aria-label={t('map.centerOnCallsign')} + /> + +
+ {t('map.radius')}: + + {loading && {t('common.loading')}} +
+ + {!showGoogleWarning && ( + = 200 ? 5 : radiusKm >= 50 ? 7 : 9} + markers={[...markers, ...emergencyMarkers]} + height={500} + tileSourceId={provider === 'osm' ? tileSourceId : undefined} + /> + )} +

+ {t('map.operatorCount', { count: markers.length })} + {emergencyMarkers.length > 0 && ` · ${emergencyMarkers.length} emergency events`} +

+
+ ); +} diff --git a/radioshaq/web-interface/src/features/messages/MessagesPage.tsx b/radioshaq/web-interface/src/features/messages/MessagesPage.tsx index 2f8c136..3f83216 100644 --- a/radioshaq/web-interface/src/features/messages/MessagesPage.tsx +++ b/radioshaq/web-interface/src/features/messages/MessagesPage.tsx @@ -1,9 +1,11 @@ import React, { useState } from 'react'; +import { useTranslation } from 'react-i18next'; import { processMessage, whitelistRequest, injectMessage, injectAndStore, relayMessage } from '../../services/radioshaqApi'; type Tab = 'process' | 'whitelist' | 'inject' | 'inject_store' | 'relay'; export function MessagesPage() { + const { t } = useTranslation(); const [tab, setTab] = useState('process'); const [error, setError] = useState(null); const [result, setResult] = useState(null); @@ -41,7 +43,7 @@ export function MessagesPage() { const res = await processMessage({ message: processText.trim() }); setResult(JSON.stringify({ success: res.success, message: res.message, task_id: res.task_id }, null, 2)); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed'); + setError(e instanceof Error ? e.message : t('common.failed')); } finally { setSubmitting(false); } @@ -61,7 +63,7 @@ export function MessagesPage() { }); setResult(JSON.stringify({ success: res.success, message: res.message, approved: res.approved }, null, 2)); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed'); + setError(e instanceof Error ? e.message : t('common.failed')); } finally { setSubmitting(false); } @@ -82,7 +84,7 @@ export function MessagesPage() { }); setResult(JSON.stringify(res, null, 2)); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed'); + setError(e instanceof Error ? e.message : t('common.failed')); } finally { setSubmitting(false); } @@ -103,7 +105,7 @@ export function MessagesPage() { }); setResult(JSON.stringify(res, null, 2)); } catch (e) { - setError(e instanceof Error ? 
e.message : 'Failed'); + setError(e instanceof Error ? e.message : t('common.failed')); } finally { setSubmitting(false); } @@ -125,60 +127,60 @@ export function MessagesPage() { }); setResult(JSON.stringify(res, null, 2)); } catch (e) { - setError(e instanceof Error ? e.message : 'Failed'); + setError(e instanceof Error ? e.message : t('common.failed')); } finally { setSubmitting(false); } }; - const tabs: { key: Tab; label: string }[] = [ - { key: 'process', label: 'Process (REACT)' }, - { key: 'whitelist', label: 'Whitelist request' }, - { key: 'inject', label: 'Inject (demo)' }, - { key: 'inject_store', label: 'Inject & store' }, - { key: 'relay', label: 'Relay (band translation)' }, + const tabs: { key: Tab; labelKey: string }[] = [ + { key: 'process', labelKey: 'messages.process' }, + { key: 'whitelist', labelKey: 'messages.whitelist' }, + { key: 'inject', labelKey: 'messages.inject' }, + { key: 'inject_store', labelKey: 'messages.injectStore' }, + { key: 'relay', labelKey: 'messages.relay' }, ]; return (
-      <h2>Messages & commands</h2>
+      <h2>{t('messages.title')}</h2>

{error &&

{error}

} {result &&
{result}
}
- {tabs.map(({ key, label }) => ( + {tabs.map(({ key, labelKey }) => ( ))}
{tab === 'process' && (
-          <h3>Process message (REACT)</h3>
+          <h3>{t('messages.process')}</h3>