# PolyglotWhisperer — API keys and configuration
# Copy to .env and fill in your keys. Shell env vars take precedence.
# LiteLLM supports any provider — add the matching key for your models.
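#
# Quick start (sketch): copy this file, then fill in the keys below.
#   cp .env.example .env
# Any value here can also be exported in the shell, which wins over .env:
#   PGW_LLM__API_MODEL=deepseek-chat pgw serve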

# ── Whisper API ──
# Any OpenAI SDK-compatible endpoint (Groq, OpenAI, custom server).
# Model name is bare (no prefix) — routing is handled by api_base.
# Groq (fastest + cheapest)
PGW_WHISPER__API_BASE=https://api.groq.com/openai/v1
PGW_WHISPER__API_KEY=gsk_...
PGW_WHISPER__API_MODEL=whisper-large-v3-turbo
# OpenAI
# PGW_WHISPER__API_BASE=https://api.openai.com/v1
# PGW_WHISPER__API_KEY=sk-...
# PGW_WHISPER__API_MODEL=whisper-1
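#
# Sanity-check your Whisper endpoint with curl (sketch — sample.mp3 is a
# hypothetical local file; assumes the vars are exported and the endpoint
# serves the OpenAI-compatible transcription route):
#   curl -s "$PGW_WHISPER__API_BASE/audio/transcriptions" \
#     -H "Authorization: Bearer $PGW_WHISPER__API_KEY" \
#     -F model="$PGW_WHISPER__API_MODEL" -F file=@sample.mp3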

# ── LLM API ──
# Any OpenAI SDK-compatible endpoint (Ollama, Groq, DeepSeek, OpenRouter).
# Ollama (local — default)
# PGW_LLM__API_BASE=http://localhost:11434/v1
# PGW_LLM__API_KEY=ollama
# PGW_LLM__API_MODEL=qwen3:8b
# DeepSeek
# PGW_LLM__API_BASE=https://api.deepseek.com/v1
# PGW_LLM__API_KEY=sk-...
# PGW_LLM__API_MODEL=deepseek-chat
# Groq
# PGW_LLM__API_BASE=https://api.groq.com/openai/v1
# PGW_LLM__API_KEY=gsk_...
# PGW_LLM__API_MODEL=llama-3.3-70b-versatile
# OpenRouter
# PGW_LLM__API_BASE=https://openrouter.ai/api/v1
# PGW_LLM__API_KEY=sk-or-v1-...
# PGW_LLM__API_MODEL=openai/gpt-oss-120b

# ── Backend selection ──
# Whisper: "local" (stable-ts, default) or "api" (LiteLLM cloud / custom server)
# PGW_WHISPER__BACKEND=api
# LLM: "local" (Ollama, default) or "api" (LiteLLM cloud / custom server)
# PGW_LLM__BACKEND=api

# ── Model overrides ──
# PGW_WHISPER__LOCAL_MODEL=large-v3-turbo
# PGW_WHISPER__API_MODEL=groq/whisper-large-v3-turbo
# PGW_LLM__LOCAL_MODEL=ollama_chat/qwen3:8b
# PGW_LLM__API_MODEL=openrouter/openai/gpt-oss-120b

# ── Directories ──
# PGW_WORKSPACE_DIR=./pgw_workspace

# ── Transcription language ──
# ISO 639-1 code; omit to auto-detect.
# PGW_WHISPER__LANGUAGE=fr

# ── Logging ──
# PGW_DEBUG=1 # --verbose
# PGW_LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
# PGW_LOG_FILE=~/pgw.log

# ── Database (web UI / multi-user) ──
#
# pgw serve persists users, sessions, workspaces, and vocab in a DB.
# Default: SQLite at <workspace>/pgw.db. Production: Postgres.
#
# Spin up local Postgres: docker compose -f docker-compose.dev.yml up -d
# PGW_DATABASE_URL=postgresql+psycopg://pgw:pgw@localhost:5432/pgw_dev
# PGW_DB_POOL_SIZE=20 # Postgres only; default 5
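#
# Verify connectivity (sketch; psql takes the plain postgresql:// scheme,
# without the +psycopg driver suffix):
#   psql postgresql://pgw:pgw@localhost:5432/pgw_dev -c 'SELECT 1'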

# ── Auth (web UI) ──
#
# Bootstrap an admin account non-interactively (Docker / CI). When set,
# `ensure_admin_from_env` creates a user with these credentials on the
# first server start where the users table is empty. Idempotent — once
# any user exists these env vars are ignored. Recommended for Docker:
# they survive `docker compose down -v` (postgres volume wipe) without
# you having to revisit the SPA's /setup page.
# PGW_ADMIN_EMAIL=admin@example.com
# PGW_ADMIN_PASSWORD=change-me-please
#
# Signs CSRF cookies + future signed URLs. Required in production —
# in dev a per-process random value is used (invalidates tokens on
# restart, which is fine for one developer).
# Generate with: python -c "import secrets; print(secrets.token_urlsafe(32))"
# PGW_SECRET_KEY=...

# ── Web UI: cookie security ──
#
# Honour `X-Forwarded-Proto` from a reverse proxy when deciding the
# Secure cookie flag. Only enable behind a trusted proxy (Caddy, Nginx,
# Traefik); otherwise a client could spoof it on a direct HTTP listen.
# PGW_TRUST_PROXY_HEADERS=1

# Force Secure on auth cookies regardless of scheme detection.
# PGW_SECURE_COOKIES=1
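#
# Example reverse-proxy line (nginx, sketch) that sets the header this
# option trusts:
#   proxy_set_header X-Forwarded-Proto $scheme;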

# ── Flashcard LLM refinement ──
#
# When a flashcard is created (via /api/flashcards or the AddCardModal),
# a BackgroundTask asks the configured LLM to enrich it: lemma, POS, a
# polished definition, an example sentence pair, and an optional
# mnemonic. Results land asynchronously and the SPA polls until
# refine_status flips to `done`. Reuses the same PGW_LLM__* config as
# subtitle translation / refinement.
#
# Default: enabled when `PGW_LLM__API_KEY` is set. Set to "0" to
# disable entirely (cards still save with their original `back` text).
# PGW_FLASHCARD_REFINE=0
#
# Mnemonics roughly double output tokens — off by default. Set to "1"
# to opt in. Quality varies a lot by model; try with one card first.
# PGW_FLASHCARD_REFINE_MNEMONIC=1
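#
# Observing refinement (sketch — host/port and request fields are
# assumptions; check the actual API schema and include your auth cookie):
#   curl -s -X POST localhost:8000/api/flashcards \
#     -H "Content-Type: application/json" -d '{"front": "chat", "back": "cat"}'
# then poll the card until its refine_status reads "done".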