Description
The Issue
After hours of attempting a self-hosted deployment on Windows/WSL 2 with Google Gemini as the LLM provider, the system remains non-functional due to a hardcoded provider-validation check and misinterpreted container networking.
- Hardcoded OpenAI Initialization Check
The Bug: Even when LLM_PROVIDER is explicitly set to GOOGLE and a valid GOOGLE_API_KEY is provided, the backend initialization script triggers an OpenAI validation "ping".
The Result: Because no OpenAI key exists, the backend throws an OpenAI authentication failed error and halts the boot process before it ever switches to the Gemini logic.
The Impact: Users are locked out of using alternative LLM providers because the "gatekeeper" logic is provider-biased.
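To illustrate what a provider-agnostic gatekeeper could look like, here is a minimal sketch that validates only the credentials for the configured provider instead of unconditionally pinging OpenAI. The function and variable names are hypothetical, not Skyvern's actual API:

```python
import os

# Hypothetical sketch of provider-gated startup validation.
# Names (validate_llm_credentials, the provider map) are illustrative.
def validate_llm_credentials() -> str:
    """Check only the API key belonging to the configured LLM_PROVIDER."""
    provider = os.environ.get("LLM_PROVIDER", "OPENAI").upper()
    key_vars = {
        "OPENAI": "OPENAI_API_KEY",
        "GOOGLE": "GOOGLE_API_KEY",
    }
    key_var = key_vars.get(provider)
    if key_var is None:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider}")
    if not os.environ.get(key_var):
        # Fail with a provider-specific message instead of an OpenAI ping.
        raise RuntimeError(f"{key_var} is required when LLM_PROVIDER={provider}")
    return provider
```

With `LLM_PROVIDER=GOOGLE` and a `GOOGLE_API_KEY` set, this passes without ever touching OpenAI credentials, which is the behavior the current boot sequence lacks.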
- The "Localhost" 403 Forbidden Loop
The Bug: The Skyvern UI (port 8080) is hardcoded to perform health/auth checks against /api/v1/internal/auth/status on 127.0.0.1.
The Result: In a Docker bridge network, the backend sees these requests as "remote" (the host's IP) and returns a 403 Forbidden.
The Impact: The UI misinterprets this security block as a "Network Error" or "Invalid API Key," creating a false-negative status that prevents browser sessions from launching.
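If the UI's API base turns out to be configurable rather than truly hardcoded, one possible workaround is to point it at the backend's Compose service name, which Docker's internal DNS resolves on the bridge network (on a bridge network, `127.0.0.1` resolves to the UI container itself). The service and variable names below are assumptions, not Skyvern's actual compose keys:

```yaml
# docker-compose.yml (fragment) -- service/variable names are assumptions
services:
  skyvern-ui:
    environment:
      # Use the backend's service name, resolvable via Docker's embedded
      # DNS, instead of 127.0.0.1 (the UI container's own loopback).
      - VITE_API_BASE_URL=http://skyvern:8000/api/v1
```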
- Vite Environment Variable Persistence
The Bug: Updates to VITE_ environment variables in docker-compose.yml are not reliably reflected in the UI container without a full --no-cache rebuild.
The Impact: The UI continues to request OPENAI_GPT4O long after the configuration has been changed to Gemini, causing a permanent authentication mismatch.
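This matches Vite's documented behavior: `VITE_`-prefixed variables are inlined into the JavaScript bundle at build time, so changing them in docker-compose.yml only updates the container's runtime environment, not the already-built bundle. A workaround is to force a clean rebuild of the UI image (the service name `skyvern-ui` is an assumption):

```shell
# Rebuild so the new VITE_ values are baked into the bundle
docker compose build --no-cache skyvern-ui
docker compose up -d --force-recreate skyvern-ui
```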
Environment Details
OS: Windows 11 / WSL 2 (Ubuntu)
Deployment: Docker Compose
Target LLM: Gemini 1.5 Flash / 2.0