Seekr is an open-source AI interviewer built with open-source LLMs via Ollama. It simulates realistic mock interviews to help users practice, prepare, and improve their interview performance—no proprietary APIs or cloud dependencies required.
- AI-driven, dynamic interview sessions
- Powered by local open-source models (via Ollama)
- Role- and domain-specific question support
- Easy to customize and extend
- Full control and data privacy (runs locally)
- Supports 10+ roles (e.g., intern, senior, CTO)
- Covers 60+ topics (e.g., React, TypeScript, AWS)
- Llama 3.3
- Llama 3.2
- Gemma 3
- Phi-4
- Mistral
- DeepSeek
- Many more available in the Ollama Model Library
To run Seekr, you'll need to set up both the backend and frontend. You can run it manually or using Docker.
Make sure you have the following installed:
- Python ≥ 3.9
- Node.js ≥ 24 (recommended)
- Ollama with your desired model installed (e.g., `phi4`)

Run `ollama run phi4` to make sure the model is working locally.
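If you prefer to verify the setup programmatically rather than through the CLI, the sketch below (assuming Ollama's default HTTP API on port 11434 and the `requests` package) lists the models you have pulled and sends a one-off prompt; adjust the model name to whatever you installed.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default port; change if yours differs

# List the models Ollama has pulled locally
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10).json()
print("Installed models:", [m["name"] for m in tags.get("models", [])])

# Send a single non-streaming prompt to confirm the model responds
resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "phi4", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```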
Navigate to the `frontend` folder and create a `.env` file with:
# Base URL of the backend API
VITE_API_BASE_URL=http://localhost:8000
Navigate to the `backend` folder and create a `.env` file with:
# The name of the model to use with Ollama (e.g., llama3, mistral, phi4, etc.)
OLLAMA_MODEL=phi4
# Base URL of the Ollama server:
# - Docker on macOS/Windows: http://host.docker.internal:11434
# - Docker on Linux: http://host.docker.internal:11434 (requires --add-host=host.docker.internal:host-gateway)
# - Running the backend directly on the same machine as Ollama: http://localhost:11434
OLLAMA_BASE_URL=http://host.docker.internal:11434
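For context, this is roughly how a backend can consume these two variables and talk to Ollama. The `ask_ollama` helper below is an illustrative sketch, not Seekr's actual implementation, and assumes the `requests` package is available.

```python
import os

import requests

# Same variable names as in backend/.env above; the defaults are only fallbacks for this sketch
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "phi4")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")


def ask_ollama(prompt: str) -> str:
    """Send a single prompt to Ollama and return the generated text."""
    resp = requests.post(
        f"{OLLAMA_BASE_URL}/api/generate",
        json={"model": OLLAMA_MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask_ollama("Generate one interview question about React hooks."))
```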
From the root directory, run:
docker-compose -f docker-compose.dev.yml up --build
Once running, open your browser and navigate to:
http://localhost:5173
You can also run Seekr manually without Docker. Follow these steps:
# Navigate to the backend folder
cd backend
# (Optional) Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate # Use `venv\Scripts\activate` on Windows
# Install dependencies
pip install -r requirements.txt
# Start the backend server
uvicorn main:app --reload
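To confirm the backend is up before starting the frontend, you can hit it from another terminal. The check below assumes the `requests` package and the interactive docs route (`/docs`) that FastAPI apps expose by default (implied by `uvicorn main:app`); if Seekr serves different routes, point it at one of those instead.

```python
import requests

# Expect HTTP 200 once the backend is listening on its default port
r = requests.get("http://localhost:8000/docs", timeout=5)
print(r.status_code)
```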
In a new terminal window/tab, run:
# Navigate to the frontend folder
cd frontend
# Install dependencies
npm install
# Start the frontend dev server
npm run dev
Once running, open your browser and navigate to:
http://localhost:5173
This is the landing or home screen where the user starts their journey in the application.
The user selects a specific topic they want to be tested or learn more about.
The user chooses the difficulty level for their questions — typically something like Easy, Medium, or Hard.
A loading screen appears while the system fetches or generates relevant questions based on the selected topic and difficulty.
A question is presented to the user, along with an input field for their answer.
Once the user submits an answer, the system evaluates the response.
After completing all questions, the user receives a summary or evaluation of their performance, possibly including score, strengths, and areas for improvement.
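To make the flow above concrete, here is a small sketch that models a session as plain data. All names and fields are hypothetical and purely illustrative; they are not taken from Seekr's codebase.

```python
from dataclasses import dataclass, field


@dataclass
class Question:
    text: str
    answer: str = ""    # filled in when the user submits
    feedback: str = ""  # filled in after evaluation


@dataclass
class InterviewSession:
    topic: str          # e.g., "React"
    difficulty: str     # e.g., "Easy", "Medium", or "Hard"
    questions: list[Question] = field(default_factory=list)

    def summary(self) -> str:
        answered = sum(1 for q in self.questions if q.answer)
        return f"{answered}/{len(self.questions)} answered on {self.topic} ({self.difficulty})"


# Topic and difficulty are chosen, questions are generated, each answer is
# evaluated, and a summary is produced at the end.
session = InterviewSession(topic="React", difficulty="Medium")
session.questions.append(Question(text="What problem do React hooks solve?"))
session.questions[0].answer = "They let function components manage state and side effects."
session.questions[0].feedback = "Good: covers state; could also mention reusing logic via custom hooks."
print(session.summary())
```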
demo.mp4
If you would like to contribute to this web application, please open an issue on GitHub to discuss your ideas or proposed changes. Pull requests are also welcome.
This web application is available under the MIT License. You are free to use, modify, and distribute this project as you see fit.