BioVoice is easiest to learn when you separate three stages: install the local stack, rehearse without voice, then connect your microphone for a live session. This guide is the recommended first stop for public GitHub users.
- Installing the repo and pulling the validated demo data
- Running the app without voice first
- Choosing the right first workflow for your science story
- Knowing which doc to follow next
- macOS for the smoothest autolaunch flow
- Node.js 20+
- PyMOL and/or ChimeraX installed locally
- `curl` available in your shell
- Optional for live voice only: an `OPENAI_API_KEY` with Realtime access
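A quick way to confirm the command-line prerequisites is to check that each tool is on your PATH. This is a minimal sketch; it does not verify the Node.js major version or your PyMOL/ChimeraX installs:

```shell
# Check that the command-line prerequisites are visible on PATH.
missing=0
for tool in node npm curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=1
  fi
done
```

If everything is found, `node --version` should additionally report v20 or newer.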
```bash
npm install
npm run prepare:data
npm run generate:examples
```

If you want to use live voice later, create your local env file:

```bash
cp .env.example .env
```

Add `OPENAI_API_KEY` only when you are ready for live voice. Keeping your real credentials in a local `.env` is the normal supported setup.
By default, BioVoice only loads structure inputs from the prepared demo-data folder plus local runtime/output folders. If you want it to load private structures from another folder, add that folder to `STRUCTURE_ALLOWED_PATHS` in your local `.env`.
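Putting these pieces together, a minimal local `.env` might look like the sketch below. The key value and folder path are placeholders, and the exact syntax should be checked against `.env.example`:

```bash
# .env — stays local, never committed
OPENAI_API_KEY=sk-placeholder        # only needed once you start live voice sessions
# PORT=3001                          # uncomment if port 3000 is already in use
# Extra folder BioVoice is allowed to read structures from (placeholder path)
STRUCTURE_ALLOWED_PATHS=/home/me/structures
```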
```mermaid
flowchart LR
    A["Install dependencies"] --> B["Prepare demo data"]
    B --> C["Try offline rehearsal"]
    C --> D["First live voice session"]
    D --> E["AlphaFold / Rosetta / cryo-EM tutorials"]
    E --> F["Examples library and custom workflows"]
```
```mermaid
flowchart TD
    A["What do you want to show first?"] --> B["Ligand pocket"]
    A --> C["AlphaFold"]
    A --> D["Rosetta"]
    A --> E["Cryo-EM"]
    B --> B1["Start with PyMOL pocket story"]
    C --> C1["Start with ChimeraX overlay"]
    D --> D1["Start with PyMOL top-design compare"]
    E --> E1["Start with ChimeraX map fit"]
```
This is the best first command if you want to validate the interface, demo data, and local target control before touching microphone permissions or OpenAI billing.
```bash
npm run agent:start -- pymol --offline --clean-target
```

- The local server starts on http://localhost:3000
- PyMOL launches or reconnects
- The browser opens the BioVoice console
- You can inspect the workflow rail, run dry runs, reset the target, and capture the current view
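Before the first launch it can also help to confirm that nothing else is already holding port 3000, since that is a common startup failure. A minimal bash sketch using the built-in `/dev/tcp` redirection, so no extra tools are required:

```shell
# Check whether anything is already listening on the BioVoice port.
port=3000
if (echo -n > /dev/tcp/127.0.0.1/"$port") 2>/dev/null; then
  echo "busy: port $port is already in use"
else
  echo "free: port $port is available"
fi
```

If the port is busy, set a different `PORT` in your local `.env` before starting.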
To rehearse with ChimeraX instead, use the same command with a different target:

```bash
npm run agent:start -- chimerax --offline --clean-target
```

Once you are comfortable offline, pick the doc that matches your story:

- For a first mic-enabled walkthrough: First Live Session
- For a polished structural pocket demo: Ligand Pocket Tutorial
- For prediction-versus-experiment: AlphaFold Tutorial
- For design review: Rosetta Tutorial
- For maps and density: Cryo-EM Tutorial
These docs are hand-authored. The deeper generated reference set lives here:
- PyMOL or ChimeraX does not start: make sure the application is installed locally; macOS autolaunch expects standard `/Applications` installs
- Port 3000 is already in use: set `PORT` in your local `.env` before starting
- Demo data is missing: rerun `npm run prepare:data`
- You expected voice immediately: offline rehearsal mode does not use the microphone or OpenAI
- You are on Linux or Windows: start PyMOL / ChimeraX manually and then use the same commands
- First Live Session to connect a microphone safely
- Architecture and Provider Support to understand the local/privacy model
- FAQ and Glossary for platform, privacy, and provider questions