Passive meeting transcription app for macOS. Runs entirely locally: no cloud transcription, no account required.
- Local transcription using faster-whisper (no data leaves your machine)
- Menu bar app - start/stop recording with a click
- Streaming transcripts - markdown updates in real-time as you speak
- LLM summarization (optional) - summarize meetings with Claude or GPT
- macOS 12+ (Apple Silicon or Intel)
- Python 3.11
- Microphone access
- ~500MB disk space for ML models
Install with Homebrew:

```sh
brew install --cask https://raw.githubusercontent.com/tomfuertes/ohh-brother/main/Casks/ohh-brother.rb
```

Or install manually:

- Download the latest release from GitHub
- Move `Ohh Brother.app` to `/Applications`
The Python ML environment requires one-time setup:
```sh
# Navigate to app resources
cd "/Applications/Ohh Brother.app/Contents/Resources/python"

# Create and activate virtual environment
python3.11 -m venv venv
source venv/bin/activate

# Install dependencies (downloads ~500MB of ML models)
pip install -r requirements.txt
```

To record a meeting:

- Click the microphone icon in your menu bar
- Click "Start Recording"
- Speak - the app captures audio and transcribes in the background
- Click "Stop Recording" when done
- Find your transcript in History > [date/time]
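Transcripts are written to the app's data folder (see Privacy below). As a convenience, here is a short Python sketch for listing them from the command line, newest first. The `*.md` file layout inside the folder is an assumption:

```python
from pathlib import Path

def list_transcripts(root: Path) -> list[Path]:
    """Return transcript files under root, newest first."""
    return sorted(root.rglob("*.md"), key=lambda p: p.stat().st_mtime, reverse=True)

# Storage location from the Privacy section; the *.md layout is an assumption.
app_dir = Path.home() / "Library" / "Application Support" / "OhhBrother"
for transcript in list_transcripts(app_dir):
    print(transcript)
```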
- Open Settings and add your Claude or OpenAI API key
- After recording, click "Summarize Current"
- The summary appears in a new window
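Under the hood this is a single LLM call over the saved transcript. A minimal sketch of what the prompt construction might look like; the wording and structure here are assumptions, not the app's actual prompt:

```python
def build_summary_prompt(transcript_md: str) -> str:
    """Build a summarization prompt from a saved Markdown transcript.

    Hypothetical prompt wording; the app's real prompt may differ.
    """
    return (
        "Summarize this meeting transcript. Include key decisions "
        "and action items as bullet points.\n\n"
        + transcript_md
    )

prompt = build_summary_prompt("[00:05] Let's get started with the Q3 review.")
```

The resulting prompt text is what gets sent to Claude or OpenAI using the API key from Settings.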
Transcripts are saved as Markdown and stream in real-time:
```markdown
# Meeting - 2026-02-01 10:30 AM

[00:05] Let's get started with the Q3 review.
[00:15] Sure, I have the numbers here...
```

Building from source requires:

- Bun (JavaScript runtime)
- Python 3.11
- Node.js (for Electron)
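Since the transcript format shown above is plain timestamped Markdown, it is easy to post-process. A small parser sketch; the `[MM:SS]` pattern is taken from the example, and longer meetings using `[HH:MM:SS]` are not handled here:

```python
import re

# Matches the "[MM:SS] text" lines shown in the transcript example.
LINE_RE = re.compile(r"^\[(\d{2}):(\d{2})\]\s+(.*)$")

def parse_transcript(markdown: str) -> list[tuple[int, str]]:
    """Return (offset_seconds, text) pairs for each timestamped line."""
    entries = []
    for line in markdown.splitlines():
        m = LINE_RE.match(line)
        if m:
            minutes, seconds, text = m.groups()
            entries.append((int(minutes) * 60 + int(seconds), text))
    return entries
```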
```sh
# Clone repo
git clone https://github.com/tomfuertes/ohh-brother
cd ohh-brother

# Install JS dependencies
bun install

# Setup Python environment
cd python
python3.11 -m venv venv
./venv/bin/pip install -r requirements.txt
cd ..

# Run in development
bun run dev

# Build Electron app
bun run build

# Package for distribution
bun run package
```

Architecture:

```
┌─────────────────────────────────────────────┐
│ Electron Menu Bar App (Bun runtime)         │
│ - Menu UI (start/stop, history, settings)   │
│ - LLM API calls for summarization           │
│ - Spawns/manages Python subprocess          │
└──────────────────┬──────────────────────────┘
                   │ spawns & IPC (stdio JSON)
┌──────────────────▼──────────────────────────┐
│ Python Subprocess                           │
│ - Audio capture (sounddevice)               │
│ - Whisper transcription (faster-whisper)    │
│ - Streams markdown to ~/Library/App Support │
└─────────────────────────────────────────────┘
```
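The stdio IPC in the diagram is line-delimited JSON: each message is one JSON object terminated by a newline. A minimal sketch of that framing; the message field names are assumptions, and the app's actual protocol may differ:

```python
import json

def encode_message(msg: dict) -> bytes:
    """Frame one message as a single JSON line, as in the stdio IPC above."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_message(line: bytes) -> dict:
    """Parse one framed line back into a message."""
    return json.loads(line)

# Hypothetical command the Electron side might send to the Python subprocess.
frame = encode_message({"type": "start_recording"})
```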
- All transcription happens locally on your machine
- Audio is never sent to any server
- Transcripts are stored only in `~/Library/Application Support/OhhBrother/`
- LLM summarization (if enabled) sends transcript text to Claude or OpenAI
Grant microphone access in System Settings > Privacy & Security > Microphone
First run downloads ~500MB of ML models. This is one-time only.
Transcription is CPU-intensive. Processing happens in 5-second batches to balance latency and resource usage.
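The 5-second batching can be pictured as chopping the captured audio into fixed-size chunks before transcription. A sketch under assumed parameters; the 16 kHz mono sample rate is an assumption (though typical for Whisper models), and the app's real batching logic may differ:

```python
SAMPLE_RATE = 16_000  # assumed capture rate (Hz)
BATCH_SECONDS = 5

def batch_audio(samples: list[float]) -> list[list[float]]:
    """Split a mono sample buffer into 5-second batches; the last may be short."""
    size = SAMPLE_RATE * BATCH_SECONDS
    return [samples[i : i + size] for i in range(0, len(samples), size)]
```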
MIT