SOund and Narrative Advanced Transcription Assistant
SONATA (SOund and Narrative Advanced Transcription Assistant) is an advanced ASR system that captures human expression beyond words, including emotive sounds and non-verbal cues.
- 🎙️ High-accuracy speech-to-text transcription using WhisperX
- 😀 Recognition of 523+ emotive sounds and non-verbal cues
- 🌍 Multi-language support with 99+ languages
- 👥 SOTA speaker diarization using Silero VAD and WavLM embeddings
- ⏱️ Rich timestamp information at the word level
- 🔄 Audio preprocessing capabilities
📚 See detailed features documentation
Install the package from PyPI:
```bash
pip install sonata-asr
```
Or install from source:
```bash
git clone https://github.com/hwk06023/SONATA.git
cd SONATA
pip install -e .
```
```python
from sonata.core.transcriber import IntegratedTranscriber

# Initialize the transcriber
transcriber = IntegratedTranscriber(asr_model="large-v3", device="cpu")

# Transcribe an audio file
result = transcriber.process_audio("path/to/audio.wav", language="en")
print(result["integrated_transcript"]["plain_text"])
```
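If you want to keep the full result rather than just the plain text, it can be written out directly. A small follow-up, assuming the result dict is JSON-serializable (likely, since the CLI's `-o` option writes JSON):

```python
import json

# Persist the full transcription result, including timestamps and detected
# events, for later inspection (assumes the dict is JSON-serializable).
with open("transcript.json", "w", encoding="utf-8") as f:
    json.dump(result, f, ensure_ascii=False, indent=2)
```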
```bash
# Basic usage
sonata-asr path/to/audio.wav

# With speaker diarization
sonata-asr path/to/audio.wav --diarize

# Set number of speakers if known
sonata-asr path/to/audio.wav --diarize --num-speakers 3
```
```text
General:
  -o, --output FILE           Save transcript to specified JSON file
  -l, --language LANG         Language code (en, ko, zh, ja, fr, de, es, it, pt, ru)
  -m, --model NAME            WhisperX model size (tiny, small, medium, large-v3, etc.)
  -d, --device DEVICE         Device to run models on (cpu, cuda)
  --text-output               Save transcript to a text file (defaults to input_name.txt)
  --preprocess                Preprocess audio (convert format and trim silence)

Diarization:
  --diarize                   Enable SOTA speaker diarization using Silero VAD and WavLM
  --num-speakers NUM          Set exact number of speakers (optional)

Audio Events:
  --threshold VALUE           Threshold for audio event detection (0.0-1.0)
  --custom-thresholds FILE    Path to JSON file with custom audio event thresholds
  --deep-detect               Enable multi-scale audio event detection for better accuracy
  --deep-detect-scales NUM    Number of scales for deep detection (1-3, default: 3)
  --deep-detect-window-sizes  Custom window sizes for deep detection (comma-separated)
  --deep-detect-hop-sizes     Custom hop sizes for deep detection (comma-separated)
```
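The exact schema of the `--custom-thresholds` file is not spelled out here; the example below assumes a simple mapping from event labels to per-event detection thresholds (check the audio events documentation for the actual label names):

```json
{
  "laughter": 0.35,
  "applause": 0.5,
  "music": 0.7
}
```

Options can be combined freely; for example, using the flags documented above:

```bash
sonata-asr path/to/audio.wav -m large-v3 -l en -o result.json \
  --deep-detect --custom-thresholds thresholds.json
```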
📚 See full usage documentation
⌨️ See complete CLI documentation
SONATA leverages Whisper large-v3 to support 99+ languages with varying levels of accuracy. High-resource languages such as English, Spanish, French, German, and Japanese achieve excellent transcription performance (5-12% error rates), while other languages range from good to moderate accuracy.
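Since `process_audio` accepts a language code (see the CLI's `-l` option for supported codes), pinning the language is a single call:

```python
# Transcribe Korean audio with the transcriber initialized earlier
result = transcriber.process_audio("path/to/audio.wav", language="ko")
```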
Key features of SONATA's language support:
- Excellent accuracy for high-resource languages
- Character-based evaluation for languages like Chinese, Japanese, and Korean (see the CER sketch after this list)
- Specialized handling for language-specific characteristics
- Advanced auto-detection for multi-language content
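As a refresher on the metric: character error rate (CER) is edit distance computed over characters instead of words, divided by the reference length, which suits languages without clear word boundaries. A self-contained sketch, independent of SONATA's API:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance over reference length."""
    r, h = list(reference), list(hypothesis)
    # Classic dynamic-programming edit distance over characters.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(cer("音声認識", "音声認織"))  # 0.25: one of four characters substituted
```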
🌐 See detailed language support documentation
SONATA can detect over 500 distinct audio events, from laughter and applause to ambient sounds and music. Customizable detection thresholds let you fine-tune sensitivity per event to match use cases such as podcast analysis, meeting transcription, or nature recording analysis.
🎵 See audio events documentation
SONATA provides state-of-the-art speaker diarization to identify and separate different speakers in recordings. The system uses Silero VAD for speech detection and WavLM embeddings for speaker identification, making it ideal for transcribing multi-speaker content like meetings, interviews, and podcasts.
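For intuition, the VAD-then-embed-then-cluster pipeline can be sketched in a few lines. This is a minimal illustration of the general approach, not SONATA's actual implementation; the model IDs and the agglomerative clustering choice are assumptions:

```python
import torch
from sklearn.cluster import AgglomerativeClustering
from transformers import Wav2Vec2FeatureExtractor, WavLMForXVector

# 1) Detect speech regions with Silero VAD (expects 16 kHz mono audio).
vad_model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, *_ = utils
wav = read_audio("path/to/audio.wav", sampling_rate=16000)
segments = get_speech_timestamps(wav, vad_model, sampling_rate=16000)

# 2) Compute one WavLM speaker embedding per speech segment.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv")
embedder = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv")
embeddings = []
with torch.no_grad():
    for seg in segments:
        chunk = wav[seg["start"]:seg["end"]].numpy()
        inputs = extractor(chunk, sampling_rate=16000, return_tensors="pt")
        embeddings.append(embedder(**inputs).embeddings.squeeze(0))

# 3) Group embeddings into speakers (speaker count assumed known here).
labels = AgglomerativeClustering(n_clusters=3).fit_predict(
    torch.stack(embeddings).numpy()
)
for seg, spk in zip(segments, labels):
    print(f"{seg['start'] / 16000:.2f}s-{seg['end'] / 16000:.2f}s -> SPEAKER_{spk}")
```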
Using speaker diarization is simple:
```bash
# Basic diarization
sonata-asr path/to/audio.wav --diarize

# Set number of speakers if known
sonata-asr path/to/audio.wav --diarize --num-speakers 3

# Save intermediate step outputs for debugging or analysis
sonata-asr path/to/audio.wav --diarize --save-steps
```
When using the `--save-steps` option, SONATA saves the following intermediate files in a directory named after your audio file:
- Voice activity detection segments
- Speaker change points
- Analysis segments
- Speaker embeddings information
- Clustering results
- Final speaker segments
This is particularly useful for fine-tuning or debugging diarization on challenging audio files.
🎙️ See speaker diarization documentation
Planned improvements:
- 🧠 Advanced ASR model diversity
- 😢 Improved emotive detection
- 🔊 Better speaker diarization
- ⚡ Performance optimization
- 🛠️ Fix parallel processing issues in deep detection mode for improved reliability
Contributions are welcome! SONATA offers multiple ways to contribute, including code improvements, documentation, testing, and bug reports. Our comprehensive contribution guide covers:
- Setting up the development environment
- Coding standards and best practices
- Testing procedures
- Pull request workflow
- Documentation guidelines
- Language-specific considerations
Whether you're an experienced developer or new to open source, we welcome your contributions.
This project is licensed under the GNU General Public License v3.0.
- WhisperX - Fast speech recognition
- AudioSet AST - Audio event detection
- MIT/ast-finetuned-audioset-10-10-0.4593 - Pretrained model for audio event classification
- Silero VAD - Voice activity detection for speaker diarization
- WavLM - Microsoft's advanced audio understanding model
- microsoft/wavlm-base-plus-sv - Speaker verification model for speaker embeddings
- SpeechBrain - Speaker diarization and embedding extraction
- PyAnnote - Advanced speaker diarization toolkit
- pyannote/segmentation - Speaker change detection
- pyannote/clustering - Speaker clustering
- HuggingFace Transformers - NLP tools and transformer models