
docs: Add model documentation references to README highlights #501

Merged

Alex-Wengg merged 6 commits into main from docs/add-models-references-to-highlights on Apr 8, 2026
Conversation

@Alex-Wengg (Member) commented Apr 8, 2026

Summary

Adds links from the README highlights section to the ASR model documentation, highlighting Parakeet TDT v3 as the primary batch transcription model.

Changes

Updated the ASR highlight:

  • Lead with Parakeet TDT v3 (0.6b): Highlighted as the main selling point for batch transcription
  • Mention other models: Added "and other TDT/CTC models" to acknowledge the variety available
  • Language support: Includes 25 European languages, Japanese, and Chinese
  • Parakeet EOU: Links to streaming transcription section
  • See all ASR models: Links to general ASR models overview

Before

Generic description without highlighting the flagship Parakeet TDT v3 model.

After

Readers see Parakeet TDT v3 as the primary batch transcription model and can click through to get full model details including:

  • Mix of TDT and CTC architectures for batch transcription
  • Support for 25 European languages, Japanese, and Chinese
  • Model parameters and specifications
  • Performance metrics (WER, RTFx, etc.)
  • Use cases and context

Result

Better discoverability of ASR model documentation with Parakeet TDT v3 prominently featured as the main batch transcription model.

Add links from README highlights section to Documentation/Models.md for:
- ASR models (Parakeet TDT v3, Parakeet EOU)
- TTS models (Kokoro, PocketTTS)
- Diarization models
- VAD models (Silero)

Makes it easier for readers to find detailed model information.
@devin-ai-integration bot (Contributor) left a comment

✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no bugs or issues to report.

Open in Devin Review

Keep only ASR model links and diarization section link.
Remove individual model links for Kokoro, PocketTTS, and Silero VAD.

Keep only ASR model links in highlights section.

Add brief mention that ASR supports Japanese and Chinese in addition
to the 25 European languages.

Changed from specifically mentioning 'Parakeet TDT v3' to 'Batch transcription
with TDT and CTC models' to accurately represent the mix of model architectures
available (TDT v2/v3, TDT-CTC-110M, CTC Japanese, TDT Japanese, CTC Chinese).

Lead with Parakeet TDT v3 (0.6b) as the main selling point while
also mentioning other TDT/CTC models available for batch transcription.
Alex-Wengg merged commit 2675a8b into main on Apr 8, 2026
12 checks passed
Alex-Wengg deleted the docs/add-models-references-to-highlights branch on April 8, 2026 at 00:55
github-actions bot commented Apr 8, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 27.14x | >1.0x | ✅ | Real-Time Factor (higher is faster) |
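
For reference, DER here is presumably the standard diarization error rate: missed speech, false alarm, and speaker confusion time, measured against the total reference speech time.

$$\mathrm{DER} = \frac{T_{\text{miss}} + T_{\text{false alarm}} + T_{\text{confusion}}}{T_{\text{reference speech}}}$$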

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 8.770 | 22.7 | Fetching diarization models |
| Model Compile | 3.759 | 9.7 | CoreML compilation |
| Audio Load | 0.043 | 0.1 | Loading audio file |
| Segmentation | 11.596 | 30.0 | Detecting speech regions |
| Embedding | 19.326 | 50.0 | Extracting speaker voices |
| Clustering | 7.730 | 20.0 | Grouping same speakers |
| Total | 38.668 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|--------|-----|-------|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: the RTFx shown above is from a GitHub Actions runner. On Apple Silicon with the ANE:

  • M2 MacBook Air (2022): runs at roughly 150x real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 38.7s diarization time • Test runtime: 2m 16s • 04/07/2026, 09:16 PM EST

github-actions bot commented Apr 8, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 11.04x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 45.2s | Total processing time |
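
WER here is presumably the standard word error rate: substitutions, deletions, and insertions, divided by the number of words in the reference transcript.

$$\mathrm{WER} = \frac{S + D + I}{N}$$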

Streaming Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| Avg Chunk Time | 0.045s | Average chunk processing time |
| Max Chunk Time | 0.090s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m51s • 04/07/2026, 10:17 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

github-actions bot commented Apr 8, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---------|----------|-----------|--------|----------|------|-------|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 715.7x | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 694.8x | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%
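
The F1-Score column is the usual harmonic mean of precision and recall; plugging in the reported 86.2% precision and 100.0% recall gives 2 · 0.862 · 1.0 / (0.862 + 1.0) ≈ 0.926, matching the 92.6% in the table.

$$F_1 = \frac{2PR}{P + R}$$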

github-actions bot commented Apr 8, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (202.5 KB) |

Runtime: 0m40s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

github-actions bot commented Apr 8, 2026

Kokoro TTS Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m39s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

github-actions bot commented Apr 8, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|-------|--------|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
|--------|----------|---------------------------|
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 5m9s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

github-actions bot commented Apr 8, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.57% | 0.00% | 5.07x | ✅ |
| test-other | 1.40% | 0.00% | 3.46x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.80% | 0.00% | 5.26x | ✅ |
| test-other | 1.16% | 0.00% | 3.36x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.60x | Streaming real-time factor |
| Avg Chunk Time | 1.466s | Average time to process each chunk |
| Max Chunk Time | 1.646s | Maximum chunk processing time |
| First Token | 1.770s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.60x | Streaming real-time factor |
| Avg Chunk Time | 1.473s | Average time to process each chunk |
| Max Chunk Time | 1.652s | Maximum chunk processing time |
| First Token | 1.543s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming
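
As a rough illustration of that chunking scheme (a minimal sketch with placeholder names, not the FluidAudio streaming API): at 16 kHz, a 0.5s chunk is 8,000 samples.

```swift
// Sketch only: slice a decoded waveform into 0.5 s chunks and feed them
// sequentially, the way the streaming benchmark simulates live audio.
let sampleRate = 16_000
let chunkSamples = sampleRate / 2  // 0.5 s of 16 kHz mono audio

// Placeholder waveform: 3 s of silence standing in for a decoded test file.
let waveform = [Float](repeating: 0, count: sampleRate * 3)

// Hypothetical stand-in for the streaming transcriber call; the real API differs.
func processChunk(_ chunk: ArraySlice<Float>) {
    print("processed chunk of \(chunk.count) samples")
}

for start in stride(from: 0, to: waveform.count, by: chunkSamples) {
    let end = min(start + chunkSamples, waveform.count)
    processChunk(waveform[start..<end])
}
```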

25 files per dataset • Test runtime: 5m33s • 04/07/2026, 10:16 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
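
A minimal sketch of that calculation (the helper name here is ours, not from the repo):

```swift
/// Real-Time Factor: audio duration divided by wall-clock processing time.
/// 1.0x is exactly real time; higher is faster.
func realTimeFactor(audioSeconds: Double, processingSeconds: Double) -> Double {
    precondition(processingSeconds > 0, "processing time must be positive")
    return audioSeconds / processingSeconds
}

// The example above: 10 s of audio processed in 5 s of compute.
print(realTimeFactor(audioSeconds: 10, processingSeconds: 5))  // prints 2.0
```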

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

github-actions bot commented Apr 8, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 13.3x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 34s • 2026-04-08T02:03:49.093Z

github-actions bot commented Apr 8, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 4.56x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 13.587 | 5.9 | Fetching diarization models |
| Model Compile | 5.823 | 2.5 | CoreML compilation |
| Audio Load | 0.052 | 0.0 | Loading audio file |
| Segmentation | 23.904 | 10.4 | VAD + speech detection |
| Embedding | 229.153 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.746 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 230.067 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|--------|-----|------|-------------|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 253.8s processing • Test runtime: 4m 17s • 04/07/2026, 10:25 PM EST
