
Conversation

@Alex-Wengg
Contributor

Summary

  • Add streaming ASR support for NVIDIA's Nemotron Speech Streaming 0.6B model converted to CoreML
  • Implement NemotronStreamingAsrManager with true streaming (1.12s chunks, encoder cache)
  • Support int8 and float32 encoder variants (int8 is default, 4x smaller)
  • Add nemotron-benchmark CLI command for LibriSpeech evaluation

Performance

On LibriSpeech test-clean (100 files):

| Metric | Value |
|--------|-------|
| WER | 1.99% |
| RTFx | 8.6x |
| Memory | 1.4 GB |

Test plan

  • Run nemotron-benchmark --max-files 100 on LibriSpeech test-clean
  • Verify WER matches Python reference (~1.8%)
  • Test int8 encoder variant
  • Run full LibriSpeech test-clean benchmark (2620 files)

Usage

```bash
# Run benchmark
fluidaudiocli nemotron-benchmark --max-files 100

# With float32 encoder
fluidaudiocli nemotron-benchmark --encoder float32 --max-files 10

# With custom model directory
fluidaudiocli nemotron-benchmark --model-dir /path/to/models
```
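
For library callers, here is a minimal streaming sketch. `NemotronStreamingAsrManager` is the type this PR adds; the method names and chunk format below are assumptions for illustration, not a confirmed API surface:

```swift
// Hypothetical usage sketch; loadModels()/streamAudio(_:)/finish() are
// assumed method names inferred from the PR summary.
let asr = NemotronStreamingAsrManager()
try await asr.loadModels()  // int8 CoreML models auto-download on first run

// Feed ~1.12s chunks of 16 kHz mono samples; the encoder cache carries
// state across chunk boundaries so decoding continues seamlessly.
for chunk in audioChunks {  // audioChunks: [[Float]] (assumed)
    let partial = try await asr.streamAudio(chunk)
    print("partial:", partial)
}
let final = try await asr.finish()
print("final:", final)
```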

🤖 Generated with Claude Code

@claude
Contributor

claude bot commented Jan 12, 2026

Claude encountered an error — View job


I'll analyze this and get back to you.

@github-actions

github-actions bot commented Jan 12, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|--------|-------|--------|--------|
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 7.6x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |
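
DER here is the sum of its components: 24.4% miss + 0.2% false alarm + 8.8% speaker error = 33.4%.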

Sortformer High-Latency • ES2004a • Runtime: 4m 10s • 2026-01-15T20:13:40.526Z

@github-actions

github-actions bot commented Jan 12, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.57% | 0.00% | 5.22x | ✅ |
| test-other | 1.96% | 0.00% | 3.48x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---------|---------|---------|------|--------|
| test-clean | 0.40% | 0.00% | 5.16x | ✅ |
| test-other | 1.00% | 0.00% | 3.47x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.63x | Streaming real-time factor |
| Avg Chunk Time | 1.452s | Average time to process each chunk |
| Max Chunk Time | 1.676s | Maximum chunk processing time |
| First Token | 1.725s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|--------|-------|-------------|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.62x | Streaming real-time factor |
| Avg Chunk Time | 1.465s | Average time to process each chunk |
| Max Chunk Time | 1.676s | Maximum chunk processing time |
| First Token | 1.450s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 5m13s • 01/15/2026, 03:24 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
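
As a quick illustration of that formula (a sketch, not part of the benchmark harness):

```swift
// RTFx = total audio duration / total processing time.
func rtfx(audioSeconds: Double, processingSeconds: Double) -> Double {
    audioSeconds / processingSeconds
}
// The example above: 10s of audio processed in 5s -> 2.0x.
assert(rtfx(audioSeconds: 10, processingSeconds: 5) == 2.0)
```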

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

github-actions bot commented Jan 12, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 4.98x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 96.9s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|--------|-------|-------------|
| Avg Chunk Time | 0.097s | Average chunk processing time |
| Max Chunk Time | 0.194s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m49s • 01/15/2026, 03:16 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

github-actions bot commented Jan 12, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---------|----------|-----------|--------|----------|------|-------|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 456.0x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 501.5x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%
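
The F1 column follows from the standard harmonic mean of precision and recall; a quick check against the table (illustrative, not benchmark code):

```swift
// F1 = 2 * precision * recall / (precision + recall)
func f1(precision: Double, recall: Double) -> Double {
    2 * precision * recall / (precision + recall)
}
// From the table: precision 86.2%, recall 100.0% -> ~92.6%.
print(f1(precision: 0.862, recall: 1.0))  // 0.9258...
```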

@github-actions

github-actions bot commented Jan 12, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 14.91x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 9.537 | 13.6 | Fetching diarization models |
| Model Compile | 4.087 | 5.8 | CoreML compilation |
| Audio Load | 0.130 | 0.2 | Loading audio file |
| Segmentation | 21.102 | 30.0 | Detecting speech regions |
| Embedding | 35.171 | 50.0 | Extracting speaker voices |
| Clustering | 14.068 | 20.0 | Grouping same speakers |
| Total | 70.379 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|--------|-----|-------|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 70.3s diarization time • Test runtime: 2m 46s • 01/15/2026, 03:06 PM EST

@github-actions

github-actions bot commented Jan 12, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|--------|-------|--------|--------|-------------|
| DER | 14.5% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 3.66x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|-------|----------|---|-------------|
| Model Download | 15.771 | 5.5 | Fetching diarization models |
| Model Compile | 6.759 | 2.4 | CoreML compilation |
| Audio Load | 0.055 | 0.0 | Loading audio file |
| Segmentation | 34.969 | 12.2 | VAD + speech detection |
| Embedding | 282.915 | 98.7 | Speaker embedding extraction |
| Clustering (VBx) | 3.101 | 1.1 | Hungarian algorithm + VBx clustering |
| Total | 286.634 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|--------|-----|------|-------------|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 321.0s processing • Test runtime: 6m 45s • 01/15/2026, 03:16 PM EST


```swift
/// Configuration for Nemotron Speech Streaming 0.6B
/// Based on nvidia/nemotron-speech-streaming-en-0.6b with 1.12s chunks
public struct NemotronStreamingConfig: Sendable {
```
Member

I'm getting a bit worried about how many different models we have, and whether it makes sense to keep adding different managers for them when the interface should be the same.

Member

It makes this harder to maintain long term, and it's unclear to users which model to use and how.
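
One way to make the interfaces uniform would be a shared protocol that each model's manager conforms to. A hypothetical sketch, not code from this PR:

```swift
// Hypothetical unifying interface; all names here are illustrative only.
public protocol StreamingAsrEngine: Sendable {
    /// Download and compile models as needed.
    func loadModels() async throws
    /// Feed a chunk of 16 kHz mono samples; returns the partial transcript.
    func streamAudio(_ samples: [Float]) async throws -> String
    /// Flush remaining state and return the final transcript.
    func finish() async throws -> String
}
// NemotronStreamingAsrManager and the Parakeet managers could then conform,
// so users pick a model without learning a new API for each manager.
```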

Member

I'm thinking that if Sortformer is better, we should just remove the older pyannote model, tbh.

Contributor Author

Not sure there are distinct cases where pyannote dominates Sortformer.

Alex-Wengg force-pushed the feature/nemotron-streaming-support branch from 1d952d8 to cad45a1 on January 15, 2026 at 05:19
Add streaming ASR support for NVIDIA's Nemotron Speech Streaming 0.6B model
converted to CoreML. Features include:

- NemotronStreamingAsrManager: Actor-based streaming ASR with encoder cache
- True streaming with 1.12s audio chunks and encoder state carryover
- Support for int8 and float32 encoder variants (int8 default, 4x smaller)
- RNNT greedy decoding with proper decoder LSTM state management
- NemotronBenchmark CLI command for LibriSpeech evaluation

Performance on LibriSpeech test-clean (100 files):
- WER: 1.99%
- RTFx: 8.6x (8.6 times faster than real-time)
- Memory: 1.4 GB (with int8 encoder)

Models available at: alexwengg/nemotron-speech-streaming-en-0.6b-coreml
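
For readers unfamiliar with the decoding step named above, here is a compact sketch of greedy RNNT decoding. `DecoderState`, `predict`, and `joint` are hypothetical stand-ins for the CoreML decoder/joint calls; the real implementation manages the LSTM state tensors explicitly:

```swift
// Greedy RNNT decoding sketch (illustrative, not this PR's implementation).
struct DecoderState { /* decoder LSTM h/c tensors would live here */ }

func greedyDecode(
    encoderFrames: [[Float]],
    blankId: Int,
    maxSymbolsPerFrame: Int = 10,
    predict: (Int, DecoderState) -> ([Float], DecoderState),
    joint: ([Float], [Float]) -> [Float]
) -> [Int] {
    var tokens: [Int] = []
    var state = DecoderState()
    var lastToken = blankId  // start-of-sequence is treated like blank
    for frame in encoderFrames {
        var emitted = 0
        while emitted < maxSymbolsPerFrame {
            let (decOut, nextState) = predict(lastToken, state)
            let logits = joint(frame, decOut)
            let best = logits.indices.max { logits[$0] < logits[$1] }!
            if best == blankId { break }  // blank: move to the next frame
            tokens.append(best)           // non-blank: emit and advance
            lastToken = best
            state = nextState             // commit LSTM state only on emission
            emitted += 1
        }
    }
    return tokens
}
```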
…gging

- Fix encoder path to use subdirectory structure (encoder/encoder_int8.mlmodelc)
- Fix download destination to avoid double folder nesting
- Add AppLogger.alwaysLogToConsole for CLI release builds
- Include both int8 and float32 encoder variants in required models
- Models auto-download from HuggingFace on first run
Alex-Wengg force-pushed the feature/nemotron-streaming-support branch from cad45a1 to b802917 on January 15, 2026 at 05:20
Alex-Wengg and others added 9 commits January 15, 2026 11:44
Results on full test-clean dataset (2,620 files):
- WER: 2.51%
- RTFx: 5.7x
- Memory: 1.452 GB

Includes CLI commands for running benchmarks.
- Add NemotronChunkSize enum (1120ms, 560ms, 160ms, 80ms variants)
- Update Repo enum with chunk-size specific variants pointing to
  FluidInference/nemotron-speech-streaming-en-0.6b-coreml
- NemotronStreamingConfig now loads dynamically from metadata.json
- Support both .mlmodelc and .mlpackage encoder formats
- Add --chunk CLI option to nemotron-benchmark command
- Auto-download correct model variant based on chunk size selection
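
A hypothetical reconstruction of what that enum might look like (the variant durations come from the commit message; the raw values and sample-rate detail are assumptions):

```swift
// Hypothetical sketch of the chunk-size enum described above.
public enum NemotronChunkSize: Int, CaseIterable, Sendable {
    case ms1120 = 1120  // default: best accuracy, highest latency
    case ms560 = 560
    case ms160 = 160
    case ms80 = 80      // lowest latency

    /// Chunk length in samples, assuming 16 kHz input audio.
    public var samples: Int { rawValue * 16 }
}
```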
Simplified NemotronStreamingAsrManager to only support int8 quantized encoders:
- Replaced NemotronEncoderVariant enum with simple NemotronEncoder filename constant
- Removed encoderVariant parameter from loadModels()
- Removed --encoder CLI flag from benchmark command

All HuggingFace model variants now contain only int8 quantized encoders
(~564MB vs ~2.2GB for float32), so the float32 option is no longer needed.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
All HuggingFace variants now only include int8 quantized encoders.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
Covers:
- Benchmark results for all chunk sizes (1120ms, 560ms, 160ms, 80ms)
- Quick start guide with code examples
- Architecture overview and streaming pipeline
- CLI benchmark usage
- Comparison with Parakeet TDT

Co-Authored-By: Claude Opus 4.5 <[email protected]>