18 changes: 18 additions & 0 deletions README.md
@@ -168,6 +168,24 @@ for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

### Async Batched Transcription
The following code snippet illustrates how to run async batched transcription on an example audio file. `AsyncBatchedInferencePipeline.transcribe` is a drop-in replacement for `WhisperModel.transcribe`, except that the call must be awaited and the returned segments are consumed with `async for`.

```python
import asyncio
from faster_whisper import WhisperModel, AsyncBatchedInferencePipeline

async def transcribe_file():
    model = WhisperModel("turbo", device="cuda", compute_type="float16")
    batched_model = AsyncBatchedInferencePipeline(model=model)
    segments_generator, info = await batched_model.transcribe("audio.mp3", batch_size=16)

    async for segment in segments_generator:
        print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

asyncio.run(transcribe_file())
```
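
One practical use of the async API is sharing an event loop with other work. The sketch below is not part of this PR: the `heartbeat` coroutine, the file name, and the assumption that the pipeline yields control to the event loop while inference runs are illustrative only. It collects the segment texts with an async comprehension while another coroutine keeps running.

```python
import asyncio
from faster_whisper import WhisperModel, AsyncBatchedInferencePipeline

async def heartbeat():
    # Unrelated placeholder task that keeps printing while transcription runs.
    for _ in range(5):
        print("event loop is still responsive...")
        await asyncio.sleep(1)

async def transcribe_file(path):
    model = WhisperModel("turbo", device="cuda", compute_type="float16")
    batched_model = AsyncBatchedInferencePipeline(model=model)
    segments_generator, info = await batched_model.transcribe(path, batch_size=16)
    # Collect the segment texts with an async comprehension instead of printing them.
    texts = [segment.text async for segment in segments_generator]
    return " ".join(texts)

async def main():
    # gather() returns results in argument order, so the transcript comes first.
    transcript, _ = await asyncio.gather(transcribe_file("audio.mp3"), heartbeat())
    print(transcript)

asyncio.run(main())
```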

### Faster Distil-Whisper

The Distil-Whisper checkpoints are compatible with the Faster-Whisper package. In particular, the latest [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
7 changes: 6 additions & 1 deletion faster_whisper/__init__.py
@@ -1,5 +1,9 @@
from faster_whisper.audio import decode_audio
from faster_whisper.transcribe import BatchedInferencePipeline, WhisperModel
from faster_whisper.transcribe import (
    AsyncBatchedInferencePipeline,
    BatchedInferencePipeline,
    WhisperModel,
)
from faster_whisper.utils import available_models, download_model, format_timestamp
from faster_whisper.version import __version__

@@ -8,6 +12,7 @@
    "decode_audio",
    "WhisperModel",
    "BatchedInferencePipeline",
    "AsyncBatchedInferencePipeline",
    "download_model",
    "format_timestamp",
    "__version__",
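
With `AsyncBatchedInferencePipeline` added to `__all__`, the new pipeline becomes part of the package's public API and can be imported from the top level. A minimal sketch of that usage, assuming only the export shown in the diff above:

```python
# Assumes the export added in this diff; the class resolves via the package root.
from faster_whisper import AsyncBatchedInferencePipeline, WhisperModel

print(AsyncBatchedInferencePipeline)  # available alongside the existing public names
```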