147 changes: 147 additions & 0 deletions samples/js/whisper_speech_recognition/README.md
@@ -0,0 +1,147 @@
# Whisper automatic speech recognition sample (JavaScript)

This example showcases inference of speech recognition Whisper models. The application deliberately has few configuration options, to encourage the reader to explore and modify the source code, for example by changing the inference device to GPU. The sample features `WhisperPipeline` and takes an audio file in WAV format as its input source. Audio decoding is performed by a custom helper in `wav_utils.js` (16-bit PCM, mono or stereo, at 16 kHz) to align numerical behavior with the C++ and Python sample paths.

## Download and convert the model and tokenizers

The `--upgrade-strategy eager` option is needed to ensure `optimum-intel` is upgraded to the latest version.

It's not required to install [../../export-requirements.txt](../../export-requirements.txt) for deployment if the model has already been exported.

```sh
pip install --upgrade-strategy eager -r <GENAI_ROOT_DIR>/samples/requirements.txt
optimum-cli export openvino --trust-remote-code --model openai/whisper-base whisper-base
```

## Prepare audio file

Prepare an audio file in WAV format with a 16 kHz sampling rate.

You can download an example audio file: https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/how_are_you_doing_today.wav

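For example, with `curl`:

```sh
curl -O https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/how_are_you_doing_today.wav
```
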
## Run

From the `samples/js` directory, install dependencies (if not already done):

```bash
npm install
```

If you use the master branch, you may need to [build openvino-genai-node from source](../../src/js/README.md#build-bindings) first.

Run the sample:

```bash
node whisper_speech_recognition/whisper_speech_recognition.js whisper-base how_are_you_doing_today.wav
```

An optional third argument selects the inference device (default: `CPU`):

```bash
node whisper_speech_recognition/whisper_speech_recognition.js whisper-base how_are_you_doing_today.wav GPU
```

Output:

```
How are you doing today?
timestamps: [0.00, 2.00] text: How are you doing today?
[0.00, 0.xx]:
[0.xx, 0.xx]: How
...
```

Refer to the [Supported Models](https://openvinotoolkit.github.io/openvino.genai/docs/supported-models/#speech-recognition-models-whisper-based) for more details.

# Whisper pipeline usage

```javascript
import { WhisperPipeline } from 'openvino-genai-node';
import { readFileSync } from 'node:fs';
import { decode } from 'node-wav';

const pipeline = await WhisperPipeline(modelDir, "CPU");
const rawSpeechBuffer = readFileSync(audioFilePath);
const rawSpeech = decode(rawSpeechBuffer).channelData[0];
const result = await pipeline.generate(rawSpeech);
console.log(result.texts[0]);
// How are you doing today?
```

### Transcription

Whisper pipeline predicts the language of the source audio automatically.

If the source audio language is known in advance, it can be specified in generation config:

```javascript
const generationConfig = { language: "<|en|>", task: "transcribe" };
const result = await pipeline.generate(rawSpeech, { generationConfig });
```

### Translation

By default, Whisper performs the task of speech transcription, where the source audio language is the same as the target text language. To perform speech translation, where the target text is in English, set the task to "translate":

```javascript
const generationConfig = { task: "translate" };
const result = await pipeline.generate(rawSpeech, { generationConfig });
```

### Timestamps prediction

The model can predict timestamps. For sentence-level timestamps, pass the `return_timestamps` argument:

```javascript
const generationConfig = { return_timestamps: true, language: "<|en|>", task: "transcribe" };
const result = await pipeline.generate(rawSpeech, { generationConfig });
for (const chunk of result.chunks ?? []) {
  console.log(`timestamps: [${chunk.startTs.toFixed(2)}, ${chunk.endTs.toFixed(2)}] text: ${chunk.text}`);
}
```

### Word-level timestamps

Pass `word_timestamps: true` both in the pipeline constructor properties and in the generation config:

```javascript
const pipeline = await WhisperPipeline(modelDir, "CPU", { word_timestamps: true });
const generationConfig = { return_timestamps: true, word_timestamps: true, language: "<|en|>", task: "transcribe" };
const result = await pipeline.generate(rawSpeech, { generationConfig });
for (const w of result.words ?? []) {
  console.log(`[${w.startTs.toFixed(2)}, ${w.endTs.toFixed(2)}]: ${w.word}`);
}
```

### Initial prompt and hotwords

The Whisper pipeline accepts `initial_prompt` and `hotwords` generation arguments:
* `initial_prompt`: initial prompt tokens passed as a previous transcription (after the `<|startofprev|>` token) to the first processing window
* `hotwords`: hotwords tokens passed as a previous transcription (after the `<|startofprev|>` token) to all processing windows

The Whisper model can use that context to better understand the speech and maintain a consistent writing style. However, prompts do not need to be genuine transcripts from prior audio segments. Such prompts can be used to steer the model to use particular spellings or styles:

```javascript
let result = await pipeline.generate(rawSpeech);
// He has gone and gone for good answered Paul Icrom who...

const generationConfig = { initial_prompt: "Polychrome" };
result = await pipeline.generate(rawSpeech, { generationConfig });
// He has gone and gone for good answered Polychrome who...
```
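
`hotwords` can be set the same way; a minimal sketch, assuming it takes a string like `initial_prompt` and is applied to every processing window:

```javascript
// Assumption: hotwords accepts a plain string, as initial_prompt does above.
const generationConfig = { hotwords: "Polychrome" };
const result = await pipeline.generate(rawSpeech, { generationConfig });
```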

### Troubleshooting

#### Empty or rubbish output

Ensure the input is a valid WAV file. The sample's `readAudio` helper accepts 16-bit PCM WAV at a 16 kHz sampling rate, mono or stereo, and downmixes stereo to mono before inference; it does not resample, so other sample rates are rejected with an error.

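A quick way to check a file is to run it through the helper and inspect the error it throws; a minimal sketch (assuming an ES module context, so top-level `await` is available):

```javascript
import { readAudio } from './wav_utils.js';

try {
  await readAudio('how_are_you_doing_today.wav');
  console.log('WAV file is valid for this sample.');
} catch (err) {
  console.error(err.message); // e.g. "WAV file must be 16 kHz, but got 44100."
}
```
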
For non-WAV sources (MP3, M4A, FLAC), or WAV files with a different sample rate or bit depth, convert to WAV first with your preferred tool, as shown below.

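For example, assuming `ffmpeg` is installed:

```bash
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```
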
#### NPU device

For NPU, pass `STATIC_PIPELINE: true` in the pipeline properties:

```javascript
const pipeline = await WhisperPipeline(modelDir, "NPU", { word_timestamps: true, STATIC_PIPELINE: true });
```
102 changes: 102 additions & 0 deletions samples/js/whisper_speech_recognition/wav_utils.js
@@ -0,0 +1,102 @@
// Copyright (C) 2023-2026 Intel Corporation
// SPDX-License-Identifier: Apache-2.0

import { readFile } from 'node:fs/promises';

function parseWavPcm16Mono(buffer) {
  if (buffer.length < 44) {
    throw new Error('Invalid WAV payload: file is too small.');
  }

  if (buffer.toString('ascii', 0, 4) !== 'RIFF' || buffer.toString('ascii', 8, 12) !== 'WAVE') {
    throw new Error('Invalid WAV payload: RIFF/WAVE header is missing.');
  }

  const view = new DataView(buffer.buffer, buffer.byteOffset, buffer.byteLength);
  let offset = 12;

  let audioFormat;
  let channels;
  let sampleRate;
  let bitsPerSample;
  let dataOffset;
  let dataSize;

  while (offset + 8 <= buffer.length) {
    const chunkId = buffer.toString('ascii', offset, offset + 4);
    const chunkSize = view.getUint32(offset + 4, true);
    const chunkDataOffset = offset + 8;

    if (chunkDataOffset + chunkSize > buffer.length) {
      throw new Error('Invalid WAV payload: malformed chunk size.');
    }

    if (chunkId === 'fmt ') {
      if (chunkSize < 16) {
        throw new Error('Invalid WAV payload: fmt chunk is too small.');
      }
      audioFormat = view.getUint16(chunkDataOffset, true);
      channels = view.getUint16(chunkDataOffset + 2, true);
      sampleRate = view.getUint32(chunkDataOffset + 4, true);
      bitsPerSample = view.getUint16(chunkDataOffset + 14, true);
    } else if (chunkId === 'data') {
      dataOffset = chunkDataOffset;
      dataSize = chunkSize;
    }

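    // RIFF chunks are word-aligned: a chunk with an odd size is followed by one padding byte.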
    offset = chunkDataOffset + chunkSize + (chunkSize % 2);
  }

  if (audioFormat !== 1) {
    throw new Error('Unsupported WAV format: only PCM is supported.');
  }

  if (channels !== 1 && channels !== 2) {
    throw new Error('WAV file must be mono or stereo.');
  }

  if (sampleRate !== 16000) {
    throw new Error(`WAV file must be 16 kHz, but got ${sampleRate}.`);
  }

  if (bitsPerSample !== 16) {
    throw new Error(`Unsupported WAV bit depth: ${bitsPerSample}. Only 16-bit PCM is supported.`);
  }

  if (dataOffset === undefined || dataSize === undefined) {
    throw new Error('Invalid WAV payload: missing data chunk.');
  }

  const bytesPerFrame = channels * 2;
  const frameCount = Math.floor(dataSize / bytesPerFrame);
  const mono = new Float32Array(frameCount);

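  // Normalize signed 16-bit samples to [-1, 1); stereo frames are downmixed by averaging channels.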
  for (let index = 0; index < frameCount; index++) {
    const frameOffset = dataOffset + index * bytesPerFrame;
    if (channels === 1) {
      const sample = view.getInt16(frameOffset, true);
      mono[index] = sample / 32768.0;
    } else {
      const left = view.getInt16(frameOffset, true);
      const right = view.getInt16(frameOffset + 2, true);
      mono[index] = (left + right) / 65536.0;
    }
  }

  return mono;
}

/**
 * Read a 16 kHz, 16-bit PCM WAV file and return its samples as a mono
 * Float32Array for the Whisper pipeline. Stereo input is downmixed; other
 * sample rates and bit depths are rejected (no resampling is performed).
 * @param {string} audioPath
 * @returns {Promise<Float32Array>}
 */
export async function readAudio(audioPath) {
  const wavBuffer = await readFile(audioPath);

  if (wavBuffer.length === 0) {
    throw new Error('Audio file is empty.');
  }

  return parseWavPcm16Mono(wavBuffer);
}
@@ -0,0 +1,85 @@
// Copyright (C) 2023-2026 Intel Corporation
// SPDX-License-Identifier: Apache-2.0

import { basename } from 'node:path';
import yargs from 'yargs/yargs';
import { hideBin } from 'yargs/helpers';
import { WhisperPipeline } from 'openvino-genai-node';
import { readAudio } from './wav_utils.js';

/**
* Parse CLI arguments, run Whisper inference and print transcription output.
* @returns {Promise<void>}
*/
async function main() {
  const argv = yargs(hideBin(process.argv))
    .scriptName(basename(process.argv[1]))
    .command(
      '$0 <model_dir> <audio_file> [device]',
      'Run Whisper speech recognition on an audio file',
      (yargsBuilder) =>
        yargsBuilder
          .positional('model_dir', {
            type: 'string',
            describe: 'Path to the converted Whisper model directory',
            demandOption: true,
          })
          .positional('audio_file', {
            type: 'string',
            describe: 'Path to the WAV audio file',
            demandOption: true,
          })
          .positional('device', {
            type: 'string',
            describe: 'Device to run the model on (e.g. CPU, GPU)',
            default: 'CPU',
          }),
    )
    .strict()
    .help()
    .parse();

  const modelDir = argv.model_dir;
  const wavFilePath = argv.audio_file;
  const device = argv.device;

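  // OpenVINO's CACHE_DIR property stores compiled model blobs on disk so that
  // subsequent runs on GPU and NPU start faster.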
  const properties = {};
  if (device === 'NPU' || device.startsWith('GPU')) {
    properties.CACHE_DIR = 'whisper_cache';
  }
  // Word timestamps require word_timestamps in the pipeline constructor.
  properties.word_timestamps = true;

  const pipeline = await WhisperPipeline(modelDir, device, properties);

  // Pass only the options to override; avoid spreading full getGenerationConfig()
  // (it can contain values that do not round-trip correctly, e.g. max_new_tokens).
  const generationConfig = {
    language: '<|en|>',
    task: 'transcribe',
    return_timestamps: true,
    word_timestamps: true,
  };

  const audioTensor = await readAudio(wavFilePath);
  const result = await pipeline.generate(audioTensor, { generationConfig });

  console.log(result.texts?.[0] ?? '');

  if (result.chunks?.length) {
    for (const chunk of result.chunks) {
      console.log(`timestamps: [${chunk.startTs.toFixed(2)}, ${chunk.endTs.toFixed(2)}] text: ${chunk.text}`);
    }
  }

  if (result.words?.length) {
    for (const word of result.words) {
      console.log(`[${word.startTs.toFixed(2)}, ${word.endTs.toFixed(2)}]: ${word.word}`);
    }
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
@@ -0,0 +1,14 @@
import CodeBlock from '@theme/CodeBlock';

<CodeBlock language="javascript" showLineNumbers>
{`import { WhisperPipeline } from 'openvino-genai-node';
import { readAudio } from './wav_utils.js';

const rawSpeech = await readAudio('sample.wav');

const pipeline = await WhisperPipeline(modelPath, "${props.device || 'CPU'}");
const generationConfig = { max_new_tokens: 100 };
const result = await pipeline.generate(rawSpeech, { generationConfig });
console.log(result.texts[0]);
`}
</CodeBlock>
@@ -1,5 +1,6 @@
import CodeExampleCPP from './_code_example_cpp.mdx';
import CodeExamplePython from './_code_example_python.mdx';
import CodeExampleJS from './_code_example_js.mdx';

## Run Model Using OpenVINO GenAI

@@ -32,6 +33,16 @@ It will automatically load the model, tokenizer, detokenizer and default generation config.
    </TabItem>
  </Tabs>
</TabItemCpp>
<TabItemJS>
  <Tabs groupId="device">
    <TabItem label="CPU" value="cpu">
      <CodeExampleJS device="CPU" />
    </TabItem>
    <TabItem label="GPU" value="gpu">
      <CodeExampleJS device="GPU" />
    </TabItem>
  </Tabs>
</TabItemJS>
</LanguageTabs>

:::tip