Add Wasm fallback while running llms #97

Open
@sauravpanda

Description

Currently we use MLC, which requires WebGPU, and we also force WebGPU in TTS and Whisper. When WebGPU is unavailable, we could fall back to ONNX models running in WASM device mode instead.
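A minimal sketch of what the fallback selection could look like. The helper names (`hasWebGPU`, `pickDevice`) are illustrative, not existing code in the repo; the assumption is that WebGPU support can be detected via `navigator.gpu` and that the ONNX path accepts a `'wasm'` device string.

```javascript
// Detect WebGPU support; navigator.gpu is undefined in browsers without it.
// (Hypothetical helper — not part of the current codebase.)
function hasWebGPU(nav = globalThis.navigator) {
  return typeof nav !== 'undefined' && nav != null && 'gpu' in nav && nav.gpu != null;
}

// Choose the execution backend: WebGPU when available, WASM otherwise.
function pickDevice(webgpuAvailable = hasWebGPU()) {
  return webgpuAvailable ? 'webgpu' : 'wasm';
}
```

The same check could gate model selection too, e.g. loading the MLC build when `pickDevice()` returns `'webgpu'` and the ONNX build otherwise, so TTS and Whisper no longer hard-fail on browsers without WebGPU.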

Metadata

Assignees

Labels

No labels

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests