Hi,
I was testing your `example_browserclient` tutorial and it works well on an 8GB VRAM system; memory usage is around 3.5 to 4GB when loading the `tiny.en` and `large-v3` models together, as your setup requires.
I'm trying to run it on a more resource-constrained system with 2GB of VRAM, so I had some questions.
- Does your library support the `turbo` model? From what I've heard, it's faster than `large-v3` with only a minor reduction in accuracy.
- Is it possible to run the `example_browserclient` with only one model, `turbo`? Is this allowed, and would it cause accuracy issues or simply not work? I see that in your `AudioToTextRecorder` you pass in two models, `tiny.en` and `large-v2`.
- Do you plan to support Nvidia Parakeet? From what I understand, it was designed for streaming use cases, though I'm not sure whether streaming is the same as real-time.
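
For context, here is roughly what I'd like to try: collapsing the two-model setup into a single model by passing `turbo` for both slots. This is just a sketch, not tested code — the parameter names (`model`, `realtime_model_type`, `enable_realtime_transcription`) are my assumption based on the two-model configuration I saw in `AudioToTextRecorder`, and actually constructing the recorder needs the models downloaded and enough VRAM, so I only build the configuration here:

```python
# Hypothetical single-model setup: reuse "turbo" for both the main
# transcription model and the realtime preview model, instead of the
# tiny.en + large-v2 pair used in the example.
# Parameter names are assumed from the two-model configuration.
single_model_config = {
    "model": "turbo",                       # main (final) transcription model
    "realtime_model_type": "turbo",         # same model reused for realtime updates
    "enable_realtime_transcription": True,  # keep the streaming preview on
}

# Constructing the recorder would then look something like this
# (commented out since it requires the model weights and GPU memory):
# from RealtimeSTT import AudioToTextRecorder
# recorder = AudioToTextRecorder(**single_model_config)

print(single_model_config)
```

My hope is that loading one model for both roles keeps me under the 2GB VRAM budget — please correct me if the two slots cannot share a model.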