Hi there,
Thank you so much for your work on this project. It's truly impressive, and I'm excited to see the tools people will build on top of it. I can already imagine many integrating your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial.
To support this, could you help us add functionality to the current flow? The pipeline currently consists of 1) speech-to-text, 2) LLM, and 3) text-to-speech. I'd like to add a fourth step: either speech-to-viseme directly, or speech-to-text with `return_timestamps="word"`, followed by a manual mapping from words to phonemes and then from phonemes to visemes (see the sketch below).
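
For concreteness, here is a rough sketch of what that fourth step could look like. It is only a minimal example: it assumes the `transformers` ASR pipeline (Whisper supports word-level timestamps) plus the `g2p_en` package for grapheme-to-phoneme conversion, and the phoneme-to-viseme table below is purely illustrative, not any standard mapping.

```python
from transformers import pipeline
from g2p_en import G2p

# Word-level timestamps via the transformers ASR pipeline
# (Whisper models support return_timestamps="word").
asr = pipeline("automatic-speech-recognition", model="openai/whisper-base")
g2p = G2p()

# Illustrative ARPAbet-to-viseme table; NOT a standard. A real
# integration would use a full viseme set (e.g. the ~15 Oculus visemes).
PHONEME_TO_VISEME = {
    "P": "PP", "B": "PP", "M": "PP",
    "F": "FF", "V": "FF",
    "AA": "aa", "AE": "aa", "AH": "aa",
    "IY": "ih", "IH": "ih",
    "S": "SS", "Z": "SS",
}

def words_to_visemes(audio_path: str):
    """Return (word, start, end, visemes) for each recognized word."""
    result = asr(audio_path, return_timestamps="word")
    events = []
    for chunk in result["chunks"]:
        word = chunk["text"].strip()
        start, end = chunk["timestamp"]
        # g2p_en returns ARPAbet symbols with stress digits, e.g. "AA1",
        # so strip the digits before looking up the viseme.
        phonemes = [p.rstrip("012") for p in g2p(word) if p.strip()]
        visemes = [PHONEME_TO_VISEME.get(p, "sil") for p in phonemes]
        events.append((word, start, end, visemes))
    return events

print(words_to_visemes("sample.wav"))
```

The alternative route (a dedicated speech-to-viseme model) would skip the intermediate text entirely, but the timestamp-based approach above has the advantage of reusing the existing speech-to-text step.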
Best regards,
Fabio