Integrate 2D/3D avatars with a backend LLM server to provide real-time, intelligent responses to user queries through speech-based conversational interfaces.
Explore the "Get Started" guide to learn how to:
- set up the client and server machines,
- prepare and deploy:
  - MuseTalk, a real-time, high-quality lip-syncing model, and
  - FunASR Paraformer-large, a non-autoregressive end-to-end speech recognition model,
- create a production-ready Retrieval-Augmented Generation (RAG) pipeline.
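The core of the RAG step above is retrieving relevant context and prepending it to the user's query before the LLM call. A minimal sketch follows; the bag-of-words scorer, the sample corpus, and all function names are illustrative stand-ins (a production pipeline would use a real embedding model and a vector store, not this toy similarity):

```python
# Minimal RAG retrieval sketch. All names here are illustrative;
# the real pipeline would use a served embedding model and vector DB.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; stands in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the user query with retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge snippets standing in for indexed documents.
corpus = [
    "MuseTalk performs real-time lip syncing on avatar video.",
    "Paraformer-large transcribes speech without autoregression.",
    "The server streams LLM responses back to the client.",
]
print(build_prompt("How is speech transcribed?", corpus))
```

The augmented prompt is then sent to the backend LLM server, whose answer is voiced and lip-synced by the avatar.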
The article covers the most fundamental use cases of integrating interactive 2D/3D avatars, including text and video chat.
THIS PROJECT IS ARCHIVED
Intel will not provide or guarantee development of or support for this project, including, but not limited to, maintenance, bug fixes, new releases, or updates.
Patches to this project are no longer accepted by Intel.
If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project.