Commit 4e36141 by linzhengyu

Merge remote-tracking branch 'origin/main' into feature/text-input

2 parents: fa5c05a + 0740f73

File tree: 2 files changed (+52, -14 lines)

README.md

Lines changed: 26 additions & 7 deletions
@@ -34,6 +34,16 @@ Digital Life Project 2 (DLP3D) is an open-source real-time framework that brings
   </a>
 </div>
 
+## Content
+
+This organization contains the following key repositories:
+
+- **[dlp3d.ai](https://github.com/dlp3d-ai/dlp3d.ai): the main entry point, start here!**
+- [orchestrator](https://github.com/dlp3d-ai/orchestrator): coordinates and synchronizes all components.
+- [web_backend](https://github.com/dlp3d-ai/web_backend): manages the backend web services.
+- [speech2motion](https://github.com/dlp3d-ai/speech2motion): converts speech into body animation.
+- [audio2face](https://github.com/dlp3d-ai/audio2face): generates facial animation from audio.
+- [MotionDataViewer](https://github.com/dlp3d-ai/MotionDataViewer): visualizes and inspects motion data.
 
 ## Get Started
 
@@ -50,13 +60,22 @@ While DLP3D itself is distributed under the [MIT License](LICENSE), we remind us
 ## Citations
 Digital Life Project 2 (SIGGRAPH Asia 2025)
 ```
-@misc{dlp3d,
-  author = {Cai, Zhongang and Ren, Daxuan and Gao, Yang and Wei, Yukun and Zhou, Tongxi and Jang, Huimuk and Zeng, Haoyang and Lin, Zhengyu and Loy, Chen Change and Liu, Ziwei and Yang, Lei},
-  title = {Digital Life Project 2: Open-source Autonomous 3D Characters on the Web},
-  howpublished = {SIGGRAPH Asia 2025 Real-Time Live!},
-  year = {2025},
-  note = {Live demonstration, Hong Kong, China}
-  year={2025}
+@inproceedings{dlp3d,
+  author = {Cai, Zhongang and Ren, Daxuan and Gao, Yang and Wei, Yukun and Zhou, Tongxi and Lin, Zhengyu and Jang, Huimuk and Zeng, Haoyang and Loy, Chen Change and Liu, Ziwei and Yang, Lei},
+  title = {Digital Life Project 2: Open-source Autonomous 3D Characters on the Web},
+  booktitle = {SIGGRAPH Asia 2025 Real-Time Live!},
+  year = {2025},
+  pages = {3},
+  isbn = {9798400721359},
+  publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
+  url = {https://doi.org/10.1145/3757375.3774342},
+  doi = {10.1145/3757375.3774342},
+  abstract = {Digital Life Project 2 (DLP2) presents an open-source real-time framework that brings Large Language Models (LLMs) to life through expressive 3D avatars. Users converse naturally by voice, while characters respond on demand with unified audio, whole-body animation, and physics simulation directly in the browser. At its core are: (1) an agentic orchestration of large and small LLMs that governs character behavior, supported by a memory system tracking emotional states and evolving relationships to enable context-dependent reactions; (2) a hybrid real-time pipeline that segments long LLM responses, performs parallel motion retrieval and audio-motion synchronization, and streams efficiently through a custom Protocol Buffers structure for low-latency playback of voice, motion, and expression; and (3) robust mechanisms for user interruption handling, adaptive buffering, and fault tolerance. Characters are fully customizable in both appearance (3D models) and personality (character prompts) and readily adaptable to any LLM or text-to-speech (TTS) service. DLP2 demonstrates how LLMs can be embodied in responsive 3D characters, offering a practical blueprint for real-time, emotionally adaptive digital interactions on the web.},
+  articleno = {3},
+  numpages = {2},
+  location = {Hong Kong Convention and Exhibition Centre, Hong Kong, Hong Kong},
+  series = {SA '25}
 }
 ```
 Digital Life Project (CVPR 2024) [[Homepage]](https://digital-life-project.com/)
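For reference, the `@inproceedings` entry above can be used from a LaTeX document in the usual way; a minimal sketch, assuming the entry is saved in a file named `refs.bib` (hypothetical filename) and the standard `plain` bibliography style:

```latex
% Minimal document citing the dlp3d entry; refs.bib is an assumed
% filename holding the @inproceedings{dlp3d, ...} record above.
\documentclass{article}
\begin{document}
DLP2 was demonstrated at SIGGRAPH Asia 2025 Real-Time Live!~\cite{dlp3d}.
\bibliographystyle{plain}
\bibliography{refs}
\end{document}
```

Any style that understands `@inproceedings` (e.g. ACM's own `ACM-Reference-Format`) will pick up the `booktitle`, `doi`, and `series` fields that the old `@misc` form could not express.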

docs/README_CN.md

Lines changed: 26 additions & 7 deletions
@@ -34,6 +34,16 @@
   </a>
 </div>
 
+## Content
+
+This organization contains the following key repositories:
+
+- **[dlp3d.ai](https://github.com/dlp3d-ai/dlp3d.ai): the main entry point, start here!**
+- [orchestrator](https://github.com/dlp3d-ai/orchestrator): coordinates and synchronizes all components.
+- [web_backend](https://github.com/dlp3d-ai/web_backend): manages the backend web services.
+- [speech2motion](https://github.com/dlp3d-ai/speech2motion): generates body animation from speech.
+- [audio2face](https://github.com/dlp3d-ai/audio2face): generates facial animation from audio.
+- [MotionDataViewer](https://github.com/dlp3d-ai/MotionDataViewer): visualizes and inspects motion data.
 
 ## Quick Start
 
@@ -51,13 +61,22 @@ While DLP3D itself is distributed under the [MIT License](../LICENSE), we remind users
 ## Citations
 Digital Life Project 2 (SIGGRAPH Asia 2025)
 ```
-@misc{dlp3d,
-  author = {Cai, Zhongang and Ren, Daxuan and Gao, Yang and Wei, Yukun and Zhou, Tongxi and Jang, Huimuk and Zeng, Haoyang and Lin, Zhengyu and Loy, Chen Change and Liu, Ziwei and Yang, Lei},
-  title = {Digital Life Project 2: Open-source Autonomous 3D Characters on the Web},
-  howpublished = {SIGGRAPH Asia 2025 Real-Time Live!},
-  year = {2025},
-  note = {Live demonstration, Hong Kong, China}
-  year={2025}
+@inproceedings{dlp3d,
+  author = {Cai, Zhongang and Ren, Daxuan and Gao, Yang and Wei, Yukun and Zhou, Tongxi and Lin, Zhengyu and Jang, Huimuk and Zeng, Haoyang and Loy, Chen Change and Liu, Ziwei and Yang, Lei},
+  title = {Digital Life Project 2: Open-source Autonomous 3D Characters on the Web},
+  booktitle = {SIGGRAPH Asia 2025 Real-Time Live!},
+  year = {2025},
+  pages = {3},
+  isbn = {9798400721359},
+  publisher = {Association for Computing Machinery},
+  address = {New York, NY, USA},
+  url = {https://doi.org/10.1145/3757375.3774342},
+  doi = {10.1145/3757375.3774342},
+  abstract = {Digital Life Project 2 (DLP2) presents an open-source real-time framework that brings Large Language Models (LLMs) to life through expressive 3D avatars. Users converse naturally by voice, while characters respond on demand with unified audio, whole-body animation, and physics simulation directly in the browser. At its core are: (1) an agentic orchestration of large and small LLMs that governs character behavior, supported by a memory system tracking emotional states and evolving relationships to enable context-dependent reactions; (2) a hybrid real-time pipeline that segments long LLM responses, performs parallel motion retrieval and audio-motion synchronization, and streams efficiently through a custom Protocol Buffers structure for low-latency playback of voice, motion, and expression; and (3) robust mechanisms for user interruption handling, adaptive buffering, and fault tolerance. Characters are fully customizable in both appearance (3D models) and personality (character prompts) and readily adaptable to any LLM or text-to-speech (TTS) service. DLP2 demonstrates how LLMs can be embodied in responsive 3D characters, offering a practical blueprint for real-time, emotionally adaptive digital interactions on the web.},
+  articleno = {3},
+  numpages = {2},
+  location = {Hong Kong Convention and Exhibition Centre, Hong Kong, Hong Kong},
+  series = {SA '25}
 }
 ```
 Digital Life Project (CVPR 2024) [Homepage](https://digital-life-project.com/)
