@@ -78,17 +78,17 @@
 <tr>
 <td style="width:35%; vertical-align:middle; padding-right: 20px;">
 <div class="image-container">
-<img src='publications/2026_RPL.gif' width="100%">
+<img src='publications/2026_PHP.gif' width="100%">
 </div>
 </td>
 <td style="width:65%; vertical-align:middle">
-<papertitle>RPL: Learning Robust Humanoid Perceptive Locomotion on Challenging Terrains</papertitle>
+<papertitle>Perceptive Humanoid Parkour: Chaining Dynamic Human Skills via Motion Matching</papertitle>
 <br>
-Yuanhang Zhang, Younggyo Seo, Juyue Chen, Yifu Yuan, Koushil Sreenath, Pieter Abbeel<sup>†</sup>, Carmelo Sferrazza<sup>†</sup>, Karen Liu<sup>†</sup>, Rocky Duan<sup>†</sup>, Guanya Shi<sup>†</sup>
+Zhen Wu<sup>*</sup>, Xiaoyu Huang<sup>*</sup>, Lujie Yang<sup>*</sup>, Yuanhang Zhang, Koushil Sreenath, Xi Chen, Pieter Abbeel<sup>†</sup>, Rocky Duan<sup>†</sup>, Angjoo Kanazawa<sup>†</sup>, Carmelo Sferrazza<sup>†</sup>, Guanya Shi<sup>†</sup>, C. Karen Liu<sup>†</sup>
 <br>
-<a href="https://arxiv.org/abs/2602.03002" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
-<a href="https://rpl-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
-<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: A single policy trained by RPL enables multi-directional robust humanoid locomotion over various challenging terrains.
+<a href="https://arxiv.org/abs/2602.15827" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://php-parkour.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: PHP enables humanoid robots to autonomously perform long-horizon, vision-based parkour across challenging obstacle courses.
 </td>
 </tr>
 </table>
@@ -99,18 +99,18 @@
 <tr>
 <td style="width:35%; vertical-align:middle; padding-right: 20px;">
 <div class="image-container">
-<img src='publications/2025_FastSAC.gif' width="100%">
+<img src='publications/2026_FPO.gif' width="100%">
 </div>
 </td>
 <td style="width:65%; vertical-align:middle">
-<papertitle>Learning Sim-to-Real Humanoid Locomotion in 15 Minutes</papertitle>
+<papertitle>Flow Policy Gradients for Robot Control</papertitle>
 <br>
-Younggyo Seo, Carmelo Sferrazza, Juyue Chen, Guanya Shi, Rocky Duan, Pieter Abbeel
+Brent Yi<sup>*</sup>, Hongsuk Choi<sup>*</sup>, Himanshu Gaurav Singh, Xiaoyu Huang, Takara E. Truong, Carmelo Sferrazza, Yi Ma, Rocky Duan<sup>†</sup>, Pieter Abbeel<sup>†</sup>, Guanya Shi<sup>†</sup>, Karen Liu<sup>†</sup>, Angjoo Kanazawa<sup>†</sup>
 <br>
-<a href="https://arxiv.org/abs/2512.01996" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
-<a href="https://younggyo.me/fastsac-humanoid/" target="_blank"><i class="fas fa-globe"></i> website</a>&nbsp;&nbsp;
-<a href="https://github.com/amazon-far/holosoma" target="_blank"><i class="fas fa-code"></i> code</a>
-<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: We provide a simple recipe with FastSAC and FastTD3 for rapid sim2real humanoid learning.
+<a href="https://arxiv.org/abs/2602.02481" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://hongsukchoi.github.io/fpo-control/" target="_blank"><i class="fas fa-globe"></i> website</a>&nbsp;&nbsp;
+<a href="https://github.com/amazon-far/fpo-control" target="_blank"><i class="fas fa-code"></i> code</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: A simple recipe for online RL with flow policies, validated in robot locomotion, humanoid motion tracking, and manipulation.
 </td>
 </tr>
 </table>
@@ -121,17 +121,17 @@
 <tr>
 <td style="width:35%; vertical-align:middle; padding-right: 20px;">
 <div class="image-container">
-<img src='publications/2025_DoorMan.gif' width="90%">
+<img src='publications/2026_RPL.gif' width="100%">
 </div>
 </td>
 <td style="width:65%; vertical-align:middle">
-<papertitle>Opening the Sim-to-Real Door for Humanoid Pixel-to-Action Policy Transfer</papertitle>
+<papertitle>RPL: Learning Robust Humanoid Perceptive Locomotion on Challenging Terrains</papertitle>
 <br>
-Haoru Xue<sup>*</sup>, Tairan He<sup>*</sup>, Zi Wang<sup>*</sup>, Qingwei Ben, Wenli Xiao, Zhengyi Luo, Xingye Da, Fernando Castañeda, Guanya Shi, Shankar Sastry, Linxi "Jim" Fan, Yuke Zhu
+Yuanhang Zhang, Younggyo Seo, Juyue Chen, Yifu Yuan, Koushil Sreenath, Pieter Abbeel<sup>†</sup>, Carmelo Sferrazza<sup>†</sup>, Karen Liu<sup>†</sup>, Rocky Duan<sup>†</sup>, Guanya Shi<sup>†</sup>
 <br>
-<a href="https://arxiv.org/abs/2512.01061" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
-<a href="https://doorman-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
-<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: DoorMan proposes a teacher-student-bootstrap framework for challenging humanoid loco-manipulation tasks such as door opening.
+<a href="https://arxiv.org/abs/2602.03002" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://rpl-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: A single policy trained by RPL enables multi-directional robust humanoid locomotion over various challenging terrains.
 </td>
 </tr>
 </table>
@@ -142,17 +142,18 @@
 <tr>
 <td style="width:35%; vertical-align:middle; padding-right: 20px;">
 <div class="image-container">
-<img src='publications/2025_VIRAL.gif' width="90%">
+<img src='publications/2025_FastSAC.gif' width="100%">
 </div>
 </td>
 <td style="width:65%; vertical-align:middle">
-<papertitle>VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation</papertitle>
+<papertitle>Learning Sim-to-Real Humanoid Locomotion in 15 Minutes</papertitle>
 <br>
-Tairan He<sup>*</sup>, Zi Wang<sup>*</sup>, Haoru Xue<sup>*</sup>, Qingwei Ben<sup>*</sup>, Zhengyi Luo, Wenli Xiao, Ye Yuan, Xingye Da, Fernando Castañeda, Shankar Sastry, Changliu Liu, Guanya Shi, Linxi Fan, Yuke Zhu
+Younggyo Seo<sup>*</sup>, Carmelo Sferrazza<sup>*</sup>, Juyue Chen, Guanya Shi, Rocky Duan, Pieter Abbeel
 <br>
-<a href="https://arxiv.org/abs/2511.15200" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
-<a href="https://viral-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
-<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: VIRAL investigates the scaling law of visual sim-to-real and finds a recipe to achieve zero-shot, robust, and continuous real-world deployment.
+<a href="https://arxiv.org/abs/2512.01996" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://younggyo.me/fastsac-humanoid/" target="_blank"><i class="fas fa-globe"></i> website</a>&nbsp;&nbsp;
+<a href="https://github.com/amazon-far/holosoma" target="_blank"><i class="fas fa-code"></i> code</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: We provide a simple recipe with FastSAC and FastTD3 for rapid sim2real humanoid learning.
 </td>
 </tr>
 </table>
@@ -255,6 +256,52 @@
 </tr>
 </table>
 
+<table width="880" border="0" align="center" cellspacing="0" cellpadding="0">
+<tr>
+<td style="width:35%; vertical-align:middle; padding-right: 20px;">
+<div class="image-container">
+<img src='publications/2025_DoorMan.gif' width="90%">
+</div>
+</td>
+<td style="width:65%; vertical-align:middle">
+<papertitle>Opening the Sim-to-Real Door for Humanoid Pixel-to-Action Policy Transfer</papertitle>
+<br>
+Haoru Xue<sup>*</sup>, Tairan He<sup>*</sup>, Zi Wang<sup>*</sup>, Qingwei Ben, Wenli Xiao, Zhengyi Luo, Xingye Da, Fernando Castañeda, Guanya Shi, Shankar Sastry, Linxi "Jim" Fan, Yuke Zhu
+<br>
+<em>IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR)</em>, 2026
+<br>
+<a href="https://arxiv.org/abs/2512.01061" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://doorman-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: DoorMan proposes a teacher-student-bootstrap framework for challenging humanoid loco-manipulation tasks such as door opening.
+</td>
+</tr>
+</table>
+
+<br>
+
+<table width="880" border="0" align="center" cellspacing="0" cellpadding="0">
+<tr>
+<td style="width:35%; vertical-align:middle; padding-right: 20px;">
+<div class="image-container">
+<img src='publications/2025_VIRAL.gif' width="90%">
+</div>
+</td>
+<td style="width:65%; vertical-align:middle">
+<papertitle>VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation</papertitle>
+<br>
+Tairan He<sup>*</sup>, Zi Wang<sup>*</sup>, Haoru Xue<sup>*</sup>, Qingwei Ben<sup>*</sup>, Zhengyi Luo, Wenli Xiao, Ye Yuan, Xingye Da, Fernando Castañeda, Shankar Sastry, Changliu Liu, Guanya Shi, Linxi Fan, Yuke Zhu
+<br>
+<em>IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR)</em>, 2026
+<br>
+<a href="https://arxiv.org/abs/2511.15200" target="_blank"><i class="far fa-file"></i> paper</a>&nbsp;&nbsp;
+<a href="https://viral-humanoid.github.io/" target="_blank"><i class="fas fa-globe"></i> website</a>
+<p style="margin-top: 5px"><i class="fas fa-comment-dots"></i> TL;DR: VIRAL investigates the scaling law of visual sim-to-real and finds a recipe to achieve zero-shot, robust, and continuous real-world deployment.
+</td>
+</tr>
+</table>
+
+<br>
+
 <table width="880" border="0" align="center" cellspacing="0" cellpadding="0">
 <tr style="background-color: var(--highlight-color)">
 <td style="width:35%; vertical-align:middle; padding-right: 20px;">