2HandedAfforder

Official repository for 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, published at ICCV 2025.

Best Paper Finalist @ Human to Robot (H2R) workshop at CoRL 2025

About

This repository contains the code and tools for learning precise, actionable bimanual affordances from human activity videos. Our framework extracts affordance data from video datasets and provides a VLM-based affordance prediction model that can identify task-specific object regions for both single-handed and coordinated two-handed manipulation tasks.
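As a rough illustration of the intended workflow, the sketch below shows how one might query such a predictor with an image and a task prompt to obtain per-hand affordance masks. The class name, method signature, and checkpoint path here are assumptions for illustration only, not this repository's actual API.

# Illustrative sketch only: `AffordancePredictor`, its `predict` signature,
# and the checkpoint path are hypothetical, not this repository's real API.
import numpy as np
from PIL import Image

class AffordancePredictor:
    """Stand-in for a wrapper around the released 2HandedAfforder weights."""
    def __init__(self, checkpoint_path: str):
        self.checkpoint_path = checkpoint_path  # real code would load weights here

    def predict(self, image: Image.Image, task: str) -> dict:
        # Returns one binary affordance mask per hand (H x W boolean arrays).
        h, w = image.height, image.width
        return {"left": np.zeros((h, w), dtype=bool),
                "right": np.zeros((h, w), dtype=bool)}

predictor = AffordancePredictor("checkpoints/2handedafforder.pt")  # hypothetical path
image = Image.new("RGB", (640, 480))  # placeholder; use a real video frame
masks = predictor.predict(image, task="open the jar with both hands")
print({hand: int(mask.sum()) for hand, mask in masks.items()})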

Release includes:

  • 2HANDS Dataset: Precise object affordance region segmentations with affordance class labels, extracted from human activity videos (see the illustrative sketch after this list)
  • 2HandedAfforder Model: Pretrained weights of our VLM-based affordance predictor for bimanual manipulation tasks
  • ActAffordance: A human-annotated benchmark for evaluating text-prompted bimanual affordance prediction
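To make the dataset contents concrete, the snippet below sketches what a single 2HANDS annotation might look like as a Python record. The field names and structure are assumptions for illustration; consult the project website for the released format.

# Illustrative sketch only: field names are assumptions, not the released schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class AffordanceAnnotation:
    frame_path: str        # source video frame
    task: str              # natural-language task prompt
    affordance_class: str  # e.g. "grasp", "hold", "pour" (hypothetical labels)
    hand: str              # "left", "right", or "both"
    mask: np.ndarray       # binary H x W segmentation of the object region

ann = AffordanceAnnotation(
    frame_path="frames/000123.jpg",
    task="pour water from the kettle",
    affordance_class="pour",
    hand="right",
    mask=np.zeros((480, 640), dtype=bool),
)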

Resources

For more information, including the paper, video, dataset, and detailed documentation, please visit:

Project Website: https://sites.google.com/view/2handedafforder

Citation

If you find this work useful, please cite:

@InProceedings{Heidinger_2025_ICCV,
  author = {Heidinger, Marvin and Jauhri, Snehal and Prasad, Vignesh and Chalvatzaki, Georgia},
  title = {2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2025},
  pages = {14743-14753}
}

Authors

Marvin Heidinger*, Snehal Jauhri*, Vignesh Prasad, and Georgia Chalvatzaki
PEARL Lab, TU Darmstadt, Germany

* Equal contribution

Acknowledgements

This project has received funding from the European Union's Horizon Europe programme under Grant Agreement no. 101120823.
