This is the official repository for the paper "ProGait: A Multi-Purpose Video Dataset and Benchmark for Transfemoral Prosthesis Users" (ICCV'25).
- 2025/7/2: We have published our dataset on Hugging Face: Link to the dataset page.
- 2025/6/25: Our paper "ProGait: A Multi-Purpose Video Dataset and Benchmark for Transfemoral Prosthesis Users" was accepted by ICCV'25! 🎉 🎉 🎉
ProGait is a multi-purpose video dataset aimed at supporting multiple vision tasks for prosthesis users, including Video Object Segmentation, 2D Human Pose Estimation, and Gait Analysis. ProGait provides 412 video clips of four above-knee amputees testing multiple newly fitted prosthetic legs through walking trials, and depicts the presence, contours, poses, and gait patterns of human subjects with transfemoral prosthetic legs.
Example annotations:
- The raw videos and corresponding annotations are available HERE.
- Instructions for downloading can be found HERE. Using `huggingface-cli`, for example:

  ```shell
  huggingface-cli download ericyxy98/ProGait --repo-type dataset --local-dir path/to/dataset
  rm -r path/to/dataset/.cache
  ```
```
annotations
├── inside <------------------ Scenario: inside parallel bars
│   ├── *_annotations.xml <----- CVAT XML format
│   ├── *_keypoints.npy.gz <---- 2D pose keypoints as NumPy ndarrays
│   ├── *_masks.npy.gz <-------- Segmentation masks as NumPy ndarrays
│   └── *.txt <----------------- Textual descriptions
└── outside <------------------ Scenario: outside parallel bars
    └── ...
```
- The IDs are named in the format `<subject>_<prosthesis>_<trial>_<f(rontal)/s(agittal)>[_<additional round trips>]`. For example:
  - `1_3_2_f` refers to the frontal view of Subject 1 using their 3rd prosthesis during their 2nd walking trial;
  - `2_6_2_s_2` refers to the sagittal view of Subject 2 using their 6th prosthesis during their 2nd walking trial, specifically the 2nd additional round trip (the 3rd overall).
- Pose keypoints have dimensions `(num_frames, num_keypoints, 3)`, where the last dimension holds the x-coordinate, y-coordinate, and confidence score.
- Segmentation masks have dimensions `(num_frames, frame_height, frame_width, 1)`.
- NOTE: A single text description can apply to multiple video sequences within the same walking trial.
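The ID scheme above can be parsed mechanically. A minimal sketch (the function and field names are our own illustration, not part of the dataset tooling):

```python
def parse_progait_id(clip_id: str) -> dict:
    """Split a ProGait clip ID like '2_6_2_s_2' into its fields."""
    parts = clip_id.split("_")
    return {
        "subject": int(parts[0]),
        "prosthesis": int(parts[1]),
        "trial": int(parts[2]),
        "view": {"f": "frontal", "s": "sagittal"}[parts[3]],
        # Optional trailing field: index of the additional round trip.
        "extra_round_trip": int(parts[4]) if len(parts) > 4 else None,
    }

print(parse_progait_id("1_3_2_f"))
print(parse_progait_id("2_6_2_s_2"))
```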
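Since the `*_keypoints.npy.gz` and `*_masks.npy.gz` files are gzip-compressed `.npy` arrays, they can be read with the standard library's `gzip` plus NumPy. A minimal loading sketch (the example path is hypothetical, following the naming scheme above):

```python
import gzip

import numpy as np

def load_npy_gz(path):
    """Load a gzip-compressed .npy file (e.g. *_keypoints.npy.gz) as a NumPy ndarray."""
    with gzip.open(path, "rb") as f:
        return np.load(f)

# Hypothetical usage on a downloaded annotation file:
# keypoints = load_npy_gz("annotations/inside/1_3_2_f_keypoints.npy.gz")
# keypoints.shape -> (num_frames, num_keypoints, 3)
# keypoints[..., :2] are x/y pixel coordinates; keypoints[..., 2] is the confidence score.
```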
- Clone the repository:

  ```shell
  git clone https://github.com/pittisl/ProGait.git
  cd ProGait
  ```

- Set up the virtual environment:

  ```shell
  conda env create -f environment.yml
  conda activate progait
  ```
- Prepare the dataset
- Download the dataset. See above.
- Place the data files under `datasets/progait/`, so that the repository looks like:

  ```
  .
  ├── datasets
  │   └── progait
  │       ├── annotations
  │       │   └── ...
  │       ├── previews
  │       │   └── ...
  │       ├── videos
  │       │   └── ...
  │       └── metadata.jsonl
  ├── scripts
  │   └── ...
  ├── models
  │   └── ...
  ├── README.md
  └── ...
  ```
- Run the demo:

  ```shell
  python verify_data.py
  ```
- TBD
ProGait provides annotations for 3 different tasks:
- Bounding boxes and segmentation masks of the prosthesis user
- 23 pose keypoints of the target (17 for the body and 6 for the feet, following the COCO-WholeBody definition)
- Text descriptions of four key components:
- The general gait category
- The specific gait deviation
- Recommendations on how to adjust the prosthesis to correct the gait
- The reasons for these recommendations
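The 17 body and 6 foot keypoints follow the COCO-WholeBody convention. A sketch of the assumed ordering (standard COCO body joints followed by the COCO-WholeBody foot points), useful for indexing the keypoint arrays; verify against the CVAT XML annotations before relying on it:

```python
# Assumed ordering: 17 COCO body joints (indices 0-16), then the 6
# COCO-WholeBody foot points (indices 17-22). Not taken from the
# dataset files themselves -- check the *_annotations.xml to confirm.
PROGAIT_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
    "left_big_toe", "left_small_toe", "left_heel",
    "right_big_toe", "right_small_toe", "right_heel",
]
```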
This project is released under the MIT License.
We are aware that Orthocare Innovations PLLC also used "ProGait" as the name of their mobile app product. Our work and dataset are not affiliated with Orthocare Innovations PLLC, and are not associated with their ProGait app, Europa+ system, or any other product.
If you find the ProGait dataset useful for your project, please cite our paper as follows.
Xiangyu Yin, Boyuan Yang, Weichen Liu, Qiyao Xue, Abrar Alamri, Goeran Fiedler, Wei Gao, "ProGait: A Multi-Purpose Video Dataset and Benchmark for Transfemoral Prosthesis Users", ICCV, 2025.
BibTeX entry:
TBD
