
Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels

This is the official implementation of the ICLR 2025 paper "Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels". It includes the preprocessing pipeline for the CityWalkers dataset and the code release of the PedGen model.

Zhizheng Liu, Joe Lin, Wayne Wu, Bolei Zhou
University of California, Los Angeles
[Teaser figure]

Installation and Demo

Set up the repo:

git clone --recursive git@github.com:genforce/PedGen.git
cd PedGen
conda env create -f env.yaml -n pedgen 
conda activate pedgen
pip install -e .
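
As a quick sanity check after installation, you can confirm that the package imports cleanly. The package name pedgen is an assumption based on the editable install above; adjust if the repository uses a different name:

python -c "import pedgen; print('PedGen import OK')"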

Download the checkpoints from this link and place them in the ckpts folder:

  • pedgen_no_context.ckpt, PedGen model without context factors.
  • pedgen_with_context.ckpt, PedGen model with all context factors (scene, human, goal).
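
As a minimal sketch of how a checkpoint might be loaded for inference, assuming the model is a standard PyTorch Lightning module (the import path pedgen.model.PedGen is hypothetical; check the repository for the actual class):

# Minimal loading sketch. The import path below is an assumption,
# not necessarily the actual module layout of this repository.
from pedgen.model import PedGen  # hypothetical import path

# load_from_checkpoint is the standard Lightning API for restoring a module
model = PedGen.load_from_checkpoint("ckpts/pedgen_with_context.ckpt")
model.eval()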

Run the demo:

python scripts/demo.py

Feel free to try different context factors to generate diverse movements.
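
The demo's exact interface is defined in scripts/demo.py; purely as an illustration, conditioning on different context factors typically amounts to passing (or omitting) scene, human, and goal inputs at sampling time. Every name below is a hypothetical stand-in, not the demo's real API:

# Illustrative only: the keyword names (scene, human, goal) and the
# sample() method are hypothetical stand-ins for the demo's real API.
# "model" is the PedGen module loaded in the sketch above.
import torch

context = {
    "scene": torch.randn(1, 256),              # e.g. encoded scene features
    "human": torch.randn(1, 10),               # e.g. body shape parameters
    "goal": torch.tensor([[2.0, 0.0, 5.0]]),   # e.g. a target 3D waypoint
}
motions = model.sample(num_samples=4, **context)  # drop keys to ablate context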

Preprocessing CityWalkers

Please check preprocess.md for details.

Training & Evaluation

We use LightningCLI to train and evaluate our model. To train on CityWalkers and reproduce our results, run:

python scripts/main.py fit -c cfgs/pedgen_with_context.yaml --data.data_root $DATA_ROOT

where $DATA_ROOT is the root directory of the preprocessed CityWalkers dataset. Additional information about the CARLA evaluation can be found here.
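
Since scripts/main.py is a standard LightningCLI entry point, config values can be overridden from the command line, and evaluation follows the usual subcommand pattern. The examples below use standard LightningCLI syntax; the specific override keys and checkpoint path are illustrative:

# Override trainer settings from the command line (standard LightningCLI behavior)
python scripts/main.py fit -c cfgs/pedgen_with_context.yaml --data.data_root $DATA_ROOT --trainer.max_epochs 100

# Evaluate a trained checkpoint (checkpoint path is illustrative)
python scripts/main.py test -c cfgs/pedgen_with_context.yaml --data.data_root $DATA_ROOT --ckpt_path ckpts/pedgen_with_context.ckpt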

Acknowledgements

We would like to thank the following projects for inspiring our work and open-sourcing their implementations: WHAM, SLAHMR, MDM, HumanMAC, TRUMANS, ZoeDepth, SegFormer, SLOPER4D.

Contact

For any questions or discussions, please contact Zhizheng Liu.

Reference

If our work is helpful to your research, please cite the following:

@inproceedings{liu2025learning,
  title={Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels},
  author={Liu, Zhizheng and Lin, Joe and Wu, Wayne and Zhou, Bolei},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
