Hi, thank you for the great work on Epona!
I'm trying to understand the preprocessing pipeline for nuScenes data and have some questions about the JSON metadata files and frame rates.
1. How were the nuScenes JSON files generated?
The README mentions downloading pre-processed JSON files from HuggingFace (`meta_data_nusc`), but I couldn't find the corresponding preprocessing script in the repository (unlike nuPlan, which has `create_nuplan_json.py`). Could you share the script used to generate these JSON files, or explain the preprocessing steps?
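For reference, here is roughly what I would expect such a script to look like, by analogy with `create_nuplan_json.py`. This is only my own sketch using the nuscenes-devkit; the output field names are guesses on my part, not the actual `meta_data_nusc` schema:

```python
# Hypothetical sketch of a nuScenes metadata exporter, by analogy with
# create_nuplan_json.py. The output field names below are my guesses,
# NOT the actual meta_data_nusc schema.
import json
import os

from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-trainval', dataroot='/data/nuscenes', verbose=False)
os.makedirs('meta_data_nusc', exist_ok=True)

for scene in nusc.scene:
    frames = []
    # Start from the first keyframe's CAM_FRONT record and walk the
    # sample_data linked list, which includes non-keyframe sweeps (~12 Hz).
    sample = nusc.get('sample', scene['first_sample_token'])
    sd_token = sample['data']['CAM_FRONT']
    while sd_token:
        sd = nusc.get('sample_data', sd_token)
        pose = nusc.get('ego_pose', sd['ego_pose_token'])
        frames.append({
            'image_path': sd['filename'],        # relative to dataroot
            'timestamp': sd['timestamp'],        # microseconds
            'ego_translation': pose['translation'],
            'ego_rotation': pose['rotation'],    # quaternion (w, x, y, z)
        })
        sd_token = sd['next']                    # '' at the end of the scene
    with open(f"meta_data_nusc/{scene['name']}.json", 'w') as f:
        json.dump(frames, f)
```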
2. What is the actual frame rate in the nuScenes JSON files?
I noticed some inconsistencies in the codebase regarding `ori_fps`:
- `TrainDataset` in `dataset.py` uses `ori_fps = 10`
- Other classes like `TrainImgDataset` (in the unused `dataset_nusc.py`) use `ori_fps = 12  # 10 hz`
The nuScenes cameras nominally capture at ~12 Hz. Were the JSON files pre-downsampled to 10 Hz, or do they contain all ~12 Hz frames?
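If it helps, this is the quick check I was going to run against the raw data to measure the effective rate (my own sanity-check sketch with the nuscenes-devkit, not code from this repo):

```python
# Measure the effective CAM_FRONT frame rate of one scene by walking the
# sample_data linked list (sweeps included). A sanity-check sketch only.
import numpy as np
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-trainval', dataroot='/data/nuscenes', verbose=False)

scene = nusc.scene[0]
sample = nusc.get('sample', scene['first_sample_token'])
sd_token = sample['data']['CAM_FRONT']
timestamps = []
while sd_token:
    sd = nusc.get('sample_data', sd_token)
    timestamps.append(sd['timestamp'])  # microseconds
    sd_token = sd['next']

deltas = np.diff(timestamps)
print(f"median gap: {np.median(deltas) / 1e3:.1f} ms "
      f"(~{1e6 / np.median(deltas):.1f} Hz)")
```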
3. Why does nuPlan pretraining use 5 Hz while nuScenes fine-tuning uses 10 Hz?
I see in the configs:
- `dit_config_dcae_nuplan.py`: `downsample_fps = 5`
- `dit_config_dcae_nuscenes.py`: `downsample_fps = 10`
Is there a specific reason for this difference (e.g., dataset size, training efficiency)?
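For what it's worth, my assumption is that the loader subsamples by an integer stride, roughly like the following (my paraphrase, not the actual `dataset.py` logic). Note that a 12 Hz → 10 Hz conversion would not reduce to an integer stride, which is partly why question 2 matters:

```python
# My assumed relationship between ori_fps and downsample_fps (a paraphrase,
# not the actual dataset.py code). Integer ratios allow a fixed stride;
# 12 Hz -> 10 Hz does not, and would need timestamp-based frame selection.
def subsample(frames, ori_fps, downsample_fps):
    assert ori_fps % downsample_fps == 0, "non-integer stride; match by timestamp instead"
    stride = ori_fps // downsample_fps
    return frames[::stride]

# e.g. a 10 Hz source downsampled to 5 Hz keeps every 2nd frame
clip = subsample(list(range(20)), ori_fps=10, downsample_fps=5)
```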
Also, the paper mentions that the nuScenes model was trained from scratch, but the open-sourced weights appear to be fine-tuned from the nuPlan model. Which is correct?
I'm trying to prepare a custom dataset for training/fine-tuning and want to make sure I match the expected frame rate correctly.
Thanks in advance!