torchpack dist-run -np 32 -H $server1:8,$server2:8,$server3:8,$server4:8 \
python train.py configs/netaug.yaml \
--data_provider "{/mnt/dolphinfs/hdd_pool/docker/user/hadoop-basecv/maxin/data/ILSVRC2012, image_size:176, base_batch_size:64}" \
--run_config "{base_lr:0.0125}" \
--model "{name:mcunet}" \
--netaug "{aug_expand_list:[1.0,1.6,2.2,2.8], aug_width_mult_list:[1.0,1.6,2.2]}" \
--path runs/mcunet_imagenet
Is torchpack used for multi-machine training? And how can I adapt the command above for single-machine, multi-GPU training? Could you please give an example? Thanks~
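For what it's worth, a single-machine run can probably be launched by dropping the `-H` host list and setting `-np` to the number of local GPUs. A sketch, assuming 8 GPUs on one machine and the same config files as above (not verified against this repo):

```shell
# Hypothetical single-machine launch: omit -H (no remote hosts) and
# set -np to the local GPU count (8 here, one process per GPU).
torchpack dist-run -np 8 \
    python train.py configs/netaug.yaml \
    --data_provider "{/mnt/dolphinfs/hdd_pool/docker/user/hadoop-basecv/maxin/data/ILSVRC2012, image_size:176, base_batch_size:64}" \
    --run_config "{base_lr:0.0125}" \
    --model "{name:mcunet}" \
    --netaug "{aug_expand_list:[1.0,1.6,2.2,2.8], aug_width_mult_list:[1.0,1.6,2.2]}" \
    --path runs/mcunet_imagenet
```

Note that going from 32 processes to 8 shrinks the effective global batch size (base_batch_size appears to be per-GPU), so base_lr may need to be scaled down accordingly if the original learning rate was tuned for the 4-node setup.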