
When training on the BDD100K dataset, single-GPU training works fine, but multi-GPU training fails with assert (boxes1[:, 2:] >= boxes1[:, :2]).all(). What could be the cause? #36


Description

@damengyua

Here is the error:

Traceback (most recent call last):
  File "/data/jsj-faster/MeMOTR-main/main.py", line 124, in <module>
    main(config=merged_config)
  File "/data/jsj-faster/MeMOTR-main/main.py", line 107, in main
    train(config=config)
  File "/data/jsj-faster/MeMOTR-main/train_engine.py", line 129, in train
    train_one_epoch(
  File "/data/jsj-faster/MeMOTR-main/train_engine.py", line 214, in train_one_epoch
    previous_tracks, new_tracks, unmatched_dets = criterion.process_single_frame(
  File "/data/jsj-faster/MeMOTR-main/models/criterion.py", line 197, in process_single_frame
    matcher_res = self.matcher(outputs=detection_res, targets=untracked_gt_trackinstances, use_focal=True)
  File "/data/jsj-faster/anaconda3/envs/memotr/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/jsj-faster/MeMOTR-main/models/matcher.py", line 117, in forward
    cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox),
  File "/data/jsj-faster/MeMOTR-main/utils/box_ops.py", line 91, in generalized_box_iou
    assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
AssertionError
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 55800 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 55801 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 55802 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 55803 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 55804 closing signal SIGTERM
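For context, the assertion in utils/box_ops.py only checks that every box still satisfies x2 >= x1 and y2 >= y1 after the cxcywh-to-xyxy conversion, so it fires as soon as a prediction contains a NaN or a negative width/height. Below is a minimal sketch (the helper report_bad_boxes and the standalone box_cxcywh_to_xyxy copy are my own, not MeMOTR code) of a check that could be dropped in right before the matcher call to see which boxes are degenerate:

```python
# Minimal sketch for locating boxes that would trip the GIoU assertion.
# Assumes predicted boxes are in (cx, cy, w, h) format, as in the matcher.
import torch


def box_cxcywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    # (cx, cy, w, h) -> (x1, y1, x2, y2)
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h,
                        cx + 0.5 * w, cy + 0.5 * h], dim=-1)


def report_bad_boxes(out_bbox: torch.Tensor) -> None:
    """Print indices of boxes that would fail the assertion in generalized_box_iou."""
    xyxy = box_cxcywh_to_xyxy(out_bbox)
    nan_mask = torch.isnan(xyxy).any(dim=-1)             # NaN predictions
    neg_mask = (xyxy[:, 2:] < xyxy[:, :2]).any(dim=-1)   # negative width or height
    bad = (nan_mask | neg_mask).nonzero(as_tuple=True)[0]
    if bad.numel() > 0:
        print(f"{bad.numel()} degenerate boxes at indices {bad.tolist()}")
        print(out_bbox[bad])


if __name__ == "__main__":
    # Example: the second box has a negative width and is reported.
    boxes = torch.tensor([[0.5, 0.5, 0.2, 0.3],
                          [0.4, 0.4, -0.1, 0.2]])
    report_bad_boxes(boxes)
```

If the report shows NaNs that appear only in the multi-GPU run, that usually points to a diverging loss (for example from a learning rate that is too high for the larger effective batch size) rather than a data problem.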
