
Multi-scale testing problem #53

Open

@XinzheGeng

Description

Hello author, after fine-tuning ONE-PEACE-Adapter on the semantic segmentation dataset LoveDA, I ran multi-scale testing and hit a positional-embedding size mismatch. How should I resolve this? The full error is as follows:
(onepeace) root@32e9cc11dfcb:/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg# sh test.sh
2024-05-13 12:17:20,330 - mmseg - INFO - Multi-processing start method is None
2024-05-13 12:17:20,330 - mmseg - INFO - OpenCV num_threads is `128`
2024-05-13 12:17:20,363 - mmseg - INFO - Loaded 1796 images
/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1646755897462/work/aten/src/ATen/native/TensorShape.cpp:2228.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/losses/cross_entropy_loss.py:231: UserWarning: Default avg_non_ignore is False, if you would like to ignore the certain label and average loss over non-ignore labels, which is the same with PyTorch official cross_entropy, set ``avg_non_ignore=True``.
warnings.warn(
load checkpoint from local path: /mnt/gengxz/ckps/onepeace/mask2former_onepeace_adapter_g_512_80k_loveda_ss/best_mIoU_iter_12000.pth
/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/test.py:266: UserWarning: SyncBN is only supported with DDP. To be compatible with DP, we convert SyncBN to BN. Please use dist_train.sh which can avoid this error.
warnings.warn(
[ ] 0/1796, elapsed: 0s, ETA:torch.Size([1, 257, 1536])
torch.Size([1, 1025, 1536])
Traceback (most recent call last):
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/test.py", line 327, in
main()
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/test.py", line 275, in main
results = single_gpu_test(
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/mmseg/apis/test.py", line 91, in single_gpu_test
result = model(return_loss=False, **data)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/mmcv/parallel/data_parallel.py", line 50, in forward
return super().forward(*inputs, **kwargs)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
return self.module(*inputs[0], **kwargs[0])
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/mmcv/runner/fp16_utils.py", line 110, in new_func
return old_func(*args, **kwargs)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/mmseg/models/segmentors/base.py", line 110, in forward
return self.forward_test(img, img_metas, **kwargs)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/mmseg/models/segmentors/base.py", line 94, in forward_test
return self.aug_test(imgs, img_metas, **kwargs)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/segmentors/encoder_decoder_mask2former.py", line 277, in aug_test
seg_logit = self.inference(imgs[0], img_metas[0], rescale)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/segmentors/encoder_decoder_mask2former.py", line 241, in inference
seg_logit = self.slide_inference(img, img_meta, rescale)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/segmentors/encoder_decoder_mask2former.py", line 181, in slide_inference
crop_seg_logit = self.encode_decode(crop_img, img_meta)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/segmentors/encoder_decoder_mask2former.py", line 74, in encode_decode
x = self.extract_feat(img)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/segmentors/encoder_decoder_mask2former.py", line 66, in extract_feat
x = self.backbone(img)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/backbones/onepeace_adapter.py", line 101, in forward
x, self_attn_bias, H, W = self.image_adapter(x)
File "/root/anaconda3/envs/onepeace/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/gengxz/projects/ONE-PEACE_cuda113/one_peace_vision/seg/mmseg_custom/models/backbones/onepeace.py", line 181, in forward
x += pos_embed.unsqueeze(0)
RuntimeError: The size of tensor a (257) must match the size of tensor b (1025) at non-singleton dimension 1
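
The failure at `x += pos_embed.unsqueeze(0)` happens because the positional embedding was built for a fixed patch grid (1025 tokens = 32×32 patches plus one class token, matching the training crop), while the rescaled multi-scale crops yield a different token count (257 = 16×16 patches plus one). A common workaround for this kind of mismatch is to interpolate the patch portion of the positional embedding to the runtime grid before adding it. The sketch below only illustrates that idea; `resize_pos_embed` and its assumed (1 + H*W, C) layout with a leading class token are hypothetical, not the actual ONE-PEACE code, and the real fix would go where `onepeace.py` adds `pos_embed` in `forward`.

import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, new_hw):
    # pos_embed: (1 + H*W, C) with a leading class token (assumed layout).
    cls_pos, patch_pos = pos_embed[:1], pos_embed[1:]
    old_side = int(patch_pos.shape[0] ** 0.5)        # e.g. 32 for 1024 patch tokens
    dim = patch_pos.shape[1]
    grid = patch_pos.reshape(1, old_side, old_side, dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=new_hw, mode="bicubic", align_corners=False)
    patch_pos = grid.permute(0, 2, 3, 1).reshape(-1, dim)
    return torch.cat([cls_pos, patch_pos], dim=0)

# Example: an embedding trained for a 32x32 grid (1025 tokens) resized to the
# 16x16 grid (257 tokens) that a smaller multi-scale crop produces.
pos_embed = torch.randn(1025, 1536)
print(resize_pos_embed(pos_embed, (16, 16)).shape)   # torch.Size([257, 1536])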


The test command is as follows:
CUDA_VISIBLE_DEVICES=0 \
python test.py \
  configs/loveda/mask2former_onepeace_adapter_g_512_80k_loveda_ss.py \
  /mnt/gengxz/ckps/onepeace/mask2former_onepeace_adapter_g_512_80k_loveda_ss/best_mIoU_iter_12000.pth \
  --format-only \
  --format_dir /mnt/gengxz/ckps/onepeace/mask2former_onepeace_adapter_g_512_80k_loveda_ss/loveda_test_ms \
  --aug-test
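
For reference, in mmseg-based repos the `--aug-test` flag usually switches the test pipeline to multi-scale plus flip augmentation, roughly like the illustrative config below; the rescaled inputs are what change the token count seen by the backbone. The scales, image size, and normalization values here are placeholders, not the actual LoveDA config shipped with this repo.

# Illustrative mmseg-style test pipeline with multi-scale + flip enabled.
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1024, 1024),
        img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=True,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]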
