Why doesn't the IBasicBlock in the arcface_torch backbone include an attention module, while the arcface_mxnet backbone does?
For reference, the two backbones are defined in the following files:
- arcface_mxnet: "recognition/arcface_mxnet/symbol/fresnet.py"
- arcface_torch: "recognition/arcface_torch/backbones/iresnet.py"
Could you explain why the arcface_torch model does not use an attention module? Was it left out because adding it degraded performance?
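For context, the attention used in the mxnet fresnet.py variants is a squeeze-and-excitation (SE) style channel attention. Below is a minimal PyTorch sketch of how such a block could be attached to a residual block like IBasicBlock. This is an illustration under my own assumptions (class names, reduction ratio, and placement are hypothetical), not the repo's actual code:

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # "Squeeze": global average pool to one value per channel.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # "Excitation": bottleneck MLP producing per-channel weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Rescale each channel of the input feature map.
        return x * weights


if __name__ == "__main__":
    block = SEBlock(channels=64)
    feat = torch.randn(2, 64, 7, 7)
    out = block(feat)
    print(out.shape)  # same shape as the input: (2, 64, 7, 7)
```

In the mxnet backbone this kind of module sits inside the residual unit, rescaling the branch output before the shortcut addition; the torch IBasicBlock simply omits that step.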