This repository was archived by the owner on Sep 18, 2024. It is now read-only.

Runtime Error when running AutoCompress with Level Pruner #3946

Open
@crawlingcub

Description


Describe the issue:

Running the AutoCompress pruner with the level-pruner sub-algorithm causes the error below on the VGG16 model with the ImageNet dataset. The error does not occur when using a different sub-algorithm, but it does occur with ResNet and other models.

Config used: [{'sparsity': 0.8154490441762983, 'op_types': ['Conv2d']}, {'sparsity': 0.2977677745735964, 'op_types': ['Linear']}]

Other arguments:
autocompress(model, config_list, trainer, evaluator, dummy_input, num_iterations=1, optimize_mode='maximize', base_algo='level', cool_down_rate=0.9, admm_num_iterations=2, admm_epochs_per_iteration=1)

Error Log:

Traceback (most recent call last):
....
    pruner.compress()
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/algorithms/compression/pytorch/pruning/auto_compress_pruner.py", line 212, in compress
    m_speedup.speedup_model()
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/compressor.py", line 183, in speedup_model
    self.infer_modules_masks()
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/compressor.py", line 140, in infer_modules_masks
    self.infer_module_mask(module_name, None, mask=mask)
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/compressor.py", line 86, in infer_module_mask
    input_cmask, output_cmask = infer_from_mask[m_type](module_masks, mask)
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/infer_shape.py", line 232, in <lambda>
    'Conv2d': lambda module_masks, mask: conv2d_mask(module_masks, mask),
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/infer_shape.py", line 909, in conv2d_mask
    mask, dim=conv_prune_dim)
  File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/nni/compression/pytorch/speedup/infer_shape.py", line 902, in convert_to_coarse_mask
    assert torch.all(torch.eq(index, bias_index)), \
RuntimeError: The size of tensor a (497) must match the size of tensor b (512) at non-singleton dimension 0
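For context, the failing assertion in `infer_shape.py` compares the set of surviving channels derived from the weight mask against the set derived from the bias mask. The sketch below is a simplified, dependency-free illustration (the channel count, kernel size, and survival rule are assumptions, not NNI's exact implementation) of why a fine-grained level mask can produce index sets of different lengths, which is consistent with the 497-vs-512 mismatch in the traceback:

```python
import random

random.seed(0)
OUT_CHANNELS, ELEMS = 512, 9  # hypothetical Conv2d: 512 filters, 3x3 kernels
SPARSITY = 0.8

# Level (fine-grained) pruning zeroes individual weights, not whole filters.
weight_mask = [[0 if random.random() < SPARSITY else 1 for _ in range(ELEMS)]
               for _ in range(OUT_CHANNELS)]
bias_mask = [1] * OUT_CHANNELS  # the bias is left untouched

# Coarse conversion (simplified): a channel survives if any of its weights
# survive; the bias index keeps every channel.
weight_index = [c for c, row in enumerate(weight_mask) if any(row)]
bias_index = [c for c, b in enumerate(bias_mask) if b]

# With element-wise sparsity, some filters are fully zeroed by chance while
# the bias mask keeps all channels, so the two index sets differ in length.
print(len(weight_index), len(bias_index))
```

Under element-wise sparsity the two index tensors end up with different lengths, so `torch.eq(index, bias_index)` raises the size-mismatch `RuntimeError` before the assertion can even evaluate. Channel-level sub-algorithms prune whole filters (and the matching bias entries), which keeps the two index sets aligned.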

Environment:

  • NNI version: 2.3
  • Training service (local|remote|pai|aml|etc): local
  • Client OS: Ubuntu 18.04
  • Python version: 3.7
  • PyTorch/TensorFlow version: 1.8.1
  • Is conda/virtualenv/venv used?: yes
  • Is running in Docker?: no

How to reproduce it?:
Running any model with AutoCompress and the level-pruning sub-algorithm leads to this error.

Is this due to a bug? I can provide the INFO logs if needed. Thanks!
