This repository was archived by the owner on Sep 18, 2024. It is now read-only.

after applying speedup_model, model inference got error  #4934

Open
@Jervint

Description


Describe the issue:
origin:
conv1(in_channels=32,out_channels=64)
conv2(in_channels=64,out_channels=32)
sparsity=0.5
after compress && speedup:
conv1(in_channels=32,out_channels=32)
conv2(in_channels=41,out_channels=16)

The error may be caused by `infer_mask.update_indirect_sparsity` when some module has a channel/group dependency.
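For reference, the shapes the speedup step should produce can be derived by hand: after pruning a conv's output channels at sparsity 0.5, the next conv's in_channels must shrink to exactly the number of surviving channels. The sketch below (a hypothetical helper, not NNI code) computes the expected post-speedup shapes for a simple conv chain:

```python
# Hypothetical sketch (not NNI's implementation): propagate channel counts
# through a chain of convs after output-channel pruning at a given sparsity.
def expected_shapes(convs, sparsity):
    """convs: list of (in_channels, out_channels) for a sequential chain.
    Returns the (in, out) shapes expected after mask propagation/speedup."""
    shapes = []
    prev_kept = convs[0][0]  # the chain's input channels are untouched
    for in_ch, out_ch in convs:
        kept = int(out_ch * (1 - sparsity))  # output channels that survive pruning
        shapes.append((prev_kept, kept))
        prev_kept = kept  # the successor must consume exactly these channels
    return shapes

# origin: conv1(32 -> 64), conv2(64 -> 32), sparsity 0.5
print(expected_shapes([(32, 64), (64, 32)], 0.5))
# -> [(32, 32), (32, 16)]
```

By this accounting, conv2 should end up as `(in_channels=32, out_channels=16)`; the reported `in_channels=41` does not match any valid propagation of the conv1 mask, which is consistent with the dependency-handling bug suspected above.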

Environment:

  • NNI version:
  • Training service (local|remote|pai|aml|etc):
  • Client OS:
  • Server OS (for remote mode only):
  • Python version:
  • PyTorch/TensorFlow version:
  • Is conda/virtualenv/venv used?:
  • Is running in Docker?:

Configuration:

  • Experiment config (remember to remove secrets!):
  • Search space:

Log message:

  • nnimanager.log:
  • dispatcher.log:
  • nnictl stdout and stderr:

How to reproduce it?:
