Optimize BatchNormalization by avoiding tensor slicing #661


Merged: 1 commit into main on Apr 12, 2025

Conversation

robertknight (Owner)

Since the tensor is contiguous and we only need the data for each chunk, we can replace N * C `slice_mut` calls with a much cheaper `chunks_mut` iterator.

This made BatchNormalization ~10% faster on a MobileNet v4 model where there are layers with many channels but a relatively small number of elements per channel.
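A minimal sketch of the pattern, assuming an NCHW tensor flattened into a plain `&mut [f32]` rather than rten's tensor types; the function name and signature are hypothetical, not the actual implementation in this PR:

```rust
// Sketch only: batch normalization over a contiguous NCHW buffer.
// `data` holds n * c * spatial elements laid out contiguously.
fn batch_norm_contiguous(
    data: &mut [f32],
    n: usize,
    c: usize,
    spatial: usize, // elements per channel, e.g. H * W
    scale: &[f32],
    bias: &[f32],
    mean: &[f32],
    var: &[f32],
    epsilon: f32,
) {
    assert_eq!(data.len(), n * c * spatial);

    // A single `chunks_mut` iterator visits every (batch, channel) chunk in
    // order, instead of performing N * C separate slicing calls.
    for (i, chunk) in data.chunks_mut(spatial).enumerate() {
        let ch = i % c; // channel index of this chunk
        let inv_std = 1.0 / (var[ch] + epsilon).sqrt();
        for x in chunk {
            *x = (*x - mean[ch]) * inv_std * scale[ch] + bias[ch];
        }
    }
}
```

Because the data is contiguous, each chunk is just a fixed-size window into the buffer, so the per-chunk cost is a pointer offset rather than the index and stride bookkeeping a general slicing call has to do for every (batch, channel) pair.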

robertknight merged commit 0fb9130 into main on Apr 12, 2025 (2 checks passed).
robertknight deleted the batch-norm-opt branch on April 12, 2025 at 06:58.