mlas/arm64: add NEON conv asm kernels and tune NCHWC kernel selection #27099
hariharans29 merged 11 commits into microsoft:main from
Conversation
Interesting contribution - thank you! A few questions -
Hi @aviralagrawal, thank you very much for your prompt feedback.
Compared to a direct GEMM implementation of pointwise convolution, the asm kernel computes the 1x1 convolution directly:
As usual there are trade-offs: direct GEMM is faster when the output count is small, because the asm kernel then drops to its single-output path, which has less ILP and cannot reuse filter loads. The same applies to non-unit strides and non-contiguous output regions, which is why the heuristics check stride width and height, and to very large K/M, where GEMM blocking can make better use of the caches than a fixed 4-output tile. This is best illustrated by extracting the pointwise convolutions from the MobileNet run: on average the asm implementation is 1.07x faster, with the significant speed-ups coming when the number of channels is high and K/M are small (in the image those are the H and W dimensions). In convolution-heavy networks the dominant convolutions are those with a large number of channels and low height and width, so we see visible performance improvements because the optimisations in this PR are weighted in that direction.
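To make the trade-off concrete, a pointwise (1x1) convolution is exactly a GEMM with M = output channels, K = input channels, N = spatial size, and the asm kernel evaluates it directly with a 4-output tile so each loaded filter value is reused four times. Below is a plain scalar reference for illustration only (NCHW layout, no tiling), not the actual kernel:

```cpp
#include <cstddef>

// Scalar reference for pointwise (1x1) convolution:
// output[k][i] = sum over c of filter[k][c] * input[c][i].
// With a small spatial size (N) or very large K/M, a blocked GEMM wins;
// the asm kernel wins when its 4-output tile can amortize filter loads.
void PointwiseConvRef(const float* input, const float* filter, float* output,
                      size_t channels, size_t out_channels, size_t spatial) {
    for (size_t k = 0; k < out_channels; ++k) {
        for (size_t i = 0; i < spatial; ++i) {
            float acc = 0.0f;
            for (size_t c = 0; c < channels; ++c) {
                acc += filter[k * channels + c] * input[c * spatial + i];
            }
            output[k * spatial + i] = acc;
        }
    }
}
```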
For benchmarking we used the model from: https://github.com/onnx/models/blob/main/validated/vision/classification/mobilenet/model/mobilenetv2-7.onnx
Running
Thanks @milpuz01 for the detailed description & comment! A couple questions from my side:
Hi @Rohanjames1997, thank you very much for your comments.
No particular reason. Mostly because the focus for this PR was on the MobileNet model, and lack of bandwidth. Thank you for sharing the model where
Yes, I think that is a great idea, and it would be interesting to hear from @hariharans29 too what other testing we should do to try to make these kernels the default. As you can see above, this change is not going to accelerate all possible pointwise convolutions, for example, but on average it shows improvements, so if we could agree on a set of performance targets we could use that to drive the decision. Also, thank you for your code review comments; I will address them in a separate commit.
Unfortunately, I don't have a comprehensive list of performance targets to be met to make the feature default. Since the performance testing may not include all possible Conv shapes, I would like to err on the side of caution and at least provide one release timeline heads-up to the users before considering making the feature default. I would also encourage you to open a discussion to solicit feedback from other ORT users on ARM if they see speed-ups for their models with this feature. It would provide greater confidence and a strong data point to turn it on by default. Thanks for this contribution, we will review it shortly!
Thanks @hariharans29. I agree with erring on the side of caution. If this PR goes through and it is in the main release, is it possible to add a note that we would like to make
Thanks @milpuz01. The PR should go through in main eventually, but I don't think it will go in 1.24.0 unfortunately, as the release branch is cut and the bar to take in new code at this point is critical bug fixes and urgent customer asks only. I will try to take this in for 1.24.1 when it happens, and sure, I will add a note about considering making it default in one of the future releases. Ultimately, as discussed in the comment #27099 (comment), I expect the NchwcFloatKernel needs optimizations before considering that.
Pull request overview
Adds new AArch64 NEON assembly micro-kernels for NCHW, depthwise NCHWc, and pointwise NCHWc convolution, integrates them into the MLAS build, and updates NCHWc kernel-selection heuristics to prefer the asm kernels in selected shapes.
Changes:
- Add new AArch64 `.S` convolution micro-kernels (NCHW, depthwise NCHWc, pointwise NCHWc) and wire them into the MLAS build (see the dispatch sketch after this list).
- Update ARM64 platform init and NCHWc execution heuristics to select asm kernels for pointwise (stride-1, larger tiles) and depthwise (wider outputs).
- Remove the old intrinsics wrapper for the NCHW float kernel in the NCHWc NEON source file.
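As a rough illustration of the wiring, MLAS keeps per-platform function pointers that init code points at the best kernel for the target. All names below are hypothetical stand-ins, not the identifiers in this diff:

```cpp
#include <cstddef>

// Hypothetical function-pointer slot, mirroring MLAS's per-platform dispatch.
using ConvKernelFn = void (*)(const float* input, const float* filter,
                              float* output, size_t output_count);

// Stand-in for the entry point implemented in an AArch64 .S file; stubbed
// here so the sketch is self-contained.
extern "C" void ConvPointwiseKernelNeonAsm(const float*, const float*,
                                           float*, size_t) { /* .S body */ }

struct PlatformSketch {
    ConvKernelFn ConvPointwiseFloatKernel = nullptr;
};

void InitArm64(PlatformSketch& platform) {
    // Platform init points the dispatch slot at the new asm micro-kernel,
    // replacing the previous intrinsics wrapper.
    platform.ConvPointwiseFloatKernel = ConvPointwiseKernelNeonAsm;
}
```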
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| cmake/onnxruntime_mlas.cmake | Adds new AArch64 asm sources to the ARM NEON NCHWc MLAS build setup. |
| onnxruntime/core/mlas/lib/snchwc.cpp | Adds ARM64 heuristics to prefer asm depthwise/pointwise kernels in “safe” cases. |
| onnxruntime/core/mlas/lib/sconv_nchwc_kernel_neon.cpp | Removes the old NCHW float kernel wrapper implementation from the NCHWc NEON source file. |
| onnxruntime/core/mlas/lib/platform.cpp | Switches ARM64 NCHW conv kernel default to asm; updates commentary around kernel choices. |
| onnxruntime/core/mlas/lib/mlasi.h | Declares new asm kernel entry points for ARM64 NEON NCHWc. |
| onnxruntime/core/mlas/lib/aarch64/SconvKernelNeon.S | Adds new NCHW convolution asm micro-kernel. |
| onnxruntime/core/mlas/lib/aarch64/SconvDepthwiseKernelNeon.S | Adds new depthwise NCHWc asm micro-kernel (fast/slow path for padding). |
| onnxruntime/core/mlas/lib/aarch64/SconvPointwiseKernelNeon.S | Adds new pointwise NCHWc asm micro-kernel (multi-output reuse). |
/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 4 pipeline(s).
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.
This change looks good to me. Thanks. Can you please address the remaining comments (Copilot + mine) so that it can be merged? FYI - I have called out the NCHWc layout support on ARM in the 1.24 release notes, so that the community can give it a try and share feedback/issues if any - https://github.com/microsoft/onnxruntime/releases. CC: @Rohanjames1997
Nice 🚀
Thanks for bringing this to my attention. I am not sure how the contributors list is generated myself. I'll pass along the information for folks to take a look. Meanwhile, I have added Rohan manually to the list; apologies. EDIT: Filed an issue for tracking: #27274
Pull request overview
Copilot reviewed 8 out of 8 changed files in this pull request and generated no new comments.
### Description
Initially the NCHWc code was built only on Mac CIs to keep the build path regression-free. There were some Linux-specific paths introduced in #26838, and there is more community interest in contributing to these code paths; see #27099. Hence, it makes sense to keep these code paths built and tested on Linux and Windows too.

### Motivation and Context
Improve CI quality with regards to ARM64 NCHWc builds.

CC: @Rohanjames1997
/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline
Can you please rebase with main?
Azure Pipelines successfully started running 4 pipeline(s).
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
### Description
Enables the file mapping of weights as well as the overall context bin. This feature is currently only enabled for ARM64 WIN devices.

### Motivation and Context
Currently, when reading the context bin, ORT allocates a large buffer on the heap. Assuming the same model is used, each ORT session will allocate a buffer for the context bin. This is incredibly wasteful when large models are used. Instead, WIN file mapping can be leveraged to map the context bin; then, every time a context needs to be created with the context bin, the pointer to the context bin can be retrieved and used instead of some pre-allocated buffer, thus making QNN EP more memory-efficient. In the case of multiple ORT sessions, the context bin will only be loaded once for all sessions, increasing memory efficiency and overall initialization performance. This is very useful regarding the use of LLMs going forward.

---------

Co-authored-by: quic_calvnguy <quic_calvnguy@quic_inc.com>
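For readers unfamiliar with the technique, below is a minimal sketch of the Win32 file-mapping pattern this commit describes; it is illustrative only, and the `MapContextBin` helper is a hypothetical name, not the actual QNN EP code:

```cpp
#include <windows.h>
#include <cstddef>

const void* MapContextBin(const wchar_t* path, size_t& size_out) {
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return nullptr;

    LARGE_INTEGER size{};
    GetFileSizeEx(file, &size);

    // A mapping object backed by the file; PAGE_READONLY since the context
    // bin is only read, never written.
    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    CloseHandle(file);  // the mapping keeps its own reference to the file
    if (mapping == nullptr) return nullptr;

    // Map the whole file. Pages are shared across sessions mapping the same
    // file, so the bin is loaded into physical memory only once.
    const void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    CloseHandle(mapping);  // the view keeps the mapping object alive

    size_out = static_cast<size_t>(size.QuadPart);
    return view;  // release later with UnmapViewOfFile(view)
}
```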
…ft#27151) Previously in `MatMulReadFnSource()` we used duplicated code to read data from the two inputs `a` and `b`. This patch implements another overload of `MatMulReadFnSource()` that reads data from only one input, to reduce duplicated code and get ready for further use.
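The shape of that refactor, sketched with a hypothetical signature (the real function lives in ORT's WebGPU matmul codegen and differs in detail):

```cpp
#include <string>

// Single-input overload: emits the read helper for one input (body elided).
std::string MatMulReadFnSource(const std::string& input) {
    return "fn read_" + input + "(row: u32, col: u32) -> f32 { /* ... */ }\n";
}

// Two-input overload: previously two duplicated bodies, now one helper
// called twice.
std::string MatMulReadFnSource(const std::string& a, const std::string& b) {
    return MatMulReadFnSource(a) + MatMulReadFnSource(b);
}
```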
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
…ck spill Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
Fix bad merge
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
2d05853 to bd38b0e
Signed-off-by: Milos Puzovic <milos.puzovic@arm.com>
Just rebased.
/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline
Azure Pipelines successfully started running 4 pipeline(s).

Overview
This PR adds ARM64 NEON assembly micro‑kernels for NCHW, depthwise, and pointwise convolution, wires them into the MLAS build, and adds shape‑based selection heuristics for NCHWC depthwise/pointwise to favor the asm kernels in safe cases (stride‑1 pointwise; wider depthwise outputs). The BF16 path is unchanged.
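A hedged sketch of the kind of shape gates these heuristics apply (function names and thresholds are illustrative placeholders, not the tuned values in snchwc.cpp):

```cpp
#include <cstddef>

// Prefer the asm pointwise kernel only for stride-1 convolutions with enough
// outputs to fill its 4-output tile; otherwise keep the GEMM-based path.
bool PreferAsmPointwise(size_t stride_h, size_t stride_w, size_t output_count) {
    return stride_h == 1 && stride_w == 1 && output_count >= 4;
}

// Prefer the asm depthwise kernel only for wider output rows, where the
// per-row setup cost is amortized.
bool PreferAsmDepthwise(size_t output_width) {
    return output_width >= 8;  // placeholder threshold
}
```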
Key changes
Performance
Numbers below are expressed as multipliers vs the non‑NCHWC baseline (same model and perf_test settings):
- Baseline (no `--enable_arm_neon_nchwc`)
- With `--enable_arm_neon_nchwc` (no asm additions/heuristics)
- With this PR (asm kernels + heuristics)
Testing
```bash
./build.sh --config Release --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --enable_pybind --build_wheel --enable_arm_neon_nchwc
OMP_NUM_THREADS=8 ./build/Linux/Release/onnxruntime_perf_test -I -m times -r 1000 --x 8 ~/mobilenetv2-7.onnx
```