Description
Hi~ I have tested your pre-trained Bi-Real Net (18-layer) on a Huawei Mate 30 Pro and got an inference time of 20 ms, which was nice. However, when I built dabnn for the Hi3516dv300 (armv7, with NEON enabled) and ran the same model with net_test, I got an inference time of about 2000 ms. Could you please help me with this bad performance?
To reproduce this result:
1. Build dabnn with the corresponding toolchain file. Here are my CMAKE_CXX_FLAGS:
SET(CMAKE_CXX_FLAGS " -mfloat-abi=softfp -mfpu=neon-vfpv4 -mcpu=cortex-a7 ${CMAKE_CXX_FLAGS}" )
2. Run the generated net_test on the Hi3516dv300. Here is what I got:
/mnt/output/tianlin/dabnn_test/bin # ./net_test
Running main() from /home/tianlin/dabnn/third_party/googletest/googletest/src/gtest_main.cc
[==========] Running 4 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 4 tests from net
[ RUN ] net.bireal18imagenet_comparison
[ OK ] net.bireal18imagenet_comparison (6526 ms)
[ RUN ] net.bireal18imagenet
[INFO] bireal18imagenet's latency is 2219.43 ms
[ OK ] net.bireal18imagenet (2323 ms)
[ RUN ] net.bireal18imagenetstem_comparison
[ OK ] net.bireal18imagenetstem_comparison (6047 ms)
[ RUN ] net.bireal18imagenetstem
[INFO] bireal18imagenetstem's latency is 2045.26 ms
[ OK ] net.bireal18imagenetstem (2232 ms)
[----------] 4 tests from net (17129 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test suite ran. (17130 ms total)
[ PASSED ] 4 tests.
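For reference, a minimal CMake toolchain file carrying the flags from step 1 might look like the sketch below. The cross-compiler names are generic placeholders, not a copy of my actual file; the Hi3516dv300 SDK ships its own prefixed compilers that should be substituted in.

```cmake
# Sketch of a CMake toolchain file for an armv7 Linux target with NEON.
# The compiler names below are placeholders; point them at your SDK's
# cross-compilers (e.g. the ones bundled with the HiSilicon SDK).
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_C_COMPILER   arm-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabi-g++)

# Target the Cortex-A7 in the Hi3516dv300 and enable NEON (VFPv4).
set(CMAKE_CXX_FLAGS "-mfloat-abi=softfp -mfpu=neon-vfpv4 -mcpu=cortex-a7 ${CMAKE_CXX_FLAGS}")
```

It would then be passed to CMake via -DCMAKE_TOOLCHAIN_FILE=<path> when configuring the dabnn build.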