Unify use of OpenMP for HNSW threading models #724
base: branch-25.06
Conversation
Thanks, very welcome change, but I have a few questions to make sure it works as intended.
- I don't see any changes to CMakeLists. Is the benchmark executable already linked with openmp?
- Can we now remove cpp/bench/ann/src/common/thread_pool.hpp completely?
@@ -66,13 +66,13 @@ class hnsw_lib : public algo<T> {
   struct build_param {
     int m;
     int ef_construction;
-    int num_threads = omp_get_num_procs();
+    int num_threads = omp_get_max_threads();
What are the consequences of switching from omp_get_num_procs() to omp_get_max_threads() with respect to SMT/hyperthreading? Does hyperthreading make HNSW slower or faster?
Also, does OpenMP take into account whether the number of cores is limited by numactl?
Answering your questions one by one:
I don't see any changes to CMakeLists. Is the benchmark executable already linked with openmp?
All benchmark executables link to OpenMP already.
Can we now remove cpp/bench/ann/src/common/thread_pool.hpp completely?
No, unfortunately the FAISS wrappers still use it.
What are the consequences of switching from omp_get_num_procs() to omp_get_max_threads() with respect to SMT/hyperthreading?
omp_get_num_procs() only returns the number of physical cores. The difference in search times is very noticeable when all available hyperthreads are used rather than just the physical cores. It makes HNSW faster.
Also, does OpenMP take into account whether the number of cores is limited by numactl?
No, it does not. We would either have to write a custom implementation or use a thread-pool library that can account for this. I think it falls outside the scope of this PR and would be a general design-philosophy discussion we would need to have with the team, as we use OpenMP in quite a lot of places outside of HNSW.
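For reference, here is a minimal sketch (not part of this PR) that prints both OpenMP queries and shows one way a hypothetical helper could count only the CPUs the process is allowed to use, e.g. when restricted via numactl or taskset. The helper name is made up, and sched_getaffinity/CPU_COUNT are Linux/glibc-specific.

```cpp
// Minimal illustrative sketch; compile with: g++ -fopenmp affinity_check.cpp
// (glibc may require _GNU_SOURCE for CPU_COUNT; g++ defines it by default.)
#include <omp.h>
#include <sched.h>
#include <cstdio>

// Hypothetical helper: count the CPUs in the process affinity mask, which a
// custom thread-count default could use to respect numactl/taskset limits.
int affinity_limited_cpus()
{
  cpu_set_t set;
  CPU_ZERO(&set);
  if (sched_getaffinity(0, sizeof(set), &set) != 0) {
    return omp_get_max_threads();  // fall back to the OpenMP default on failure
  }
  return CPU_COUNT(&set);
}

int main()
{
  std::printf("omp_get_num_procs():   %d\n", omp_get_num_procs());
  std::printf("omp_get_max_threads(): %d\n", omp_get_max_threads());
  std::printf("affinity-limited CPUs: %d\n", affinity_limited_cpus());
  return 0;
}
```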
Thanks for the answers!
The difference in search times is very noticeable when all available hyperthreads are used rather than just the physical cores. It makes HNSW faster.
Could you please add this to the comment above where we set this default?
Can we now remove cpp/bench/ann/src/common/thread_pool.hpp completely?
No, unfortunately the FAISS wrappers still use it.
Would it be worth the effort to adjust FAISS wrappers to use OpenMP in this PR as well?
Would it be worth the effort to adjust FAISS wrappers to use OpenMP in this PR as well?
I believe @tarang-jain is reworking the FAISS wrappers, so we can do that as a follow-on to his PR.
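As a rough illustration of what that follow-on might look like, the sketch below shows the general pattern of replacing a hand-rolled thread pool with an OpenMP parallel loop over the query batch. All names and signatures here are hypothetical, not the actual cuvs_bench FAISS wrapper API.

```cpp
#include <omp.h>
#include <cstdint>

// Stand-in for the wrapped index's single-query search (assumed, not the real API).
void search_one(const float* query, int dim, int k, uint32_t* neighbors, float* distances)
{
  // ... call into the underlying index here ...
  (void)query; (void)dim; (void)k; (void)neighbors; (void)distances;
}

// Parallelize the query batch with OpenMP instead of dispatching to a custom thread pool.
void search_batch(const float* queries,
                  int dim,
                  int n_queries,
                  int k,
                  uint32_t* neighbors,
                  float* distances,
                  int num_threads)  // e.g. defaulted from omp_get_max_threads()
{
#pragma omp parallel for num_threads(num_threads)
  for (int i = 0; i < n_queries; i++) {
    const int64_t q = i;  // avoid 32-bit overflow in the offset arithmetic
    search_one(queries + q * dim, dim, k, neighbors + q * k, distances + q * k);
  }
}
```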
Thanks Divye, the PR looks good to me!
@@ -66,13 +67,13 @@ class hnsw_lib : public algo<T> {
   struct build_param {
     int m;
     int ef_construction;
-    int num_threads = omp_get_num_procs();
+    int num_threads = omp_get_max_threads();
Can you rename num_threads to something else (say build_num_threads or nthreads)? The problem arises when trying to data_export. The build dataframe columns are appended at the end of the search dataframe. We want this param to be appended as a separate column at the end of the combined dataframe, but currently there is a naming clash because the search params also have num_threads.
Hi @tarang-jain, the problem you're talking about is in the Python post-processing, right? This sounds like a fragile setup, as nothing prevents anyone from adding more algorithms with the same build/search parameters in the future. Perhaps it would be better to solve this problem for good where it occurs? You could either join the columns, or prepend build_/search_ to their names, or just not use names as keys (allow duplicates).
I also think it would be a bit confusing for the user to have to differentiate between the two.
That's right! It occurs in the Python post-processing. Prepending build_/search_ to the names seems like the easiest thing to do.
The cuvs_bench hnswlib wrapper was using a custom threading pool while the cuvs hnsw wrapper was using OpenMP for parallelism. This was causing unexpected deviations in measured timings. Furthermore, the default for the number of threads in the cuvs_bench hnswlib wrapper search params was 1, which is incorrect.
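For context, here is a simplified sketch of the unified pattern discussed in this PR. The field names are assumed from the diff above and do not mirror the actual cuvs_bench structs exactly; the point is that both build and search default to all OpenMP threads rather than the old custom-pool default of 1 for search.

```cpp
// Simplified sketch only; the actual parameter structs in cuvs_bench differ.
#include <omp.h>

struct build_param {
  int m;
  int ef_construction;
  // Default to all available OpenMP threads (hyperthreads included),
  // matching the change from omp_get_num_procs() in the diff above.
  int num_threads = omp_get_max_threads();
};

struct search_param {
  int ef;
  int num_threads = omp_get_max_threads();  // previously defaulted to 1
};

// Before building or searching, the wrapper can simply set:
//   omp_set_num_threads(param.num_threads);
```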