Multi-GPU Sharding #3562
gorlando04 started this conversation in General
Replies: 2 comments
-
see
-
Hello @mdouze, thank you very much for the help. With this I could implement a GPU sharding script, with the additional help of the bench_gpu_1bn.py script. I have one final question, though: is sharding supposed to be slower than replication? In my tests, sharding was slower than replication for smaller datasets of around 10M-50M vectors. Is this normal, and if so, is there an explanation why?
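For reference, here is a minimal sketch of the comparison being discussed, assuming the standard faiss-gpu Python package and random placeholder data; the only difference between the two configurations is the shard flag on GpuMultipleClonerOptions:

```python
import numpy as np
import faiss

d = 64                                              # vector dimensionality (illustrative)
xb = np.random.rand(100_000, d).astype('float32')   # placeholder database vectors
xq = np.random.rand(1_000, d).astype('float32')     # placeholder query vectors

cpu_index = faiss.IndexFlatL2(d)                    # simple flat index for the comparison

co = faiss.GpuMultipleClonerOptions()
co.shard = True                                     # True = sharding, False = replication

gpu_index = faiss.index_cpu_to_all_gpus(cpu_index, co=co)
gpu_index.add(xb)                                   # sharding splits xb over the GPUs;
                                                    # replication copies all of xb to each GPU
D, I = gpu_index.search(xq, 10)                     # k = 10 nearest neighbours
```

With co.shard = True each GPU holds only a slice of the database, so every query has to be run against every shard and the partial results merged, whereas with co.shard = False each GPU holds a full copy and the query batch is simply split across the GPUs; for datasets that already fit on a single GPU, that merge step can make sharding the slower option.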
-
Summary
Faiss version: Faiss 1.6.3
Installed from: anaconda (pytorch)
Running on: GPU
Interface: Python
Reproduction instructions
Hi, I'm trying to do a multi-GPU kNN search using Faiss, but I'm having problems getting sharding to work. Replication is an easy solution, but that approach limits the dataset size to what fits on a single GPU, so I want to understand how I could implement an IVFPQ index using multi-GPU sharding.
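As a starting point, here is a minimal sketch of a sharded IVFPQ setup, assuming the standard faiss Python package, random placeholder data, and illustrative parameter choices (d, nlist, m, nprobe):

```python
import numpy as np
import faiss

d = 128                                       # vector dimensionality (illustrative)
nlist = 4096                                  # number of IVF cells (illustrative)
m = 16                                        # PQ sub-quantizers; must divide d
xt = np.random.rand(200_000, d).astype('float32')    # placeholder training vectors
xb = np.random.rand(1_000_000, d).astype('float32')  # placeholder database vectors
xq = np.random.rand(10_000, d).astype('float32')     # placeholder query vectors

# Build and train the IVFPQ index on the CPU first.
quantizer = faiss.IndexFlatL2(d)
cpu_index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)   # 8 bits per PQ code
cpu_index.train(xt)

# Clone to all visible GPUs with sharding enabled, so each GPU stores
# only a fraction of the database instead of a full copy.
co = faiss.GpuMultipleClonerOptions()
co.shard = True
gpu_index = faiss.index_cpu_to_all_gpus(cpu_index, co=co)

gpu_index.add(xb)                             # vectors are distributed over the shards

# Set nprobe on all shards at once, then search; results are merged across shards.
faiss.GpuParameterSpace().set_index_parameter(gpu_index, 'nprobe', 16)
D, I = gpu_index.search(xq, 10)
```

The bench_gpu_1bn.py script mentioned above extends this basic pattern to much larger datasets.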