qKnowledgeGradient CPU & GPU usage #2325
-
Hi @ruhanaazam. The GPU will generally speed things up for larger tensor operations. With BO, the underlying tensors are typically rather small, due to the small data size. This limits how much the GPU is utilized, as well as any performance benefit from using it.
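As a rough illustration (a micro-benchmark sketch of my own, not anything from BoTorch itself), comparing a matrix multiply at BO-typical and at large sizes shows the effect directly:

```python
import time
import torch

def avg_matmul_time(n, device, reps=50):
    """Average time for an n x n matmul on `device` (sizes are illustrative)."""
    x = torch.rand(n, n, device=device)
    x @ x  # warm-up (triggers CUDA kernel compilation/caching)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        x @ x
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

cpu = torch.device("cpu")
if torch.cuda.is_available():
    gpu = torch.device("cuda")
    # BO-sized tensors: kernel-launch overhead dominates, little GPU benefit.
    print(f"n=64:   cpu={avg_matmul_time(64, cpu):.2e}s  gpu={avg_matmul_time(64, gpu):.2e}s")
    # Large tensors: the GPU's parallelism pays off.
    print(f"n=4096: cpu={avg_matmul_time(4096, cpu):.2e}s  gpu={avg_matmul_time(4096, gpu):.2e}s")
```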
-
The main use case for the GPU in BoTorch today is hypervolume-based acquisition functions, where the computation of the (expected) hypervolume involves rather large tensors and can benefit a lot from the GPU. See https://proceedings.neurips.cc/paper/2020/hash/6fec24eac8f18ed793f5eaad3dd7977c-Abstract.html for some context on this.
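For a concrete picture, here is a minimal sketch of constructing such an acquisition function on the GPU; the toy data, shapes, and hyperparameters are my own assumptions, loosely following the BoTorch multi-objective API:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.multi_objective import qExpectedHypervolumeImprovement
from botorch.sampling.normal import SobolQMCNormalSampler
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    FastNondominatedPartitioning,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tkwargs = {"device": device, "dtype": torch.double}

# Toy two-objective data (maximization convention); shapes are illustrative.
train_X = torch.rand(32, 4, **tkwargs)
train_Y = torch.stack([train_X.sum(-1), (1 - train_X).sum(-1)], dim=-1)
model = SingleTaskGP(train_X, train_Y)  # multi-output via a batched GP

ref_point = torch.zeros(2, **tkwargs)
partitioning = FastNondominatedPartitioning(ref_point=ref_point, Y=train_Y)
acqf = qExpectedHypervolumeImprovement(
    model=model,
    ref_point=ref_point,
    partitioning=partitioning,
    sampler=SobolQMCNormalSampler(sample_shape=torch.Size([128])),
)

# Evaluating a batch of candidate sets exercises the large box-decomposition
# tensors, which is where the GPU pays off.
X = torch.rand(64, 2, 4, **tkwargs)  # 64 batches of q=2 candidates
print(acqf(X).shape)  # torch.Size([64])
```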
-
I'll close this issue and keep it around as a discussion for future discoverability.
-
qKG uses 3000% CPU while qEI only uses 100%, and our GPU servers don't have much CPU. Would you recommend running KG (and lookahead acquisition functions generally) on a CPU server? I tried this, and it is about 15 times slower than the GPU-based job.
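One mitigation (my own suggestion, not an official recommendation): 3000% CPU suggests PyTorch's intra-op thread pool is grabbing roughly 30 cores, and that can be capped explicitly:

```python
# Cap PyTorch's CPU parallelism so a qKG job doesn't oversubscribe a shared
# GPU server (the thread counts here are illustrative). For OpenMP/BLAS
# backends, OMP_NUM_THREADS should additionally be set in the environment
# *before* Python starts, e.g.:  OMP_NUM_THREADS=4 python run_bo.py
import torch

torch.set_num_threads(4)          # intra-op parallelism
torch.set_num_interop_threads(4)  # inter-op parallelism; call before any parallel work
```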
-
@saitcakmak I don't think this problem is solved. Running this job consumes a lot of CPU and very little GPU, yet running without a GPU makes it 15x slower. That doesn't seem to add up?
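One way to see where the time actually goes is to profile a single acquisition evaluation with `torch.profiler`; here is a self-contained sketch (the toy model, data, and shapes are my own assumptions):

```python
import torch
from torch.profiler import profile, ProfilerActivity
from botorch.models import SingleTaskGP
from botorch.acquisition import qKnowledgeGradient

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tkwargs = {"device": device, "dtype": torch.double}

train_X = torch.rand(20, 2, **tkwargs)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)  # an unfitted toy surrogate is enough to profile
qKG = qKnowledgeGradient(model, num_fantasies=32)

# One-shot qKG evaluates on batch x (q + num_fantasies) x d candidate tensors.
X = torch.rand(8, 1 + 32, 2, **tkwargs)
activities = [ProfilerActivity.CPU]
if device.type == "cuda":
    activities.append(ProfilerActivity.CUDA)
with profile(activities=activities) as prof:
    qKG(X)
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```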
-
Hi, I am interested in using `qKnowledgeGradient` in a simple BO loop. When I run the code below, the CPU usage on my machine is extremely high while GPU usage is fairly low. Is there a way to make running `qKnowledgeGradient` more computationally efficient?

System information:
BoTorch: 0.8.3
GPyTorch: 1.10
PyTorch: 2.0.1
Ubuntu 20.04
Python: 3.10
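For reference, a minimal loop of the kind described above might look like this sketch; the toy objective, the `SingleTaskGP` surrogate, and the `num_fantasies` value are illustrative stand-ins, not the original code:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tkwargs = {"device": device, "dtype": torch.double}

def objective(X):
    # Toy black box on [0, 1]^2; maximized at X = 0.5.
    return -((X - 0.5) ** 2).sum(dim=-1, keepdim=True)

train_X = torch.rand(10, 2, **tkwargs)
train_Y = objective(train_X)
bounds = torch.stack([torch.zeros(2, **tkwargs), torch.ones(2, **tkwargs)])

for _ in range(5):
    model = SingleTaskGP(train_X, train_Y)
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    fit_gpytorch_mll(mll)
    # Fewer fantasies => a smaller one-shot optimization problem and less CPU work.
    qKG = qKnowledgeGradient(model, num_fantasies=32)
    candidate, _ = optimize_acqf(
        qKG, bounds=bounds, q=1, num_restarts=10, raw_samples=128
    )
    train_X = torch.cat([train_X, candidate])
    train_Y = torch.cat([train_Y, objective(candidate)])
```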