Replies: 3 comments 1 reply
-
How large is the data size in this case? How much memory does your GPU have / is anything else running on that device? Note that fitting an exact GP builds an n x n covariance matrix over the training points, so memory grows quadratically with the size of the training set.
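If it helps, something along these lines (standard `torch.cuda` calls; device index 0 assumed) will show total vs. currently used memory on the device:

```python
# Quick check of total vs. used memory on the GPU (device index 0 assumed).
import torch

props = torch.cuda.get_device_properties(0)
print(f"total memory:      {props.total_memory / 1e9:.1f} GB")
print(f"allocated tensors: {torch.cuda.memory_allocated(0) / 1e9:.2f} GB")
print(f"reserved by torch: {torch.cuda.memory_reserved(0) / 1e9:.2f} GB")
```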
-
I'm running an iterative process to find "high scoring" items in a dataset: train a model, sample from the model, sample similar items from the database, score those items, and add them to the training data (roughly the loop sketched below).
Due to the iterative process, the training dataset grows over time. I hit a memory error at ~20k inputs on a GPU with 24GB of memory. The vectors in my dataset are 768-dimensional. Is there a better approach in botorch for the sort of thing I am trying to do, especially given that I would want it to scale to ~100k examples?
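A minimal sketch of what I mean (`sample_similar_items` and `score_items` are placeholders for the database lookup and scoring steps, which are outside BoTorch):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood


def run_loop(train_X, train_Y, candidate_pool, n_rounds=10, batch=100):
    for _ in range(n_rounds):
        # 1. Train the surrogate on everything gathered so far.
        model = SingleTaskGP(train_X, train_Y)
        mll = ExactMarginalLogLikelihood(model.likelihood, model)
        fit_gpytorch_mll(mll)

        # 2. Sample from the model: draw a posterior sample over the candidate
        #    pool and keep the top-scoring candidates.
        with torch.no_grad():
            sample = model.posterior(candidate_pool).rsample(torch.Size([1])).squeeze()
        top = candidate_pool[sample.topk(batch).indices]

        # 3. Sample similar items from the database and score them
        #    (placeholder helpers, not BoTorch calls).
        new_X = sample_similar_items(top)
        new_Y = score_items(new_X)

        # 4. Grow the training set; this is where the data (and memory use)
        #    keeps increasing from round to round.
        train_X = torch.cat([train_X, new_X])
        train_Y = torch.cat([train_Y, new_Y])
    return train_X, train_Y
```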
-
When you say "Train model" and "Sample from model" you are talking about the BoTorch surrogate model, right? How long does it take to "sample similar items from database" and "score items"?
So, the number of data points you are considering here is outside of the range where a standard exact GP model (as used by default in BoTorch) is practical - exact inference works with an n x n covariance matrix, which is what is exhausting your GPU memory. That said, if the dimensionality of each data point is indeed 768, that's also very large for a GP (or, more generally, kernel-based methods), and you'd probably need some custom approaches to get this to work well. In short, it unfortunately does seem like your use case is one that may not be well served by BoTorch, and you may want to consider other "active learning" methods that are targeted to this problem and scale better.
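One common direction for scaling further is a stochastic variational GP (SVGP) with inducing points, trained on minibatches, so memory depends on the minibatch size and the number of inducing points rather than the full training set. A minimal GPyTorch sketch (the `fit_svgp` helper, kernel choice, and hyperparameters are all illustrative, not a prescription from this thread):

```python
import torch
import gpytorch
from torch.utils.data import TensorDataset, DataLoader


class SVGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


def fit_svgp(train_X, train_Y, num_inducing=512, batch_size=1024, epochs=20, device="cuda"):
    # Initialize inducing points from a random subset of the training inputs.
    idx = torch.randperm(train_X.size(0))[:num_inducing]
    model = SVGPModel(train_X[idx].clone()).to(device)
    likelihood = gpytorch.likelihoods.GaussianLikelihood().to(device)
    mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_X.size(0))
    optimizer = torch.optim.Adam(
        list(model.parameters()) + list(likelihood.parameters()), lr=0.01
    )
    loader = DataLoader(TensorDataset(train_X, train_Y), batch_size=batch_size, shuffle=True)

    model.train()
    likelihood.train()
    for _ in range(epochs):
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            optimizer.zero_grad()
            # Minibatch ELBO: per-step cost scales with the batch and the
            # inducing points, not the full dataset.
            loss = -mll(model(xb), yb.squeeze(-1))
            loss.backward()
            optimizer.step()
    return model, likelihood
```

BoTorch also has a `SingleTaskVariationalGP` wrapper around approximate GPs, if you want to stay within its model API.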
-
I'm running into an issue where `fit_gpytorch_mll` is giving me a CUDA out of memory error. Is there a parameter to tune batch size or control memory usage?
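For reference, a typical setup that would trigger this (assumed - the actual snippet isn't shown in the question). The exact marginal log likelihood is evaluated over the full training set at once, so there is no minibatch parameter to tune here:

```python
# Assumed setup (the original code isn't shown): a standard exact GP fit.
# The exact marginal log likelihood works with the full n x n kernel matrix,
# which is what runs out of GPU memory at large n.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20_000, 768, dtype=torch.double, device="cuda")  # illustrative shapes
train_Y = torch.rand(20_000, 1, dtype=torch.double, device="cuda")

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)  # the CUDA OOM is raised inside this call at this scale
```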