## 🐛 Bug

Evaluating a `MultiTaskGP` fails when no training data is available. The same situation works without problems for a `SingleTaskGP`, which conceptually should simply return values from the prior. The `MultiTaskGP`, however, crashes with a `ZeroDivisionError`.
## To reproduce

**SingleTaskGP**

```python
import torch
from botorch.models.gp_regression import SingleTaskGP

N_train = 0
N_test = 10

train_X = torch.rand(N_train, 2, dtype=torch.float64)
train_Y = torch.sin(train_X).sum(dim=1, keepdim=True)
test_X = torch.rand(N_test, 2, dtype=torch.float64)

# Works: returns the prior mean for the test points
model = SingleTaskGP(train_X, train_Y)
model.posterior(test_X).mean
```
**MultiTaskGP**

```python
import torch
from botorch.models.multitask import MultiTaskGP

N_train = 0
N_test = 10

train_X1, train_X2 = torch.rand(N_train, 2), torch.rand(N_train, 2)
i1, i2 = torch.zeros(N_train, 1), torch.ones(N_train, 1)
train_X = torch.cat(
    [
        torch.cat([train_X1, i1], -1),
        torch.cat([train_X2, i2], -1),
    ]
)
train_Y = torch.randn(train_X.shape[0], 1)

# Note: N_test (not N_train) for the test points, mirroring the SingleTaskGP example
test_X1, test_X2 = torch.rand(N_test, 2), torch.rand(N_test, 2)
i1, i2 = torch.zeros(N_test, 1), torch.ones(N_test, 1)
test_X = torch.cat(
    [
        torch.cat([test_X1, i1], -1),
        torch.cat([test_X2, i2], -1),
    ]
)

# Crashes with ZeroDivisionError
model = MultiTaskGP(train_X, train_Y, task_feature=-1)
model.posterior(test_X).mean
```
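For reference, `MultiTaskGP` expects the task index appended as a feature column (here via `task_feature=-1`). A small illustration of the stacked input layout with one training point per task (fixed values chosen for readability, not part of the repro):

```python
import torch

# Two tasks, one point each, 2 input dims; the last column is the task index.
x1 = torch.tensor([[0.1, 0.2]])  # task 0 inputs
x2 = torch.tensor([[0.3, 0.4]])  # task 1 inputs
i1 = torch.zeros(1, 1)           # task index 0
i2 = torch.ones(1, 1)            # task index 1
train_X = torch.cat([torch.cat([x1, i1], -1), torch.cat([x2, i2], -1)])
print(train_X.shape)  # torch.Size([2, 3])
```

With `N_train = 0`, the same construction yields a `(0, 3)` tensor, i.e. no training rows at all.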
## Stack trace/error message

```
ZeroDivisionError: integer division or modulo by zero
```
## Expected Behavior

Just like the `SingleTaskGP`, the `MultiTaskGP` should return values from the prior.
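Until this is handled upstream, a caller-side guard can detect the degenerate case before constructing the model. This is a hypothetical helper sketch, not BoTorch API:

```python
import torch

def has_training_data(train_X: torch.Tensor) -> bool:
    # The model can only be conditioned on data if there is at least one
    # training row; otherwise only the prior is meaningful.
    return train_X.shape[0] > 0

empty_X = torch.rand(0, 3, dtype=torch.float64)
print(has_training_data(empty_X))  # False
```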
## System information

Please complete the following information:
- BoTorch Version: 0.11.0
- GPyTorch Version: 1.11
- PyTorch Version: 2.3.0
- OS: macOS