get_probabilistic_reparameterization_input_transform
gives incorrect dimension when optimising over only categorical inputs #2
Closed
Description
I'm trying to optimise a function that has only categorical inputs. However, the get_probabilistic_reparameterization_input_transform
function appears to fail in this case.
Here is an example with one continuous feature and two categorical features (each with six categories):
import torch
from collections import OrderedDict

from discrete_mixed_bo.probabilistic_reparameterization import get_probabilistic_reparameterization_input_transform

input_transform = get_probabilistic_reparameterization_input_transform(
    dim=13,  # 1 continuous dim + 2 categoricals one-hot encoded as 6 + 6
    use_analytic=True,
    integer_indices=[],
    integer_bounds=torch.zeros((2, 0), dtype=torch.float64),
    categorical_features=OrderedDict([(1, 6), (2, 6)]),
    tau=0.1,
)
print(input_transform(torch.rand(100, 1, 1, 13)).shape)
This will output torch.Size([100, 36, 1, 13]), as expected: the dimension of size 36 enumerates the 6 × 6 possible categorical configurations, while the remaining batch dimensions of the input are preserved. Now, if we remove the continuous feature:
import torch
from collections import OrderedDict

from discrete_mixed_bo.probabilistic_reparameterization import get_probabilistic_reparameterization_input_transform

input_transform = get_probabilistic_reparameterization_input_transform(
    dim=12,  # 2 categoricals one-hot encoded as 6 + 6, no continuous dims
    use_analytic=True,
    integer_indices=[],
    integer_bounds=torch.zeros((2, 0), dtype=torch.float64),
    categorical_features=OrderedDict([(0, 6), (1, 6)]),
    tau=0.1,
)
print(input_transform(torch.rand(100, 1, 1, 12)).shape)
Now the output is torch.Size([36, 12]). Should this not be torch.Size([100, 36, 1, 12])?
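For reference, here is a minimal sketch (plain PyTorch, not library code) of the broadcasting I would expect: enumerate all 6 × 6 = 36 one-hot configurations and expand them over the input's batch dimensions. Since every dimension is categorical here, the enumerated configurations no longer depend on the input values at all, which may be where the batch dimensions get lost, but the shape contract should presumably still hold:

import torch

# Hypothetical illustration of the expected output shape, not library code.
# Enumerate all 36 one-hot pairs for two 6-category features.
first = torch.eye(6).repeat_interleave(6, dim=0)   # (36, 6): varies slowly
second = torch.eye(6).repeat(6, 1)                 # (36, 6): varies quickly
configs = torch.cat([first, second], dim=-1)       # (36, 12): all 6 * 6 combos

X = torch.rand(100, 1, 1, 12)                      # input batch from the report
batch_shape = X.shape[:-3]                         # torch.Size([100])
expected = configs.unsqueeze(1).expand(*batch_shape, 36, 1, 12)
print(expected.shape)                              # torch.Size([100, 36, 1, 12])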
I've managed to find the line that seems to cause the issue: the tf.eval() call
on line 95. I'm not sure why this causes the transform to collapse the batch dimensions.
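A quick way to check this is to compare the transform's output in train and eval mode. This is just a diagnostic sketch, assuming the returned transform is a torch.nn.Module so that .train() and .eval() toggle its mode; the constructor call mirrors the failing example above:

import torch
from collections import OrderedDict

from discrete_mixed_bo.probabilistic_reparameterization import get_probabilistic_reparameterization_input_transform

tf = get_probabilistic_reparameterization_input_transform(
    dim=12,
    use_analytic=True,
    integer_indices=[],
    integer_bounds=torch.zeros((2, 0), dtype=torch.float64),
    categorical_features=OrderedDict([(0, 6), (1, 6)]),
    tau=0.1,
)
X = torch.rand(100, 1, 1, 12)

tf.train()          # assumption: the transform is an nn.Module
print(tf(X).shape)  # do the batch dimensions survive in train mode?
tf.eval()
print(tf(X).shape)  # collapses to torch.Size([36, 12])?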