Description
In applications such as the one linked, it is useful to evaluate the loss of a Neuromancer `Problem` in a batched fashion. The loss values then serve as outputs fed into a loss function outside of the Neuromancer environment, so the batch dimension must be retained rather than returning the average loss over the batch. Currently, this is being worked around by using a batch size of 1 together with `torch.vmap`.
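The batch-size-1 workaround can be sketched in plain PyTorch, independent of Neuromancer. The rollout below is a hypothetical stand-in for the closed-loop system defined later in this issue (scalar dynamics `x + u` with a fixed linear policy `u = -0.5 * x`); `torch.vmap` maps the single-sample loss over the batch so one loss value per sample is retained:

```python
import torch

def rollout_loss(x0):
    # x0 has shape (1,): a single sample, as in the batch-size-1 workaround
    x = x0
    loss = torch.zeros(())
    for _ in range(2):            # H = 2 rollout steps
        u = -0.5 * x              # hypothetical linear policy
        loss = loss + (x ** 2).sum()  # quadratic stage cost
        x = x + u                 # scalar dynamics x_{k+1} = x_k + u_k
    return loss

batched_loss = torch.vmap(rollout_loss)  # vectorize over the batch dimension
x_batch = torch.randn(64, 1)
losses = batched_loss(x_batch)           # shape (64,): one loss per sample
```

This keeps the per-sample losses available for an outer loss function, at the cost of wrapping the whole evaluation in `vmap`.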
Current: `problem` returns a single loss value for the batch
```python
import torch
from neuromancer.system import Node, System
from neuromancer.modules import blocks
from neuromancer.constraint import variable, Objective
from neuromancer.loss import PenaltyLoss
from neuromancer.problem import Problem
from neuromancer.dataset import DictDataset

H = 2
A, B, Q = 3 * [torch.tensor([[1.]])]

# system definition
dx_fun = lambda x, u: x @ A.T + u @ B.T
dx_node = Node(dx_fun, ['x', 'u'], ['x'])
mu_node = Node(blocks.Linear(1, 1, bias=False), ['x'], ['u'])
l_fun = lambda x: Q * x**2
l_node = Node(l_fun, ['x'], ['l'])
cl_system = System([mu_node, dx_node, l_node], nsteps=H + 1)

# problem definition
x, u, l = variable('x'), variable('u'), variable('l')
l_loss = Objective(var=H * l[:, :-1, :], name='stage_loss')  # cost for steps k < H
loss = PenaltyLoss([l_loss], [])
problem = Problem([cl_system], loss)

# problem evaluation
x_batch = torch.randn(64, 1)
input_dict = DictDataset({'x': x_batch.unsqueeze(1)})
input_dict.datadict['name'] = 'eval'
output_dict = problem(input_dict.datadict)
output_dict['eval_loss'].shape
```
This returns `torch.Size([])`, i.e. a single scalar for the whole batch of 64 inputs.
Requested: option for `Problem` to return individual loss values for each input
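The requested behavior is analogous to the `reduction` argument on PyTorch's built-in losses, where `reduction='none'` skips the averaging step. A minimal sketch (pure PyTorch, not Neuromancer API):

```python
import torch
import torch.nn as nn

x_batch = torch.randn(64, 1)
target = torch.zeros(64, 1)

# Default behavior mirrors the current Problem output: one averaged scalar.
mean_loss = nn.MSELoss()(x_batch, target)                   # torch.Size([])

# The requested option corresponds to reduction='none':
# the batch dimension is retained, one loss value per input.
per_sample = nn.MSELoss(reduction='none')(x_batch, target)  # torch.Size([64, 1])
```

An analogous flag on `PenaltyLoss` or `Problem` would let the per-sample values flow into an outer loss function without the batch-size-1 plus `torch.vmap` workaround.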