Dear Professor Lu Lu and DeepXDE Community,
I have encountered a problem when trying to perturb the model after training.
After perturbing the model (manually changing the trainable parameters along one direction), I computed the new output by calling model_perb.outputs_losses_test(inputs, target, aux), which should return the outputs and losses of the model. However, the outputs and losses were exactly the same as those of the original, unperturbed model. I then tried calling model_perb.net(inputs) directly, and this time the outputs did change compared to the original model. I checked the DeepXDE source code but could not figure out why this happens. Part of my code is as follows:
```python
def get_weights(net):
    """Extract parameters from net, and return a list of tensors."""
    return [p.data for p in net.parameters()]


def set_weights_1D(net, weights, directions=None, step=None):
    changes = [d * step for d in directions]
    for (p, w, d) in zip(net.parameters(), weights, changes):
        p.data = w + torch.Tensor(d).type(type(w))


# create a copy of the model
import copy
model_perb = copy.deepcopy(model)

w = get_weights(model_perb.net)
lam1 = -3000
set_weights_1D(model_perb.net, w, directions=top_eigenvector[0], step=lam1)

inputs = model_perb.train_state.X_test
target = model_perb.train_state.y_test
aux = model_perb.train_state.test_aux_vars
outputs, losses = model_perb.outputs_losses_test(inputs, target, aux)

outputs_net = model_perb.net(torch.as_tensor(inputs))
print(outputs_net)
print(outputs)
```
Here, top_eigenvector[0] is the top eigenvector of the original model's Hessian matrix, and it is not zero (I checked).
The perturbation step is admittedly extreme, but I also tried other values, and it made no difference.
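For what it's worth, one hypothesis I am considering (this is an assumption on my part, not something I have confirmed in the DeepXDE source): if compile() creates outputs_losses_test as a closure and stores it as an attribute on the model instance, then copy.deepcopy would not re-bind it to the copy, because Python's deepcopy treats function objects as atomic and returns them uncopied. The closure would keep reading the original model's net, while model_perb.net is a genuinely independent copy. A minimal sketch of that pitfall (Toy and get_value are illustrative names, not DeepXDE API):

```python
import copy

class Toy:
    """Hypothetical stand-in for a compiled model (names are illustrative)."""
    def __init__(self):
        self.value = 0
        # Like a compiled test step stored on the instance, this closure
        # captures *this* specific object at creation time.
        self.get_value = lambda: self.value

m = Toy()
m_perb = copy.deepcopy(m)
m_perb.value = 42  # "perturb" the copy

print(m_perb.value)        # the copied attribute did change
print(m_perb.get_value())  # but deepcopy does not copy functions, so the
                           # closure still reads from the original instance
```

If this is indeed the cause, it would match the symptom exactly: the stale closure reproduces the unperturbed outputs, while calling model_perb.net(inputs) directly uses the copied, perturbed parameters.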
Thank you in advance.