Description
- The problem here is that the trained weights do not live in the `PipeOpTorch` objects themselves.
- `PipeOpTorch` should probably keep a reference to the `nn_module`'s parameters (torch tensors have reference semantics, so this is a live link to the weights). This is likely to cause problems with cloning. There should be a hyperparameter such as `fix_weights`: when set, the new `nn_module` that the `PipeOp` creates is initialized with these stored weights, and they are kept fixed (excluded from training).
- Maybe also add a `relative learning rate` hyperparameter, so that a module's parameters are trained with a learning rate scaled relative to the global one.
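
The two proposed hyperparameters could be sketched as follows. This is a hedged illustration in PyTorch rather than the R `torch`/`mlr3torch` API, and the names `fix_weights` and `relative_lr` are placeholders for the hypothetical hyperparameters, but the mechanics (copying stored parameters into a freshly built module and freezing them; scaling a parameter group's learning rate relative to the global one) are the same idea:

```python
import torch
import torch.nn as nn

base_lr = 0.1
relative_lr = 0.01  # hypothetical per-module scaling factor

# A previously trained module whose weights we want to carry over.
trained = nn.Linear(4, 2)

# Torch tensors have reference semantics: keeping trained.state_dict()
# around is a live link to the underlying storage. clone() the tensors
# if an independent snapshot is needed (relevant for the cloning issue).
stored = {k: v.clone() for k, v in trained.state_dict().items()}

# 'fix_weights': a newly constructed module is initialized with the
# stored weights, which are then excluded from training.
rebuilt = nn.Linear(4, 2)
rebuilt.load_state_dict(stored)
for p in rebuilt.parameters():
    p.requires_grad_(False)

# 'relative learning rate': another module's parameters are trained at
# a rate scaled relative to the global one, via optimizer param groups.
head = nn.Linear(2, 1)
optimizer = torch.optim.SGD(
    [{"params": head.parameters(), "lr": base_lr * relative_lr}],
    lr=base_lr,
)
```

In PyTorch, frozen parameters simply receive no gradients; an equivalent in the R bindings would presumably also need to keep them out of the optimizer's parameter list or rely on `requires_grad` being respected.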