Add l1 and l1+l2 regularizers for PyTorch backend #1884
Conversation
You are right that regularization should not be applied to self.external_trainable_variables.
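If that is the intent, one way to express it with stock PyTorch is per-parameter-group weight_decay, so that decay acts on the network weights only. A minimal sketch, with illustrative names mirroring the discussion (not the PR's actual code):

```python
import torch

net = torch.nn.Linear(4, 1)
# Stand-in for self.external_trainable_variables (e.g. unknown PDE coefficients).
external_trainable_variables = [torch.nn.Parameter(torch.tensor(1.0))]

# Per-parameter-group weight decay: L2-style regularization on the network
# parameters only, while the external variables stay unregularized.
optimizer = torch.optim.Adam(
    [
        {"params": list(net.parameters()), "weight_decay": 1e-4},
        {"params": external_trainable_variables, "weight_decay": 0.0},
    ],
    lr=1e-3,
)
```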
Fixed. Please note the replacement I made, and that I removed the exception because AdamW will still start with its default weight_decay.
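For reference, torch.optim.AdamW uses a nonzero default weight_decay (0.01), so some decoupled L2-style regularization is applied even when no value is passed explicitly. A small check, assuming the stock PyTorch optimizer:

```python
import torch

model = torch.nn.Linear(4, 1)
# AdamW defaults to weight_decay=0.01, so weight decay is active even if
# the user never sets a regularization value.
opt = torch.optim.AdamW(model.parameters())
print(opt.defaults["weight_decay"])  # 0.01
```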
Let us fix l2 in this PR. Use a new PR for adding l1 and l1+l2. |
Take a look now please. |
In line 113, emphasize that ...
Fixed. |
The L1 regularizer is added. I am not quite sure about one line: perhaps regularization should not be applied to self.external_trainable_variables? Then that line would need to change. However, in the current l2 regularization, trainable_variables (both self.net.parameters() and self.external_trainable_variables) are used.
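To make the open question concrete, here is a hypothetical sketch of computing an L1 (and L1+L2) penalty over a chosen set of variables; the helper names and the choice of which variables to include are illustrative, not the PR's actual code:

```python
import torch

def l1_penalty(variables, factor):
    # Sum of absolute values over every tensor in `variables`.
    return factor * sum(v.abs().sum() for v in variables)

def l1_l2_penalty(variables, l1_factor, l2_factor):
    # Elastic-net style penalty: L1 plus L2 over the same variables.
    l1 = sum(v.abs().sum() for v in variables)
    l2 = sum(v.pow(2).sum() for v in variables)
    return l1_factor * l1 + l2_factor * l2

net = torch.nn.Linear(4, 1)
external_trainable_variables = [torch.nn.Parameter(torch.tensor(1.0))]

data_loss = torch.tensor(0.0)  # placeholder for the PDE/data losses

# Option A: regularize only the network parameters.
loss_a = data_loss + l1_penalty(net.parameters(), factor=1e-4)

# Option B: regularize both, as the current l2 path does with trainable_variables.
all_vars = list(net.parameters()) + external_trainable_variables
loss_b = data_loss + l1_l2_penalty(all_vars, l1_factor=1e-4, l2_factor=1e-4)
```

Either way the penalty is simply added to the loss before backward(), so the only design decision is which variable collection to pass in.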