
Commit cab40db

Fixed the errors in gradient_centralization.py (#2042)
* Update gradient_centralization.py
* updated
1 parent 6681f9e commit cab40db

3 files changed: +7 additions, -10 deletions


examples/vision/gradient_centralization.py

Lines changed: 1 addition & 1 deletion
@@ -151,7 +151,7 @@ def prepare(ds, shuffle=False, augment=False):
 subclass the `RMSProp` optimizer class modifying the
 `keras.optimizers.Optimizer.get_gradients()` method where we now implement Gradient
 Centralization. On a high level the idea is that let us say we obtain our gradients
-through back propogation for a Dense or Convolution layer we then compute the mean of the
+through back propagation for a Dense or Convolution layer we then compute the mean of the
 column vectors of the weight matrix, and then remove the mean from each column vector.
 
 The experiments in [this paper](https://arxiv.org/abs/2004.01461) on various
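
The sentence touched by this hunk describes the core operation of Gradient Centralization: center a Dense or Convolution kernel gradient by removing the mean of its column vectors. Below is a minimal sketch of that single step, assuming the Keras 3 `keras.ops` API; the helper name `centralize_gradient` is illustrative and not part of this commit (the tutorial itself applies the idea inside a subclassed `RMSProp` optimizer).

```python
from keras import ops


def centralize_gradient(grad):
    """Illustrative sketch: remove the per-column mean from a kernel gradient.

    For a Dense or Conv kernel gradient of shape (..., out_features), the mean
    is computed over every axis except the last one and subtracted, so the
    gradient of each output column is centered at zero. Not the tutorial's
    actual optimizer code.
    """
    grad = ops.convert_to_tensor(grad)
    if len(grad.shape) > 1:
        # Average over all axes except the output-feature axis.
        axes = list(range(len(grad.shape) - 1))
        grad = grad - ops.mean(grad, axis=axes, keepdims=True)
    return grad
```

In the tutorial this centering happens inside the optimizer's gradient computation before the update step; the helper above only isolates that one operation.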

examples/vision/ipynb/gradient_centralization.ipynb

Lines changed: 5 additions & 8 deletions
@@ -64,8 +64,7 @@
 "from keras import ops\n",
 "\n",
 "from tensorflow import data as tf_data\n",
-"import tensorflow_datasets as tfds\n",
-""
+"import tensorflow_datasets as tfds\n"
 ]
 },
 {
@@ -159,8 +158,7 @@
 " )\n",
 "\n",
 " # Use buffered prefecting\n",
-" return ds.prefetch(buffer_size=AUTOTUNE)\n",
-""
+" return ds.prefetch(buffer_size=AUTOTUNE)\n"
 ]
 },
 {
@@ -238,7 +236,7 @@
 "subclass the `RMSProp` optimizer class modifying the\n",
 "`keras.optimizers.Optimizer.get_gradients()` method where we now implement Gradient\n",
 "Centralization. On a high level the idea is that let us say we obtain our gradients\n",
-"through back propogation for a Dense or Convolution layer we then compute the mean of the\n",
+"through back propagation for a Dense or Convolution layer we then compute the mean of the\n",
 "column vectors of the weight matrix, and then remove the mean from each column vector.\n",
 "\n",
 "The experiments in [this paper](https://arxiv.org/abs/2004.01461) on various\n",
@@ -314,8 +312,7 @@
 " self.epoch_time_start = time()\n",
 "\n",
 " def on_epoch_end(self, batch, logs={}):\n",
-" self.times.append(time() - self.epoch_time_start)\n",
-""
+" self.times.append(time() - self.epoch_time_start)\n"
 ]
 },
 {
@@ -473,4 +470,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
+}

examples/vision/md/gradient_centralization.md

Lines changed: 1 addition & 1 deletion
@@ -170,7 +170,7 @@ We will now
 subclass the `RMSProp` optimizer class modifying the
 `keras.optimizers.Optimizer.get_gradients()` method where we now implement Gradient
 Centralization. On a high level the idea is that let us say we obtain our gradients
-through back propogation for a Dense or Convolution layer we then compute the mean of the
+through back propagation for a Dense or Convolution layer we then compute the mean of the
 column vectors of the weight matrix, and then remove the mean from each column vector.
 
 The experiments in [this paper](https://arxiv.org/abs/2004.01461) on various
