First, I read your book and really enjoyed it. I liked how clearly you explained the concepts and provided code within the text. I learned a lot.
GitHub code: https://github.com/PacktPublishing/Deep-Learning-with-TensorFlow-2-and-Keras/blob/master/Chapter%206/DCGAN.ipynb
Book: Deep Learning with TensorFlow 2 and Keras, Second Edition
I am working on recreating the deep convolutional GAN starting on page 198, and I am finding that the quality of the results changes drastically from run to run. I would like to rule out a few discrepancies between the book and the current GitHub code.
On page 199, the book says the learning rate is 0.002 (two zeros after the decimal point). On GitHub and in the book's code, the learning rate is 0.0002 (three zeros after the decimal point). Which one is correct?
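To make the two candidates concrete, here is how each would be passed to the optimizer. The `beta_1=0.5` value is my assumption, taken from the original DCGAN paper (Radford et al.), which also uses a learning rate of 0.0002 — it is not a claim about the book's exact code:

```python
from tensorflow.keras.optimizers import Adam

# The two candidate learning rates, a factor of ten apart.
# beta_1=0.5 is an assumption taken from the original DCGAN paper,
# which also recommends a learning rate of 0.0002.
opt_book_text = Adam(learning_rate=0.002, beta_1=0.5)   # page 199 text
opt_repo_code = Adam(learning_rate=0.0002, beta_1=0.5)  # GitHub notebook
```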
On page 200, the text says the noise is 100-dimensional. The code in the book is aligned with this: Z has a default value of 100 and is unchanged when the instance is created. On GitHub, the default is 10, not 100. Is one value preferred over the other, or is this an insignificant choice?
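To pin down which default I mean, here is a placeholder sketch of the constructor in question. Only the signature matters; the class body and the other argument names are stand-ins, not the notebook's real code:

```python
# Placeholder sketch of the constructor default being asked about.
# The body and the non-z arguments are illustrative stand-ins only.
class DCGAN:
    def __init__(self, rows=28, cols=28, channels=1, z=100):
        self.z = z  # latent (noise) dimension

book_version = DCGAN()      # book text and book code: z defaults to 100
repo_version = DCGAN(z=10)  # current GitHub notebook: default is 10
```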
For the vanilla GAN, the discriminator's weights are set to trainable before calling discriminator.train_on_batch, and then reverted to non-trainable before gan.train_on_batch is called. For the deep convolutional GAN, the discriminator's weights are never set to trainable. Was this intentional?
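The vanilla-GAN pattern I am referring to is sketched below. The models are tiny placeholders, not the book's architectures; the point is only the trainable toggling around the two train_on_batch calls:

```python
import numpy as np
from tensorflow.keras import layers, models

# Tiny stand-in models; architectures are placeholders, not the book's DCGAN.
discriminator = models.Sequential([layers.Dense(1, activation="sigmoid")])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

generator = models.Sequential([layers.Dense(4)])

# Freeze the discriminator inside the combined model before compiling it.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

# Vanilla-GAN training step: unfreeze, update D on real/fake data...
discriminator.trainable = True
d_loss = discriminator.train_on_batch(np.random.randn(8, 4), np.ones((8, 1)))

# ...then freeze again and update G through the combined model.
discriminator.trainable = False
g_loss = gan.train_on_batch(np.random.randn(8, 2), np.ones((8, 1)))
```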
For the vanilla GAN, the discriminator receives one set of fake data, and then a different random noise sample is created to train the combined GAN. For the deep convolutional GAN, the discriminator and the GAN are trained on the same random sample of noise. Is there a reason for the difference, or is this an insignificant choice?
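The sampling difference in question, as a minimal numpy sketch (no models involved, just the noise draws; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, z_dim = 8, 100

# Vanilla-GAN style: one noise batch produces the fakes shown to the
# discriminator, and a second, freshly drawn batch drives the GAN update.
noise_for_d = rng.normal(size=(batch, z_dim))
noise_for_g = rng.normal(size=(batch, z_dim))

# DCGAN-notebook style: a single noise batch is reused for both updates.
noise_shared = rng.normal(size=(batch, z_dim))

# The two vanilla-GAN draws are (almost surely) different samples.
assert not np.allclose(noise_for_d, noise_for_g)
```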