Visualizing Neural Networks
Jhalak Patel edited this page Nov 22, 2017
- Understand DeconvNet:
- Decide which filter or activation to visualize, e.g. the 15th filter of the conv4_3 layer. The goal is to show the pattern in image space that causes this activation. We should pick a high-magnitude activation, since only some image features produce high-magnitude activations. For a chosen activation, we can find the N images in the validation set that give the highest value for that activation.
- Pass the image forward through the conv net, up to and including the layer from which we have chosen the activation, i.e. the conv4_3 layer
- Zero out all filter activations (channels) in that layer except the one we want to visualize
- Go back to image space through the deconv net:
- Unpooling: max pooling cannot be inverted exactly. The authors propose remembering the "position" of the maximum lower-layer activation in "switch" tables. When going back from the upper layer to the lower layer, the upper-layer activation is copied to the position indicated by the switch variable, and all other lower-layer activations are set to zero. Note: different images produce different activations, so the switch values change from image to image.
- ReLU: the inverse of ReLU is ReLU. Since conv is applied to rectified activations in the forward pass, deconv is applied to rectified reconstructions in the backward pass.
- Deconvolution: the same filters as the corresponding conv layers, but flipped horizontally and vertically (i.e. a transposed convolution).
- Repeat the three steps above until we reach the image layer. The pattern that emerges in image space is the discriminative pattern that the selected activation is sensitive to.
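The three backward-pass operations above (unpooling via switches, ReLU, and convolution with flipped filters) can be sketched in NumPy. This is a minimal single-channel, stride-2 illustration; the function names and shapes are assumptions for the sketch, not the authors' code:

```python
import numpy as np

def max_pool_with_switches(x, k=2):
    """Forward max pooling over k-by-k windows that also records the
    "switch" (argmax position) of each window, per the deconvnet idea."""
    H, W = x.shape
    out = np.zeros((H // k, W // k))
    switches = np.zeros((H // k, W // k, 2), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            patch = x[i*k:(i+1)*k, j*k:(j+1)*k]
            r, c = np.unravel_index(np.argmax(patch), patch.shape)
            out[i, j] = patch[r, c]
            switches[i, j] = (i*k + r, j*k + c)
    return out, switches

def unpool(y, switches, shape):
    """Copy each upper-layer activation to the position its switch points
    at; every other lower-layer position stays zero."""
    x = np.zeros(shape)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            r, c = switches[i, j]
            x[r, c] = y[i, j]
    return x

def relu(x):
    # the "inverse" of ReLU used in the backward pass is ReLU itself
    return np.maximum(x, 0)

def deconv(y, w):
    """"Deconvolution": full convolution of y with the filter flipped
    horizontally and vertically (single-channel transposed convolution)."""
    kh, kw = w.shape
    wf = w[::-1, ::-1]                                # flip both axes
    yp = np.pad(y, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    H, W = yp.shape[0] - kh + 1, yp.shape[1] - kw + 1
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(yp[i:i+kh, j:j+kw] * wf)
    return out
```

Chaining unpool → relu → deconv for each layer, from conv4_3 back to the input, reconstructs the image-space pattern for the selected activation.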