
Neural Network


Neural Network Basics:

  1. Back Propagation: Reverse-mode differentiation. Used to compute the gradient, i.e. the rate of change of the output (the loss function) with respect to the input parameters (e.g. weights); this is what makes neural network training fast. From Wikipedia: backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network using the chain rule; because the loss is a single scalar and the parameters are many, reverse-mode differentiation is far cheaper here than forward-mode. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the loss function. A minimal worked sketch appears after this list.

    1. Backpropagation - Karpathy
    2. Why we need to learn BackPropagation
    3. BackProp - DeepLearningAndNeuralNetwork Blog
    4. BackProp - Colah's Blog
    5. BackProp - Quora

  2. Gradient Descent:

  3. Loss Function:

  4. Batch Normalization:

    1. Batch Norm with BackPropagation

  5. Data Leakage: When information from outside the training dataset is used to build the model, i.e. the training data contains information about the target we are trying to predict that would not be available at prediction time. A small illustration appears below, after the backpropagation sketch.
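
To make the backpropagation description above concrete, here is a minimal sketch (not code from any of the linked posts): a tiny two-layer network trained on toy data with a hand-written backward pass (the chain rule) and plain gradient descent updates. The layer sizes, learning rate, and data are illustrative assumptions.

```python
# Minimal sketch: backpropagation + gradient descent for a 3 -> 4 -> 1 network.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 8 samples, 3 input features, 1 scalar target (assumed sizes).
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

# Weight matrices of the two layers.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr = 0.1  # learning rate for the gradient descent step

for step in range(100):
    # Forward pass: hidden layer with tanh, then a linear output.
    h_pre = X @ W1           # (8, 4)
    h = np.tanh(h_pre)       # (8, 4)
    y_hat = h @ W2           # (8, 1)

    # Loss: mean squared error.
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass (reverse-mode differentiation via the chain rule):
    # the gradient of the loss flows from the output back to each weight matrix.
    d_y_hat = 2.0 * (y_hat - y) / y.shape[0]      # dLoss/dy_hat
    dW2 = h.T @ d_y_hat                           # dLoss/dW2
    d_h = d_y_hat @ W2.T                          # dLoss/dh
    d_h_pre = d_h * (1.0 - np.tanh(h_pre) ** 2)   # back through tanh
    dW1 = X.T @ d_h_pre                           # dLoss/dW1

    # Gradient descent: move each weight against its gradient.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")
```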
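
For data leakage, a common concrete case is fitting a preprocessing step (e.g. feature scaling) on the full dataset before splitting into train and test sets, so statistics computed from the test rows leak into training. The sketch below is a hypothetical illustration using scikit-learn; the dataset, model, and split choices are assumptions, not from this page.

```python
# Hypothetical illustration of data leakage through preprocessing.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# LEAKY: the scaler is fit on ALL rows (including future test rows)
# before the split, so test-set statistics influence training.
Xs = StandardScaler().fit_transform(X)
Xs_tr, Xs_te, ys_tr, ys_te = train_test_split(Xs, y, random_state=0)
leaky = LogisticRegression().fit(Xs_tr, ys_tr)
print("leaky accuracy:", leaky.score(Xs_te, ys_te))

# CORRECT: split first, fit the scaler on the training split only,
# then apply the same transform to the held-out test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
clean = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
print("clean accuracy:", clean.score(scaler.transform(X_te), y_te))
```

With plain standardization the numerical effect is often small; the point is the pattern: any statistic fit on data that includes the test rows leaks information about them into training.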
