Understanding Deep Learning - Notebooks

Source: udlbook/udlbook

This directory contains all the Jupyter notebooks from the book "Understanding Deep Learning" by Simon J.D. Prince. These notebooks provide practical implementations and explanations of deep learning concepts covered in the book.

About the Book

"Understanding Deep Learning" is a comprehensive textbook that covers deep learning from fundamentals to advanced topics. The notebooks in this directory complement the book chapters with hands-on code examples and experiments.

Book PDF: A PDF copy of the book is also available in this directory: UnderstandingDeepLearning_05_29_25_C.pdf

Notebooks

Chapter 1: Background Mathematics

  • Notebook 1.1 - Background mathematics

Chapter 2: Supervised Learning

  • Notebook 2.1 - Supervised learning

Chapter 3: Shallow Networks

  • Notebook 3.1 - Shallow networks I
  • Notebook 3.2 - Shallow networks II
  • Notebook 3.3 - Shallow network regions
  • Notebook 3.4 - Activation functions

Chapter 4: Deep Networks

  • Notebook 4.1 - Composing networks
  • Notebook 4.2 - Clipping functions
  • Notebook 4.3 - Deep networks

Chapter 5: Loss Functions

  • Notebook 5.1 - Least squares loss
  • Notebook 5.2 - Binary cross-entropy loss
  • Notebook 5.3 - Multiclass cross-entropy loss
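As a taste of what the Chapter 5 notebooks cover, the two most common losses can be sketched in a few lines of NumPy. The sample predictions below are illustrative values of my own, not data from the notebooks:

```python
import numpy as np

def least_squares_loss(y_pred, y_true):
    """Mean squared error between predictions and targets."""
    return np.mean((y_pred - y_true) ** 2)

def binary_cross_entropy(y_pred, y_true, eps=1e-12):
    """Binary cross-entropy; predictions are clipped away from 0/1 for numerical safety."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
print(least_squares_loss(y_pred, y_true))
print(binary_cross_entropy(y_pred, y_true))
```

Both losses decrease as the predictions move toward the targets, which is what the optimization notebooks in Chapter 6 exploit.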

Chapter 6: Optimization

  • Notebook 6.1 - Line search
  • Notebook 6.2 - Gradient descent
  • Notebook 6.3 - Stochastic gradient descent
  • Notebook 6.4 - Momentum
  • Notebook 6.5 - Adam
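The core idea behind all of the Chapter 6 notebooks is iteratively stepping downhill on a loss surface. A minimal gradient-descent sketch on a toy quadratic loss (the loss and hyperparameters here are my own illustration, not taken from the notebooks):

```python
# Toy loss L(w) = (w - 3)^2, minimized at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)  # analytic derivative of the toy loss

w = 0.0        # initial parameter
alpha = 0.1    # learning rate
for _ in range(100):
    w -= alpha * grad(w)  # step against the gradient

print(round(w, 4))  # → 3.0
```

Momentum and Adam (Notebooks 6.4 and 6.5) refine this same update rule with running averages of the gradient and its square.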

Chapter 7: Training Deep Networks

  • Notebook 7.1 - Backpropagation in toy model
  • Notebook 7.2 - Backpropagation
  • Notebook 7.3 - Initialization

Chapter 8: Generalization

  • Notebook 8.1 - MNIST-1D performance
  • Notebook 8.2 - Bias-variance trade-off
  • Notebook 8.3 - Double descent
  • Notebook 8.4 - High-dimensional spaces

Chapter 9: Regularization

  • Notebook 9.1 - L2 regularization
  • Notebook 9.2 - Implicit regularization
  • Notebook 9.3 - Ensembling
  • Notebook 9.4 - Bayesian approach
  • Notebook 9.5 - Augmentation

Chapter 10: Convolutional Networks

  • Notebook 10.1 - 1D convolution
  • Notebook 10.2 - Convolution for MNIST-1D
  • Notebook 10.3 - 2D convolution
  • Notebook 10.4 - Downsampling & upsampling
  • Notebook 10.5 - Convolution for MNIST
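The building block of Chapter 10 is the convolution operation. A minimal "valid" 1D convolution sketch (technically cross-correlation, the convention used in deep learning; the edge-detecting kernel is my own example, not from the notebooks):

```python
import numpy as np

def conv1d(x, kernel):
    """'Valid' 1D cross-correlation: slide the kernel over x with no padding."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])  # simple difference filter
print(conv1d(signal, kernel))  # → [-2. -2. -2.]
```

Because the same small kernel is reused at every position, convolutional layers share parameters across the input, which is the key contrast with the fully connected networks of earlier chapters.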

Chapter 11: Residual Networks

  • Notebook 11.1 - Shattered gradients
  • Notebook 11.2 - Residual networks
  • Notebook 11.3 - Batch normalization

Chapter 12: Transformers

  • Notebook 12.1 - Self-attention
  • Notebook 12.2 - Multi-head self-attention
  • Notebook 12.3 - Tokenization
  • Notebook 12.4 - Decoding strategies
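The mechanism underlying Chapter 12 is scaled dot-product self-attention. A minimal single-head sketch in NumPy (the matrix sizes and random weights are illustrative assumptions, not from the notebooks):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # scale by sqrt of key dimension
    return softmax(scores) @ V               # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # → (4, 8)
```

Multi-head attention (Notebook 12.2) runs several such heads in parallel on lower-dimensional projections and concatenates the results.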

Chapter 13: Graph Neural Networks

  • Notebook 13.1 - Encoding graphs
  • Notebook 13.2 - Graph classification
  • Notebook 13.3 - Neighborhood sampling
  • Notebook 13.4 - Graph attention

Chapter 15: Generative Adversarial Networks

  • Notebook 15.1 - GAN toy example
  • Notebook 15.2 - Wasserstein distance

Chapter 16: Normalizing Flows

  • Notebook 16.1 - 1D normalizing flows
  • Notebook 16.2 - Autoregressive flows
  • Notebook 16.3 - Contraction mappings

Chapter 17: Variational Autoencoders

  • Notebook 17.1 - Latent variable models
  • Notebook 17.2 - Reparameterization trick
  • Notebook 17.3 - Importance sampling

Chapter 18: Diffusion Models

  • Notebook 18.1 - Diffusion encoder
  • Notebook 18.2 - 1D diffusion model
  • Notebook 18.3 - Reparameterized model
  • Notebook 18.4 - Families of diffusion models

Chapter 19: Reinforcement Learning

  • Notebook 19.1 - Markov decision processes
  • Notebook 19.2 - Dynamic programming
  • Notebook 19.3 - Monte-Carlo methods
  • Notebook 19.4 - Temporal difference methods
  • Notebook 19.5 - Control variates

Chapter 20: Deep Learning in Practice

  • Notebook 20.1 - Random data
  • Notebook 20.2 - Full-batch gradient descent
  • Notebook 20.3 - Lottery tickets
  • Notebook 20.4 - Adversarial attacks

Chapter 21: Ethics and Explainability

  • Notebook 21.1 - Bias mitigation
  • Notebook 21.2 - Explainability

Usage

Each notebook can be opened and run in Jupyter Notebook, JupyterLab, or Google Colab. The notebooks are self-contained and include all necessary code and explanations.

Original Repository

For the most up-to-date versions of these notebooks, visit the original repository: udlbook/udlbook

License

These notebooks are provided under the MIT License as specified in the original repository.


This content was automatically fetched from the original repository.