Source: udlbook/udlbook
This directory contains the Jupyter notebooks from the book "Understanding Deep Learning" by Simon J.D. Prince. "Understanding Deep Learning" is a comprehensive textbook covering deep learning from the fundamentals through advanced topics; the notebooks complement the book chapters with hands-on implementations, code examples, and experiments.
Book PDF: A PDF version of the book is also included in this directory: UnderstandingDeepLearning_05_29_25_C.pdf
- Notebook 1.1 - Background mathematics
- Notebook 2.1 - Supervised learning
- Notebook 3.1 - Shallow networks I
- Notebook 3.2 - Shallow networks II
- Notebook 3.3 - Shallow network regions
- Notebook 3.4 - Activation functions
- Notebook 4.1 - Composing networks
- Notebook 4.2 - Clipping functions
- Notebook 4.3 - Deep networks
- Notebook 5.1 - Least squares loss
- Notebook 5.2 - Binary cross-entropy loss
- Notebook 5.3 - Multiclass cross-entropy loss
- Notebook 6.1 - Line search
- Notebook 6.2 - Gradient descent
- Notebook 6.3 - Stochastic gradient descent
- Notebook 6.4 - Momentum
- Notebook 6.5 - Adam
- Notebook 7.1 - Backpropagation in toy model
- Notebook 7.2 - Backpropagation
- Notebook 7.3 - Initialization
- Notebook 8.1 - MNIST-1D performance
- Notebook 8.2 - Bias-variance trade-off
- Notebook 8.3 - Double descent
- Notebook 8.4 - High-dimensional spaces
- Notebook 9.1 - L2 regularization
- Notebook 9.2 - Implicit regularization
- Notebook 9.3 - Ensembling
- Notebook 9.4 - Bayesian approach
- Notebook 9.5 - Augmentation
- Notebook 10.1 - 1D convolution
- Notebook 10.2 - Convolution for MNIST-1D
- Notebook 10.3 - 2D convolution
- Notebook 10.4 - Downsampling & upsampling
- Notebook 10.5 - Convolution for MNIST
- Notebook 11.1 - Shattered gradients
- Notebook 11.2 - Residual networks
- Notebook 11.3 - Batch normalization
- Notebook 12.1 - Self-attention
- Notebook 12.2 - Multi-head self-attention
- Notebook 12.3 - Tokenization
- Notebook 12.4 - Decoding strategies
- Notebook 13.1 - Encoding graphs
- Notebook 13.2 - Graph classification
- Notebook 13.3 - Neighborhood sampling
- Notebook 13.4 - Graph attention
- Notebook 15.1 - GAN toy example
- Notebook 15.2 - Wasserstein distance
- Notebook 16.1 - 1D normalizing flows
- Notebook 16.2 - Autoregressive flows
- Notebook 16.3 - Contraction mappings
- Notebook 17.1 - Latent variable models
- Notebook 17.2 - Reparameterization trick
- Notebook 17.3 - Importance sampling
- Notebook 18.1 - Diffusion encoder
- Notebook 18.2 - 1D diffusion model
- Notebook 18.3 - Reparameterized model
- Notebook 18.4 - Families of diffusion models
- Notebook 19.1 - Markov decision processes
- Notebook 19.2 - Dynamic programming
- Notebook 19.3 - Monte-Carlo methods
- Notebook 19.4 - Temporal difference methods
- Notebook 19.5 - Control variates
- Notebook 20.1 - Random data
- Notebook 20.2 - Full-batch gradient descent
- Notebook 20.3 - Lottery tickets
- Notebook 20.4 - Adversarial attacks
- Notebook 21.1 - Bias mitigation
- Notebook 21.2 - Explainability
Each notebook can be opened and run in Jupyter Notebook, JupyterLab, or Google Colab. The notebooks are self-contained and include all necessary code and explanations.
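To open one of these notebooks directly in Google Colab without cloning the repository, you can use Colab's standard GitHub URL scheme. The helper below sketches this; the notebook path used in the example is illustrative (actual filenames and folder layout should be checked in the repository's Notebooks directory).

```python
# Build a Google Colab link for a notebook hosted in the udlbook GitHub repo.
# Colab opens GitHub-hosted notebooks via its /github/ URL scheme.
REPO = "udlbook/udlbook"

def colab_url(notebook_path: str, branch: str = "main") -> str:
    """Return a Colab URL that opens the given notebook from GitHub.

    notebook_path is the path within the repository, e.g. something under
    Notebooks/ (the exact filename here is an illustrative assumption).
    """
    return (
        f"https://colab.research.google.com/github/{REPO}/blob/{branch}/{notebook_path}"
    )

# Example (hypothetical path -- verify against the repository):
print(colab_url("Notebooks/Chap01/1_1_BackgroundMathematics.ipynb"))
```

Alternatively, clone the repository with git and launch Jupyter locally from the Notebooks directory.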
For the most up-to-date versions of these notebooks, visit the original repository:
- GitHub: https://github.com/udlbook/udlbook
- Notebooks Directory: https://github.com/udlbook/udlbook/tree/main/Notebooks
These notebooks are provided under the MIT License as specified in the original repository.