Conducted on: 28/07/2025
ResNet, DenseNet, and Kaggle competition submissions
- Degradation problem in optimising very deep neural networks.
- Benefit of using skip connections and identity mappings from previous layers.
- Training process of ResNets (stochastic depth regularisation).
- Brief overview of Fisher Vectors.
- Dense connectivity (and the benefit of concatenation over summation here).
- Intuition behind growth rate, bottleneck layers, and compression in transition layers.
- Advantage of DenseNet's deterministic connectivity in preventing overfitting and ensuring good gradient flow.
- Brief discussion on how Residual networks behave like ensembles of relatively shallow networks.
- Analysis of Feature Reuse by observing average absolute filter weights of conv layers in a trained DenseNet.
- Training data sources and performance of DenseNet on various competitive datasets.
- Squeeze-and-Excitation Networks
- MobileNet V1
- MobileNet V2
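The skip-connection idea discussed above can be sketched minimally. This is a NumPy stand-in using linear layers rather than the paper's convolutions; all names and shapes are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Residual block: output = ReLU(F(x) + x), where the identity shortcut
    carries x past the transformation F unchanged."""
    fx = relu(x @ w1) @ w2     # F(x): two linear maps with a ReLU in between
    return relu(fx + x)        # add the skip connection, then ReLU

x = np.random.randn(4, 8)
w_zero = np.zeros((8, 8))
# With the residual branch zeroed out, the block reduces to the identity
# (up to the final ReLU) -- the layer only has to learn a perturbation of
# the identity, which is what eases optimisation of very deep networks:
out = residual_block(x, w_zero, w_zero)
```

The identity shortcut is why the degradation problem eases: a deeper network can always fall back to copying its input.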
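Dense connectivity, growth rate, and transition-layer compression can be illustrated with a toy NumPy sketch (random linear layers standing in for BN-ReLU-Conv; k = 12 and theta = 0.5 are illustrative choices, not the paper's only settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(features, growth_rate):
    """One DenseNet layer: its input is the CONCATENATION of every earlier
    feature map (not a sum), and it emits growth_rate new channels."""
    x = np.concatenate(features, axis=1)              # (batch, all prior channels)
    w = rng.standard_normal((x.shape[1], growth_rate))
    return np.maximum(x @ w, 0.0)

# Channel bookkeeping: c0 + k per layer, then compression theta at transition
features = [rng.standard_normal((2, 24))]             # c0 = 24 input channels
k = 12                                                # growth rate (illustrative)
for _ in range(6):
    features.append(dense_layer(features, k))
total = sum(f.shape[1] for f in features)             # 24 + 6 * 12 = 96
after_transition = int(0.5 * total)                   # theta = 0.5 -> 48 channels
```

Concatenation keeps every earlier feature map addressable by later layers (feature reuse), which summation in ResNets collapses away.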
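Stochastic depth regularisation, mentioned under the ResNet training process, can be sketched as follows (a minimal NumPy version; the survival probability and branch `f` are placeholders for a real residual branch):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_depth_block(x, f, p_survive, training):
    """Stochastic depth: during training, drop the residual branch f with
    probability 1 - p_survive (the block collapses to the identity); at
    test time always keep f, scaled by p_survive to match its expected
    training-time contribution."""
    if training:
        if rng.random() < p_survive:
            return x + f(x)
        return x                      # branch dropped: pure identity
    return x + p_survive * f(x)
```

Dropping whole residual branches shortens the effective network during training, which also fits the view of ResNets as ensembles of relatively shallow networks.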
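The Squeeze-and-Excitation mechanism can be sketched in a few lines (NumPy stand-in with a single image and no batch dimension; the reduction ratio of 2 is illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel (squeeze),
    run the channel descriptor through a small bottleneck MLP with sigmoid
    gating (excitation), then rescale x channel-wise."""
    s = x.mean(axis=(1, 2))                    # squeeze: (C,) per-channel stats
    e = sigmoid(np.maximum(s @ w1, 0.0) @ w2)  # excitation: gates in (0, 1)
    return x * e[:, None, None]                # recalibrate channels

x = np.random.randn(4, 5, 5)                   # (channels, H, W)
w1 = np.zeros((4, 2))                          # bottleneck: reduction ratio 2
w2 = np.zeros((2, 4))
out = se_block(x, w1, w2)                      # zero weights -> every gate is 0.5
```

The block is cheap (two tiny fully connected layers per stage) and can be dropped into ResNet- or DenseNet-style backbones.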
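The core trick of MobileNet V1 is replacing standard convolutions with depthwise separable ones. The parameter saving can be worked out directly (channel counts below are illustrative):

```python
def standard_conv_params(k, c_in, c_out):
    """Parameters in a standard k x k convolution."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise separable conv: one k x k filter per input channel
    (depthwise), then a 1x1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out

# 3x3 conv, 256 -> 256 channels (illustrative numbers):
dense = standard_conv_params(3, 256, 256)       # 589,824 parameters
separable = separable_conv_params(3, 256, 256)  # 67,840 parameters, ~8.7x fewer
```

MobileNet V2 builds on the same separable convolutions but adds inverted residual blocks with linear bottlenecks.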
Ayushman Dutta
3rd year: Mukil Sir
2nd year: Anab, Arnav, Arjav, Anukul, Abhishek, Ritesh, Rajat, Sreenandan, Ayushman
Second Year: None