**Machine Learning/2025-26/README.md**

## Session Reports for meets conducted during 2025-26
**Machine Learning/2025-26/Session 04-07-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 04/07/2025

## Agenda
VGGNet and GoogLeNet

## Summary
1. The effect of the convolutional network depth on its accuracy.

2. Analyze the key design choices (small filters, increasing depth).

3. Training process of VGG (preprocessing techniques, choice of hyperparameters, choice of training scale S).

4. Testing process of VGG (Choice of test scale).

5. Different validation methods: Single Scale, Multi Scale, Dense evaluation and Multi-Crop.

6. Comparison with the state of the art.

7. VGG performance on the Localisation Test and Mean Average Precision (mAP).

8. Need for GoogLeNet; discussion on the Hebbian Principle.

9. Need for 1x1 conv and why max pooling is applied before the 1x1 conv.

10. Sparse and Deep connections.

11. Architecture detail of Inception Model.

12. Results of GoogLeNet.
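
The 1x1-convolution point above (item 9) can be made concrete with a quick parameter count; the channel sizes below are illustrative, not taken from the papers.

```python
# Parameter-count sketch: why Inception places 1x1 convolutions before
# larger filters. Channel sizes are illustrative, not from the paper.
def conv_params(in_ch, out_ch, k):
    """Number of weights in a k x k convolution (bias ignored)."""
    return k * k * in_ch * out_ch

# Direct 3x3 conv: 256 -> 256 channels.
direct = conv_params(256, 256, 3)                            # 589_824

# 1x1 bottleneck down to 64 channels, then 3x3 back up to 256.
reduced = conv_params(256, 64, 1) + conv_params(64, 256, 3)  # 163_840

print(direct, reduced)  # 589824 163840
```

The bottleneck version needs roughly 3.6x fewer weights, which is what makes the wide Inception modules affordable.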

## Agenda for the next session
* ResNet
* DenseNet

## Report Compiled by
Ritesh Kumbhare

## Attendees
Final year: Samyak Jha Sir.

3rd year: Mukil Sir, Dilshad Sir.

2nd year: Anab, Arnav, Arjav, Anukul, Abhishek, Ritesh, Rajat, Sreenandan, Ayushman
## Absentees
Second Year: None



**Machine Learning/2025-26/Session 06-09-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 06/09/2025

## Agenda
Object Detection Fundamentals, R-CNN, and Fast R-CNN

## Summary
1. Evolution of Object Detection: Transition from sliding-window approaches to region-proposal methods to manage computational complexity.
2. Selective Search Algorithm: Discussion on using hierarchical grouping of similar pixels based on color, texture, size, and shape to generate ~2000 candidate regions.
3. R-CNN Architecture: Detailed walkthrough of the three-stage pipeline: Region Proposal, Feature Extraction (via CNN), and Classification/Bounding Box Regression.
4. Warping and Pre-processing: The necessity of resizing region proposals to a fixed 224 x 224 input size for the CNN, and the impact of aspect-ratio distortion.
5. Feature Extraction: Leveraging pre-trained ImageNet models (like AlexNet) and fine-tuning them for the specific detection task.
6. Classification vs. Detection: Why SVMs (Support Vector Machines) were used for final classification instead of a Softmax layer in the original R-CNN paper.
7. Bounding Box Regression: Mathematical intuition behind refining the coordinates of the predicted box to better fit the ground truth.
8. Bottlenecks of R-CNN: Analysis of why the original model is slow (offline feature storage) and expensive in terms of disk space and inference time.
9. Introduction to Fast R-CNN: Solving the speed bottleneck by passing the entire image through the CNN once rather than processing 2000 individual crops.
10. RoI (Region of Interest) Pooling: How to extract fixed-length feature vectors from valid regions of a feature map to enable end-to-end training.
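
The RoI pooling idea in point 10 can be sketched in a few lines; this is a simplification (integer bin edges, max over each bin), not the exact Fast R-CNN operator.

```python
import numpy as np

# Minimal RoI max-pooling sketch: split a region of a feature map into an
# out_h x out_w grid and take the max of each bin, yielding a fixed-size
# output regardless of the region's size.
def roi_max_pool(feat, x0, y0, x1, y1, out_h=2, out_w=2):
    region = feat[y0:y1, x0:x1]
    h, w = region.shape
    ys = np.linspace(0, h, out_h + 1).astype(int)  # bin edges (rows)
    xs = np.linspace(0, w, out_w + 1).astype(int)  # bin edges (cols)
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = region[ys[i]:ys[i+1], xs[j]:xs[j+1]].max()
    return out

feat = np.arange(36, dtype=float).reshape(6, 6)
pooled = roi_max_pool(feat, 0, 0, 4, 4)
print(pooled)  # [[ 7.  9.] [19. 21.]]
```

Because the output shape is fixed, features from arbitrarily sized proposals can all feed the same fully connected head.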

## Agenda for the next session
* Faster R-CNN

## Report Compiled by
Anukul Tiwari

## Attendees
Third Year Attendees: Mukil M Sir, Harshvardhan Saini Sir

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 07-10-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 07/10/2025

## Agenda
Faster R-CNN

## Summary
1. Introduction of a fully convolutional Region Proposal Network (RPN).
2. Discussion on anchor-based proposal generation.
3. Unified training of the RPN and a Fast R-CNN-style detector.
4. How eliminating external proposal computation leads to substantial inference-time speedups.
5. Study of its multi-task loss formulation.
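
The anchor-based generation in point 2 can be sketched as follows; the stride, scales and ratios are illustrative defaults, not necessarily the paper's exact settings.

```python
# Anchor-generation sketch: at each feature-map cell, place boxes of several
# scales and aspect ratios centred on that cell. With 3 scales x 3 ratios,
# every cell proposes 9 candidate boxes.
def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5  # area stays ~ s^2
                    anchors.append((cx - w/2, cy - h/2, cx + w/2, cy + h/2))
    return anchors

anchors = make_anchors(2, 3)
print(len(anchors))  # 2 * 3 * 9 = 54
```

The RPN then scores and regresses each anchor instead of running an external proposal algorithm like Selective Search.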


## Agenda for the next session
* Mask R-CNN
* YOLOv1
* YOLOv2

## Report Compiled by
Arnav Tripathi

## Attendees
Third Year Attendees: Mukil M Sir, Harshvardhan Saini Sir, Dilshad Sir, Mohd. Ashaz Khan Sir.

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 08-08-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 08/08/2025

## Agenda
MobileNet v1 and Squeeze-and-Excitation Network

## Summary
1. Introduction to MobileNet v1 and its motivation for building lightweight and efficient convolutional neural networks.
2. Overview of the MobileNet v1 architecture and its layer-wise design.
3. Explanation of Depthwise Separable Convolutions and how they differ from standard convolutions.
4. Discussion on the reduction of computational cost and number of parameters using depthwise and pointwise convolutions.
5. Detailed discussion on the Width Multiplier used to scale the number of channels in the network.
6. Explanation of the Resolution Multiplier and its role in reducing input feature map resolution.
7. Discussion on how width and resolution multipliers help in controlling the trade-off between accuracy and efficiency.
8. Detailed discussion on the Squeeze operation using global average pooling to capture channel-wise statistics.
9. Explanation of the Excitation operation, where learned weights are used for channel re-weighting.
10. Discussion on how SE blocks can be integrated into existing CNN architectures.
11. Analysis of how the Squeeze-and-Excitation mechanism resembles attention mechanisms by emphasizing informative channels and suppressing less useful ones.
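
The cost reduction in point 4 follows from a simple multiply-accumulate count; the layer sizes below are illustrative.

```python
# Cost sketch for depthwise separable convolution (MobileNet v1 intuition),
# counting multiply-accumulates. Sizes are illustrative.
def standard_cost(k, m, n, f):
    """k x k standard conv: M input channels, N output channels, F x F output."""
    return k * k * m * n * f * f

def separable_cost(k, m, n, f):
    """Depthwise k x k (one filter per channel) + 1x1 pointwise to N channels."""
    return k * k * m * f * f + m * n * f * f

k, m, n, f = 3, 32, 64, 112
std = standard_cost(k, m, n, f)
sep = separable_cost(k, m, n, f)
print(sep / std)  # ratio = 1/N + 1/k^2  ~= 0.127
```

The ratio 1/N + 1/k^2 is why a 3x3 separable layer costs roughly 8-9x less than its standard counterpart.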


## Agenda for the next session
* MobileNet v2
* U-Net
* EfficientNet


## Report Compiled by
Ritesh Kumbhare

## Attendees
3rd year: Harshvardhan Sir, Mukil Sir, Dilshad Sir.

2nd year: Anab, Arnav, Arjav, Anukul, Abhishek, Ritesh, Rajat, Sreenandan, Ayushman.

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 17-08-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 17/08/2025

## Agenda
MobileNetV2, U-Net, and EfficientNet

## Summary
1. Analyzed the shift from standard residuals to Inverted Residuals in MobileNetV2 and why connecting thin bottleneck layers is more memory-efficient.
2. Discussed the intuition behind Linear Bottlenecks, specifically how non-linear activations like ReLU can destroy information in low-dimensional spaces.
3. Compared the concatenation method in U-Net skip connections to the summation used in ResNet, noting how it preserves spatial texture for precise localization.
4. Examined the Symmetric path of U-Net and how the decoder effectively reconstructs the image from the low-resolution context provided by the encoder.
5. Critiqued traditional manual scaling methods (just adding layers vs. just adding channels) and why they eventually lead to accuracy saturation.
6. Evaluated the Compound Scaling rule in EfficientNet as a method to balance depth, width, and resolution simultaneously for better FLOPs efficiency.
7. Explored the role of Neural Architecture Search (NAS) in creating the EfficientNet-B0 baseline and how it optimizes for both accuracy and latency.
8. Briefly touched upon the integration of Squeeze-and-Excitation blocks within the MBConv layers to help the model focus on the most important feature channels.
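
The compound-scaling rule in point 6 can be sketched with the coefficients reported for the EfficientNet-B0 baseline (alpha = 1.2, beta = 1.1, gamma = 1.15).

```python
# Compound-scaling sketch (EfficientNet): a single coefficient phi scales
# depth, width and resolution together under alpha * beta^2 * gamma^2 ~= 2,
# so each increment of phi roughly doubles FLOPs.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

d, w, r = scale(2)
flops_mult = d * w ** 2 * r ** 2  # FLOPs grow ~ depth * width^2 * res^2
print(round(flops_mult, 2))       # ~3.69, close to the targeted 2^2 = 4
```

This is the balanced alternative to the manual scaling critiqued in point 5, where only one axis is grown at a time.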


## Agenda for the next session
* R-CNN

## Report Compiled by
Anab Farooq

## Attendees
Fourth Year Attendees: Karaka Prasanth Naidu Sir, Manav Jain Sir.

Third Year Attendees: Green Kedia Sir, Mukil M Sir, Harshvardhan Saini Sir, Daksh Mor Sir, Mohd. Ashaz Khan Sir, Priyam Pritam Panda Sir, Abhinav Jha Sir, Dilshad Sir

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 17-10-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 17/10/2025

## Agenda
Mask R-CNN, YOLOv1, YOLOv2

## Summary
1. How RoIAlign preserves exact spatial locations through bilinear interpolation.
2. Advantage of decoupling mask prediction from classification.
3. How multi-task learning allows the model to focus on spatial layout.
4. Brief overview of techniques such as Feature Pyramid Networks (FPN) and Data Distillation for unlabeled data.
5. Reframing object detection as a single regression problem.
6. Intuition behind YOLO working better than R-CNNs at capturing global context.
7. Mathematical intuition behind the multi-part loss function of YOLO.
8. Analysis of systemic improvements in YOLOv2, such as the introduction of anchor boxes to improve recall.
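
The bilinear interpolation behind RoIAlign (point 1) reduces to a weighted average of the four surrounding cells; a minimal sketch:

```python
import numpy as np

# Bilinear-sampling sketch behind RoIAlign: sample the feature map at
# real-valued coordinates instead of snapping to integer cells, which is
# what lets Mask R-CNN keep exact spatial alignment for masks.
def bilinear(feat, y, x):
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) +
            feat[y0, x1] * (1 - dy) * dx +
            feat[y1, x0] * dy * (1 - dx) +
            feat[y1, x1] * dy * dx)

feat = np.array([[0., 1.],
                 [2., 3.]])
print(bilinear(feat, 0.5, 0.5))  # 1.5, the centre of the four cells
```

RoI pooling's hard quantisation loses sub-cell position; sampling at fractional coordinates like this avoids that misalignment.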


## Agenda for the next session
* SimCLR V1
* SimCLR V2

## Report Compiled by
Ayushman Dutta

## Attendees
Third Year Attendees: Mukil M Sir, Harshvardhan Saini Sir, Dilshad Sir, Green Sir

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 19-12-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 19/12/2025

## Agenda
RNN, LSTMs, GRUs, Sequence modeling

## Summary
1. Introduced sequence modeling and looked at how traditional approaches fail to properly capture positional relations.
2. Discussed how dilated 1-D convolutions can also be used for sequence modeling.
3. Looked at how RNNs improve on feedforward neural networks by using the unfolding mechanism to capture these relationships.
4. Examined how LSTMs improve on RNNs through their gates, allowing them to retain information accurately while mitigating the vanishing gradient problem.
5. Discussed the Constant Error Carousel and Bidirectional LSTMs.
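
The gating described in point 4 can be sketched as a single LSTM step; the weights below are random and only the shapes and gate equations matter.

```python
import numpy as np

# Single LSTM step sketch: forget (f), input (i) and output (o) gates
# control what the cell state keeps, adds and exposes; g is the candidate.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """x: input (d,); h, c: hidden/cell state (n,); W: (4n, d+n); b: (4n,)."""
    z = W @ np.concatenate([x, h]) + b
    n = h.shape[0]
    f, i, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell update
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
d, n = 3, 4
W, b = rng.normal(size=(4 * n, d + n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

The additive `sigmoid(f) * c + ...` update is the Constant Error Carousel from point 5: gradients flow through the cell state largely unattenuated when the forget gate stays open.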


## Agenda for the next session
* Tokenization
* Word Embeddings & Word2Vec

## Report Compiled by
Sreenandan Shashidharan

## Attendees
Third Year Attendees: Green Kedia Sir, Harshvardhan Saini Sir

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 25-10-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 25/10/2025

## Agenda
SimCLR v1, SimCLR v2, Contrastive Learning, Self-Supervised Representation Learning

## Summary
1. The session began with an introduction to self-supervised learning and the motivation behind learning representations without labeled data. We discussed how traditional supervised learning relies heavily on annotated datasets and how contrastive learning provides an effective alternative.
2. SimCLR v1 was introduced, focusing on the core idea of contrastive loss (NT-Xent loss) and how positive and negative pairs are constructed using strong data augmentations. The importance of augmentation strategies such as random cropping, color jitter, Gaussian blur, and normalization was emphasized.
3. We then analyzed the SimCLR architecture, including the encoder backbone and the projection head, and discussed why representations are taken before the projection head during downstream tasks. The role of large batch sizes and temperature scaling in improving contrastive learning performance was also examined.
4. Following this, SimCLR v2 was discussed, highlighting improvements over v1 such as deeper and wider networks, better training strategies, and the introduction of semi-supervised fine-tuning. The concept of freezing the backbone and fine-tuning with limited labeled data was explained in detail.
5. The session concluded with a comparison between SimCLR v1 and v2, focusing on performance gains, training efficiency, and practical use cases in real-world vision tasks.
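
The NT-Xent loss from point 2 can be sketched directly; the embeddings below are random and the temperature value is illustrative.

```python
import numpy as np

# NT-Xent sketch: for 2N augmented views, each view's positive is its
# partner from the same image; all other 2N-2 views act as negatives.
# Cosine similarities are temperature-scaled and fed to cross-entropy.
def nt_xent(z, temperature=0.5):
    """z: (2N, d) embeddings; rows 2k and 2k+1 form a positive pair."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n2 = z.shape[0]
    pos = np.arange(n2) ^ 1                            # partner index (0<->1, 2<->3, ...)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n2), pos].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))  # N = 4 positive pairs
loss = nt_xent(z)
print(loss > 0)  # True
```

Large batches matter here because every extra example in the batch contributes more negatives to the denominator.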

## Agenda for the next session
* RNN
* LSTM

## Report Compiled by
Rajat Shedshyal

## Attendees
Third Year Attendees: Mukil M Sir, Harshvardhan Saini Sir, Green Sir

Second Year Attendees: Anab, Arnav, Ritesh, Rajat, Arjav, Abhishek, Ayushman, Sreenandan, Anukul

## Absentees
Second Year: None
**Machine Learning/2025-26/Session 28-07-25.md**
# Session of Machine Learning Division of CyberLabs
Conducted on: 28/07/2025

## Agenda
ResNet, DenseNet and Kaggle Competition submissions

## Summary
1. Degradation problem in optimising very deep neural networks.
2. Benefit of using skip connections and identity mapping from the previous layer.
3. Training process of ResNet (Stochastic Depth Regularisation).
4. Brief overview of Fisher Vectors.
5. Dense Connectivity (and the benefit of concatenation over summation in this setting).
6. Intuition behind Growth Rate, Bottleneck layers and Compression in transition layers.
7. Advantage of deterministic connections in DenseNet in preventing overfitting and ensuring good gradient flow.
8. Brief discussion on how Residual networks behave like ensembles of relatively shallow networks.
9. Analysis of Feature Reuse by observing average absolute filter weights of conv layers in a trained DenseNet.
10. Training data sources and performance of DenseNet on various competitive datasets.
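
The growth-rate intuition in point 6 reduces to simple channel arithmetic; the initial channel count and growth rate below are illustrative.

```python
# Channel-count sketch for dense connectivity: each layer concatenates all
# previous feature maps, so layer l of a dense block receives k0 + l * k
# input channels, where k is the growth rate. Values are illustrative.
def dense_in_channels(k0, k, l):
    """Input channels to layer l (0-indexed) of a dense block."""
    return k0 + l * k

k0, k = 64, 32  # initial channels, growth rate
widths = [dense_in_channels(k0, k, l) for l in range(6)]
print(widths)  # [64, 96, 128, 160, 192, 224]
```

This linear growth is why DenseNet needs the bottleneck and compression layers from point 6: without them the concatenated inputs would become expensive deep in the network.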

## Agenda for the next session
* Squeeze and Excitation Networks
* MobileNet V1
* MobileNet V2

## Report Compiled by
Ayushman Dutta

## Attendees
3rd year: Mukil Sir

2nd year: Anab, Arnav, Arjav, Anukul, Abhishek, Ritesh, Rajat, Sreenandan, Ayushman
## Absentees
Second Year: None


