MIT 6.S965 / 6.5940 • Fall 2022-2024
Instructor: Song Han (Associate Professor, MIT EECS)
Lecture notes for MIT 6.S965 (Fall 2022) and MIT 6.5940 (Fall 2023 & 2024)
Course | Videos | Slides | Notes | Homework |
---|---|---|---|---|
MIT 6.5940 • 2024 • Fall | Videos | Slides | Notes | Lab 1 / Lab 2 / Lab 3 / Lab 4 / Lab 5 |
MIT 6.5940 • 2023 • Fall | Videos | Slides | Notes | - |
MIT 6.S965 • 2022 • Fall | Videos | Slides | Notes | Lab 4: Deployment on MCU |
-
Basic Terminology, Shape of Tensors
Synapse(weight), Neuron(activation), Cell body
Fully-Connected layer, Convolution layer(padding, stride, receptive field, grouped convolution), Pooling layer
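As a quick shape sanity check, here is a minimal sketch of the standard convolution output-size arithmetic covering padding and stride; the function name and the example layer are illustrative, not from the notes.

```python
# Minimal sketch: spatial output size of a convolution or pooling layer,
# h_out = floor((h + 2*padding - kernel) / stride) + 1
def conv2d_out_hw(h, w, k, stride=1, padding=0):
    h_out = (h + 2 * padding - k) // stride + 1
    w_out = (w + 2 * padding - k) // stride + 1
    return h_out, w_out

print(conv2d_out_hw(224, 224, 7, stride=2, padding=3))  # (112, 112)
```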
-
Metrics(latency, storage, energy)
Memory-Related(#parameters, model size, #activations), Computation(MACs, FLOPs)
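A minimal sketch of how these counts work out for a single convolution layer; function names and the ResNet-style example layer are illustrative.

```python
# Minimal sketch: parameter and MAC counts for a 2D convolution layer.
def conv2d_params(c_in, c_out, kh, kw, bias=True):
    # one kh x kw filter per (input channel, output channel) pair
    return c_out * c_in * kh * kw + (c_out if bias else 0)

def conv2d_macs(c_in, c_out, kh, kw, h_out, w_out):
    # each output element needs c_in * kh * kw multiply-accumulates;
    # FLOPs ~= 2 * MACs (one multiply plus one add)
    return c_out * h_out * w_out * c_in * kh * kw

# e.g. a ResNet-style stem conv: 3 -> 64 channels, 7x7 kernel, 112x112 output
print(conv2d_params(3, 64, 7, 7))          # 9,472 parameters
print(conv2d_macs(3, 64, 7, 7, 112, 112))  # ~118M MACs
```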
-
Pruning Granularity, Pruning Criterion
Unstructured/Structured pruning(Fine-grained/Pattern-based/Vector-level/Kernel-level/Channel-level)
Pruning Criterion: Magnitude(L1-norm, L2-norm), Sensitivity and Saliency(SNIP), Loss Change(First-Order, Second-Order Taylor Expansion)
Data-Aware Pruning Criterion: Average Percentage of Zeros(APoZ), Reconstruction Error, Entropy
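A minimal sketch of the magnitude criterion above in both unstructured (fine-grained) and structured (channel-level, L2-norm) forms; function names are illustrative.

```python
import torch

def fine_grained_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Unstructured pruning: zero out the smallest |w| entries (assumes sparsity > 0)."""
    num_zeros = int(round(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(num_zeros).values
    mask = weight.abs() > threshold
    return weight * mask

def channel_prune_l2(weight: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Structured (channel-level) pruning: keep output channels with the largest L2 norm."""
    n_keep = int(round(weight.shape[0] * keep_ratio))
    norms = weight.flatten(1).norm(p=2, dim=1)       # one score per output channel
    keep = norms.topk(n_keep).indices.sort().values
    return weight[keep]
```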
-
Automatic Pruning, Lottery Ticket Hypothesis
Finding Pruning Ratio: Reinforcement Learning based, Rule based, Regularization based, Meta-Learning based
Lottery Ticket Hypothesis(Winning Ticket, Iterative Magnitude Pruning, Scaling Limitation)
Pruning at Initialization(Connection Sensitivity, Gradient Flow)
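A compact sketch of iterative magnitude pruning for finding winning tickets: train, prune a fraction of the surviving weights by magnitude, rewind the survivors to their initialization, repeat. Here `train` is an assumed user-supplied routine that applies the masks during training; pruning biases alongside weights is a simplification.

```python
import copy
import torch

def find_winning_ticket(model, train, rounds=3, prune_per_round=0.2):
    init_state = copy.deepcopy(model.state_dict())    # theta_0, for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)                           # assumed: trains with masks applied
        for n, p in model.named_parameters():
            alive = p[masks[n].bool()].abs()
            thresh = alive.quantile(prune_per_round)  # prune e.g. 20% of survivors
            masks[n] *= (p.abs() > thresh).float()
        model.load_state_dict(init_state)             # rewind survivors to init
    return masks
```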
-
System & Hardware Support for Fine-grained Sparsity
Efficient Inference Engine(EIE format: relative index, column pointer)
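A toy sketch of EIE-style relative indexing for one sparse column, assuming 4-bit indices so that gaps longer than 15 zeros are broken up with a padding zero, as described in the EIE paper.

```python
def encode_relative_index(column, max_jump=15):
    # store each nonzero with the run length of zeros since the previous entry
    values, rel_idx = [], []
    gap = 0
    for w in column:
        if w == 0:
            gap += 1
            if gap > max_jump:           # index overflow: emit a padding zero
                values.append(0)
                rel_idx.append(max_jump)
                gap = 0
        else:
            values.append(w)
            rel_idx.append(gap)
            gap = 0
    return values, rel_idx

print(encode_relative_index([0, 0, 3, 0, 0, 0, 7]))  # ([3, 7], [2, 3])
```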
-
Sparse Matrix-Matrix Multiplication, GPU Support for Sparsity
Sparse Matrix-Matrix Multiplication(SpMM), CSR format
GPU Support for Sparsity: Hierarchical 1-Dimensional Tiling, Row Swizzle, M:N Sparsity, Block SpMM(Blocked-ELL format), PatDNN(FKW format)
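A minimal NumPy sketch of CSR storage and the SpMM loop above, assuming a dense right-hand side; names are illustrative.

```python
import numpy as np

def dense_to_csr(A):
    rows, cols = np.nonzero(A)                 # row-major order groups entries by row
    values = A[rows, cols]
    indptr = np.zeros(A.shape[0] + 1, dtype=int)
    np.add.at(indptr, rows + 1, 1)             # count nonzeros per row...
    indptr = np.cumsum(indptr)                 # ...then prefix-sum into row pointers
    return values, cols, indptr

def csr_spmm(values, col_idx, indptr, B):
    """C = A_sparse @ B_dense, touching only stored nonzeros."""
    C = np.zeros((len(indptr) - 1, B.shape[1]))
    for i in range(len(indptr) - 1):
        for k in range(indptr[i], indptr[i + 1]):
            C[i] += values[k] * B[col_idx[k]]
    return C

A = np.array([[0., 2., 0.], [1., 0., 3.]])
B = np.random.randn(3, 4)
assert np.allclose(csr_spmm(*dense_to_csr(A), B), A @ B)
```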
-
Basic Concepts of Quantization
Numeric Data Types: Integer, Fixed-Point, Floating-Point(IEEE FP32/FP16, BF16, NVIDIA FP8), INT4 and FP4
Uniform vs Non-uniform quantization, Symmetric vs Asymmetric quantization
Linear Quantization: Integer-Arithmetic-Only Quantization, Sources of Quantization Error(clipping, rounding, scaling factor, zero point)
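A minimal sketch of asymmetric linear quantization, q = round(x / s) + z, showing where the clipping and rounding errors listed above enter; the uint8 range and function names are assumptions.

```python
import torch

def get_qparams(x: torch.Tensor, n_bits: int = 8):
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min().item() / scale.item()))
    return scale, zero_point

def quantize(x, scale, zero_point, n_bits=8):
    q = torch.round(x / scale) + zero_point
    return q.clamp(0, 2 ** n_bits - 1)    # clipping error happens here

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)       # rounding error remains

x = torch.randn(100)
s, z = get_qparams(x)
x_hat = dequantize(quantize(x, s, z), s, z)
print((x - x_hat).abs().max())            # bounded by roughly scale / 2
```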
-
Vector Quantization(Deep Compression: iterative pruning, K-means-based quantization, Huffman encoding), Product Quantization
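A sketch of Deep Compression's K-means weight sharing: weights collapse onto 2^b shared centroids, so only b-bit indices plus a tiny codebook need to be stored. Linear centroid initialization follows the paper; a plain Lloyd loop stands in for the full training pipeline.

```python
import torch

def kmeans_quantize(weight: torch.Tensor, n_bits: int = 4, n_iters: int = 20):
    flat = weight.flatten()
    k = 2 ** n_bits
    # linear initialization over [min, max], as in the Deep Compression paper
    centroids = torch.linspace(flat.min().item(), flat.max().item(), k)
    for _ in range(n_iters):  # plain Lloyd iterations
        assign = (flat[:, None] - centroids[None, :]).abs().argmin(dim=1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = flat[assign == j].mean()
    return assign.view_as(weight), centroids

idx, codebook = kmeans_quantize(torch.randn(64, 64))
w_hat = codebook[idx]  # reconstructed (quantized) weights
```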
-
Weight Quantization: Per-Tensor vs Per-Channel Quantization(contrasted in the sketch below), Group Quantization(Per-Vector, MX), Weight Equalization, Adaptive Rounding
Activation Quantization: During Training(EMA), Calibration(Min-Max, KL-Divergence, Mean Squared Error)
Bias Correction, Zero-Shot Quantization(ZeroQ)
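A small sketch contrasting per-tensor and per-channel symmetric weight scales: with one scale per output channel, an outlier in one channel no longer inflates every other channel's quantization step. Names are illustrative.

```python
import torch

def per_tensor_scale(w, n_bits=8):
    return w.abs().max() / (2 ** (n_bits - 1) - 1)       # one scale for all of w

def per_channel_scales(w, n_bits=8):
    # one scale per output channel (row of the weight matrix)
    return w.flatten(1).abs().amax(dim=1) / (2 ** (n_bits - 1) - 1)

w = torch.randn(64, 128)
s = per_channel_scales(w)                                 # shape [64]
w_hat = torch.round(w / s[:, None]).clamp(-127, 127) * s[:, None]
print((w - w_hat).abs().max())                            # per-channel error
```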
-
Quantization-Aware Training, Low bit-width quantization
Fake quantization, Straight-Through Estimator
Binary Quantization(Deterministic, Stochastic, XNOR-Net), Ternary Quantization
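A minimal sketch of fake quantization with the straight-through estimator: the forward pass quantizes then immediately dequantizes, while the backward pass pretends round() is the identity so gradients still reach the weights. An int8 grid is assumed.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        return torch.clamp(torch.round(x / scale), -128, 127) * scale  # int8 grid

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through: identity gradient for x

x = torch.randn(8, requires_grad=True)
y = FakeQuantSTE.apply(x, torch.tensor(0.1))
y.sum().backward()
print(x.grad)  # all ones: round() was treated as identity
```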
-
Neural Architecture Search: basic concepts & manually-designed neural networks
Input Stem, Stage, Head
AlexNet, VGGNet, SqueezeNet(fire module), ResNet(bottleneck block, residual connection), ResNeXt(grouped convolution)
MobileNet(depthwise-separable convolution, width/resolution multiplier), MobileNetV2(inverted bottleneck block), ShuffleNet(channel shuffle), SENet(squeeze-and-excitation block), MobileNetV3(h-swish)
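A sketch of the depthwise-separable convolution at the heart of the MobileNet family: a per-channel k×k depthwise conv followed by a 1×1 pointwise conv, cutting MACs by roughly k² versus a standard k×k convolution. The exact block layout (BN + ReLU6) is illustrative.

```python
import torch.nn as nn

def depthwise_separable(c_in, c_out, k=3, stride=1):
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, stride, padding=k // 2, groups=c_in),  # depthwise
        nn.BatchNorm2d(c_in),
        nn.ReLU6(inplace=True),
        nn.Conv2d(c_in, c_out, kernel_size=1),                          # pointwise
        nn.BatchNorm2d(c_out),
        nn.ReLU6(inplace=True),
    )
```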
-
Neural Architecture Search: Search Space
Search Space: Macro, Chain-Structured, Cell-based(NASNet), Hierarchical(Auto-DeepLab, NAS-FPN)
design search space: Cumulative Error Distribution, FLOPs distribution, zero-cost proxy
-
Neural Architecture Search: Performance Estimation & Hardware-Aware NAS
Weight Inheritance, HyperNetwork, Weight Sharing(super-network, sub-network)
Performance Estimation Heuristics: Zen-NAS, GradSign
-
Knowledge Distillation(distillation loss, softmax temperature; loss sketched below)
What to Match?: intermediate weights, features(attention maps), sparsity pattern, relational information
Distillation Scheme: Offline Distillation, Online Distillation, Self-Distillation
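A minimal sketch of the temperature-scaled distillation loss referenced above, where α balances soft teacher targets against the hard-label cross-entropy; parameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 keeps soft-target gradients on the same scale as hard ones
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```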
-
Applications: Object Detection, Semantic Segmentation, GAN, NLP
Tiny Neural Network: NetAug
-
MCUNetV1: TinyNAS, TinyEngine
MCUNetV2: MCUNetV2 architecture(MobileNetV2-RD), patch-based inference, joint automated search
-
Memory Hierarchy of Microcontroller, Primary Memory Format(NCHW, NHWC, CHWN)
Parallel Computing Techniques: Loop Unrolling, Loop Reordering, Loop Tiling, SIMD programming
Inference Optimization: Im2col, In-place depthwise convolution, appropriate data layout(pointwise, depthwise convolution), Winograd convolution
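A NumPy sketch of the Im2col item above: unrolling input patches into columns turns the convolution into a single GEMM. Shapes and names are illustrative (no padding, stride 1 by default).

```python
import numpy as np

def im2col(x, kh, kw, stride=1):
    c, h, w = x.shape
    h_out = (h - kh) // stride + 1
    w_out = (w - kw) // stride + 1
    cols = np.empty((c * kh * kw, h_out * w_out))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[:, i*stride:i*stride+kh, j*stride:j*stride+kw]
            cols[:, i * w_out + j] = patch.ravel()   # one patch per column
    return cols

x = np.random.randn(3, 8, 8)
weight = np.random.randn(16, 3, 3, 3)                # 16 filters of shape 3x3x3
out = weight.reshape(16, -1) @ im2col(x, 3, 3)       # conv as one dense GEMM
out = out.reshape(16, 6, 6)
```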
-
NLP Task(Discriminative, Generative), Pre-Transformer Era(RNN/LSTM, CNN)
Transformer: Tokenizer, Embedding, Multi-Head Attention(self-attention), Feed-Forward Network, Layer Normalization(Pre-Norm, Post-Norm), Positional Encoding
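A minimal single-head sketch of the scaled dot-product self-attention listed above, with no masking or batching; the random weights are purely for illustration.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # [seq, d_head] each
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v        # weighted sum of values

seq, d_model, d_head = 10, 64, 16
x = torch.randn(seq, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # torch.Size([10, 16])
```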
-
Types of Transformer-based Models: Encoder-Decoder(T5), Encoder-only(BERT), Decoder-only(GPT)
Relative Positional Encoding(ALiBi, RoPE, interpolating RoPE), KV cache optimization(Multi-query Attention, Grouped-query Attention), Gated Linear Unit
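A sketch of KV caching for one attention head during autoregressive decoding: each step appends its key/value pair instead of recomputing the whole prefix. This growing cache is exactly the memory that multi-query and grouped-query attention shrink by sharing K/V across heads.

```python
import torch

def decode_step(q_t, k_t, v_t, k_cache, v_cache):
    k_cache = torch.cat([k_cache, k_t[None]], dim=0)   # [t, d] after append
    v_cache = torch.cat([v_cache, v_t[None]], dim=0)
    scores = (k_cache @ q_t) / (q_t.shape[-1] ** 0.5)  # attend over full prefix
    out = torch.softmax(scores, dim=0) @ v_cache
    return out, k_cache, v_cache

d = 16
k_cache, v_cache = torch.empty(0, d), torch.empty(0, d)
for _ in range(5):                                     # 5 decode steps
    q, k, v = torch.randn(d), torch.randn(d), torch.randn(d)
    out, k_cache, v_cache = decode_step(q, k, v, k_cache, v_cache)
print(k_cache.shape)                                   # torch.Size([5, 16])
```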
-
Quantization Difficulty of LLMs, Bottlenecks of Edge LLM Inference(Memory-Bound Decoding, Memory Footprint of Weights)
Weight-activation Quantization: SmoothQuant(Activation Smoothing)
Weight-only Quantization: AWQ(1% Salient Weights, Activation-aware Scaling)
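A minimal sketch of the SmoothQuant idea: a per-channel scale s migrates quantization difficulty from activation outliers into the weights, since Y = (X·diag(s)⁻¹)(diag(s)·W) is mathematically unchanged while X/s is far easier to quantize. α = 0.5 follows the paper; shapes are illustrative.

```python
import torch

def smooth(x, w, alpha=0.5):
    # x: [tokens, c_in] activations, w: [c_in, c_out] weights
    s = x.abs().amax(dim=0) ** alpha / w.abs().amax(dim=1) ** (1 - alpha)
    return x / s, w * s[:, None]

x = torch.randn(32, 64) * torch.rand(64) * 10   # a few outlier input channels
w = torch.randn(64, 128)
x_s, w_s = smooth(x, w)
assert torch.allclose(x @ w, x_s @ w_s, atol=1e-4)  # product is unchanged
```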
-
Efficient System Support for LLM Quantization
System for Edge: TinyChat(Hardware-aware Weight Packing, Kernel Fusion)
System for Cloud: Overhead in Quantized GEMM, QServe(SmoothAttention, Dequantization with Register-Level Parallelism)
-
Weight Sparsity: Wanda(criterion sketched below)
Contextual Sparsity: Deja Vu, Mixture-of-Experts
Attention Sparsity: SpAtten, H2O
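A sketch of the Wanda criterion above: each weight is scored by |W| times the L2 norm of its input activations over a small calibration set, and pruning is applied per output row; names are illustrative.

```python
import torch

def wanda_prune(w, x, sparsity=0.5):
    # w: [c_out, c_in] weights, x: [tokens, c_in] calibration activations
    score = w.abs() * x.norm(p=2, dim=0)        # |W_ij| * ||X_j||_2
    k = int(w.shape[1] * sparsity)
    idx = score.argsort(dim=1)[:, :k]           # lowest-scored weights per row
    mask = torch.ones_like(w)
    mask.scatter_(1, idx, 0.0)
    return w * mask

w, x = torch.randn(128, 64), torch.randn(256, 64)
print((wanda_prune(w, x) == 0).float().mean())  # ~0.5 sparsity
```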
-
Supervised Fine-Tuning, Reinforcement Learning from Human Feedback, Direct Preference Optimization
Parameter-Efficient Fine-Tuning: Additive(Adapter, Prompt/Prefix Tuning) Selective(BitFit), Reparameterized(LoRA)
PEFT Quantization: QLoRA, BitDelta
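A minimal LoRA sketch: the pretrained layer is frozen and only a rank-r update B·A is trained, with B initialized to zero so the wrapped layer starts identical to the original; hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = linear
        for p in self.base.parameters():         # freeze pretrained W and bias
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # B @ A starts at zero, so training begins from the original model
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(512, 512), r=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 8192
```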
-
Vision Transformer, High-Resolution Dense Prediction, Segment Anything
Window Attention(Swin Transformer, FlatFormer), ReLU Linear Attention(EfficientViT), Sparse Attention(SparseViT)
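A sketch of the ReLU linear attention idea behind EfficientViT: replacing softmax(QKᵀ) with relu(Q)·relu(K)ᵀ lets the matmuls be reassociated, so the cost is linear rather than quadratic in sequence length; single head, no batching.

```python
import torch

def relu_linear_attention(q, k, v, eps=1e-6):
    # q, k, v: [n, d]
    q, k = torch.relu(q), torch.relu(k)
    kv = k.transpose(-2, -1) @ v                            # [d, d]: linear in n
    num = q @ kv
    den = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
    return num / den                                        # normalized per query

n, d = 1024, 32
out = relu_linear_attention(torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))
print(out.shape)  # torch.Size([1024, 32])
```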
-
2D CNNs for Video Understanding, 3D CNNs for Video Understanding(I3D), Temporal Shift Module(TSM)
Other Efficient Methods: Kernel Decomposition, Multi-Scale Modeling, Neural Architecture Search(X3D), Skipping Redundant Frames/Clips, Utilizing Spatial Redundancy
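A sketch of the Temporal Shift Module: shifting a fraction of channels one step forward or backward in time lets a plain 2D CNN exchange temporal information at zero extra FLOPs. This is the offline (bi-directional) variant with fold_div=8 as in the paper.

```python
import torch

def temporal_shift(x, fold_div=8):
    # x: [batch, time, channels, h, w]
    fold = x.shape[2] // fold_div
    out = torch.zeros_like(x)
    out[:, 1:, :fold] = x[:, :-1, :fold]               # shift fold channels forward
    out[:, :-1, fold:2*fold] = x[:, 1:, fold:2*fold]   # shift fold channels backward
    out[:, :, 2*fold:] = x[:, :, 2*fold:]              # leave the rest untouched
    return out

x = torch.randn(2, 8, 64, 14, 14)
print(temporal_shift(x).shape)                         # torch.Size([2, 8, 64, 14, 14])
```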
-
Generative Adversarial Networks (GANs)
GANs(Generator, Discriminator), Conditional/Unconditional GANs, Difficulties in GANs
Generator Compression(GAN Compression), Dynamic-Cost GANs(Anycost GANs), Data-Efficient GANs(Differentiable Augmentation)
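For reference, a minimal sketch of the standard generator/discriminator objectives behind the GAN game above, in the common non-saturating, logits-based BCE form; function names are illustrative.

```python
import torch
import torch.nn.functional as F

def d_loss(d_real_logits, d_fake_logits):
    # discriminator: push real samples toward 1, generated samples toward 0
    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real + fake

def g_loss(d_fake_logits):
    # generator: maximize log D(G(z)) instead of minimizing log(1 - D(G(z)))
    return F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
```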