A comprehensive Jupyter notebook demonstrating and comparing three modern generative modeling approaches: GANs, Diffusion Models, and Flow Matching. All models are trained to transform random noise into a target 2D distribution (8 Gaussians arranged in a circle), with rich visualizations showing their learning dynamics.
This project provides an educational, visual comparison of three fundamental generative modeling paradigms:
- Generative Adversarial Networks (GANs) - Adversarial training between generator and discriminator
- Diffusion Models - Iterative denoising through reverse diffusion process
- Flow Matching - Learning continuous normalizing flows via velocity field matching
All models work on a simple 2D target distribution, making it easy to visualize and understand how each approach learns to generate data.
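For concreteness, the 8-mode target can be sampled along these lines. This is a minimal sketch: the radius and per-mode standard deviation are illustrative assumptions, not necessarily the notebook's exact values, and `sample_8gaussians` is a hypothetical helper name.

```python
import numpy as np

def sample_8gaussians(n, radius=2.0, std=0.1, seed=None):
    """Draw n 2D points from 8 Gaussians whose means sit evenly on a circle."""
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(8) / 8                     # 8 evenly spaced angles
    centers = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    modes = rng.integers(0, 8, size=n)                        # random mode per sample
    return centers[modes] + std * rng.normal(size=(n, 2))     # jitter around each center

target = sample_8gaussians(1000, seed=0)                      # (1000, 2) array of points
```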
Visualizations:
- Linear interpolation path visualization with velocity fields
- Flow progression snapshots at 9 timesteps
- Animated ODE integration
- Final comparison plots
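The final comparison plots are plain matplotlib scatter overlays of generated points on the target; a minimal sketch is below (the function and variable names are placeholders, not the notebook's own).

```python
import matplotlib.pyplot as plt

def plot_overlay(target, generated, title="Target vs. generated samples"):
    """Overlay two (n, 2) point clouds on the same axes for visual comparison."""
    fig, ax = plt.subplots(figsize=(5, 5))
    ax.scatter(target[:, 0], target[:, 1], s=5, alpha=0.4, label="target")
    ax.scatter(generated[:, 0], generated[:, 1], s=5, alpha=0.4, label="generated")
    ax.set_aspect("equal")
    ax.set_title(title)
    ax.legend()
    plt.show()
```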
pip install numpy matplotlib torch jupyter ipython

Open 2d_viz.ipynb and run all cells sequentially. The notebook will train all three models from scratch and generate all visualizations.
GAN:
- Fast initial learning (first 100 iterations)
- Risk of mode collapse; requires careful hyperparameter tuning
- Final samples closely match target distribution
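A minimal sketch of the adversarial training loop on this 2D task, using a non-saturating BCE loss. The network sizes, learning rates, and batch size are assumptions rather than the notebook's exact settings, and `sample_8gaussians` is the hypothetical target sampler sketched earlier.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

G, D = mlp(2, 2), mlp(2, 1)                     # generator: 2D noise -> 2D point
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = torch.as_tensor(sample_8gaussians(256), dtype=torch.float32)
    fake = G(torch.randn(256, 2))

    # Discriminator step: push real toward 1 and (detached) fake toward 0
    d_loss = bce(D(real), torch.ones(256, 1)) + bce(D(fake.detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make the discriminator label fakes as real
    g_loss = bce(D(fake), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```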
Diffusion Model:
- Stable training with simple MSE loss
- Smooth, gradual denoising process
- High-quality samples with 1000 denoising steps
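A sketch of the DDPM-style training objective and reverse sampler described above: noise a clean sample to a random timestep, regress the added noise with MSE, then denoise from pure noise over 1000 steps. The linear beta schedule, network size, and time conditioning (raw t/T appended to the input) are assumptions; the notebook's exact choices may differ.

```python
import torch
import torch.nn as nn

T = 1000                                              # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)                 # assumed linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative product \bar{alpha}_t

# eps_model predicts the noise added to x_t, given (x_t, t/T)
eps_model = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                          nn.Linear(128, 128), nn.ReLU(),
                          nn.Linear(128, 2))

def diffusion_loss(x0):
    """Simple MSE between true and predicted noise at a random timestep."""
    t = torch.randint(0, T, (x0.shape[0],))
    ab = alphas_bar[t].unsqueeze(1)
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps       # forward noising q(x_t | x_0)
    pred = eps_model(torch.cat([xt, t.unsqueeze(1) / T], dim=1))
    return ((pred - eps) ** 2).mean()

@torch.no_grad()
def ddpm_sample(n):
    """Reverse diffusion: start from pure noise and denoise for T steps."""
    x = torch.randn(n, 2)
    for t in reversed(range(T)):
        eps = eps_model(torch.cat([x, torch.full((n, 1), t / T)], dim=1))
        alpha, ab = 1.0 - betas[t], alphas_bar[t]
        x = (x - (1.0 - alpha) / (1.0 - ab).sqrt() * eps) / alpha.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x
```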
Flow Matching:
- Direct velocity field learning
- Efficient ODE-based sampling (100 steps)
- Clean interpolation from noise to data
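A sketch of the flow-matching objective with a linear interpolation path and Euler ODE sampling in 100 steps, matching the description above; the network size and time conditioning are assumptions.

```python
import torch
import torch.nn as nn

# v_model predicts the velocity dx/dt given (x_t, t)
v_model = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                        nn.Linear(128, 128), nn.ReLU(),
                        nn.Linear(128, 2))

def flow_matching_loss(x1):
    """Regress the constant velocity x1 - x0 along the line from noise to data."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], 1)                 # random time in [0, 1)
    xt = (1 - t) * x0 + t * x1                     # linear interpolation path
    pred_v = v_model(torch.cat([xt, t], dim=1))
    return ((pred_v - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def fm_sample(n, steps=100):
    """Euler integration of dx/dt = v(x, t) from t=0 (noise) to t=1 (data)."""
    x = torch.randn(n, 2)
    for i in range(steps):
        t = torch.full((n, 1), i / steps)
        x = x + v_model(torch.cat([x, t], dim=1)) / steps
    return x
```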
All three models successfully learn to generate the 8-mode circular distribution:
- Snapshots: Show progression from random noise to structured distribution
- Animations: Demonstrate the continuous transformation process
- Overlay plots: Confirm generated samples match the target distribution
The notebook produces the following key visualizations:
Training Evolution: 11 snapshots showing how the generator learns to transform random noise into the 8-mode circular distribution over 5000 iterations.
Final Results: Comparison of target distribution vs GAN-generated samples showing excellent match.
Denoising Progression: 9 snapshots showing the reverse diffusion process transforming pure noise into structured data through 1000 denoising steps.
Final Results: Side-by-side comparison demonstrating the diffusion model's ability to match the target distribution.
Flow Progression: ODE integration steps showing continuous transformation from noise to data via learned velocity fields.
Final Results: Comparison showing flow matching successfully generates samples matching the target distribution.
This notebook helps you understand:
- GAN Training: Adversarial dynamics, generator/discriminator balance, mode collapse
- Diffusion Models: Forward/reverse processes, noise scheduling, DDPM sampling
- Flow Matching: Continuous normalizing flows, velocity fields, ODE integration
- Comparative Analysis: Strengths and weaknesses of each approach
- 2D Visualization: How to visualize high-dimensional concepts in 2D
- GANs are fast but require careful tuning to avoid instabilities
- Diffusion Models are stable and high-quality but require many sampling steps
- Flow Matching offers a middle ground with efficient ODE-based sampling
- All three can model complex multimodal distributions effectively
- 2D visualization provides intuitive understanding of generative processes
- Goodfellow et al. (2014) - "Generative Adversarial Nets"
- Ho et al. (2020) - "Denoising Diffusion Probabilistic Models"
- Song et al. (2021) - "Score-Based Generative Modeling through Stochastic Differential Equations"
- Lipman et al. (2023) - "Flow Matching for Generative Modeling"
- Liu et al. (2023) - "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow"
- Training Time: All models train in minutes on CPU, seconds on GPU
- Sample Quality: All three produce samples that are visually indistinguishable from the target distribution
- Sample Speed:
- GAN: Instant (single forward pass)
- Diffusion: ~1-2 seconds (1000 steps)
- Flow Matching: ~0.2 seconds (100 ODE steps)