
Generative Models 2D Visualization

A comprehensive Jupyter notebook demonstrating and comparing three modern generative modeling approaches: GANs, Diffusion Models, and Flow Matching. All models are trained to transform random noise into a target 2D distribution (8 Gaussians arranged in a circle), with rich visualizations showing their learning dynamics.

📊 Overview

This project provides an educational, visual comparison of three fundamental generative modeling paradigms:

  1. Generative Adversarial Networks (GANs) - Adversarial training between generator and discriminator
  2. Diffusion Models - Iterative denoising through reverse diffusion process
  3. Flow Matching - Learning continuous normalizing flows via velocity field matching

All models work on a simple 2D target distribution, making it easy to visualize and understand how each approach learns to generate data.
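The 8-mode target can be sampled in a few lines. This is a minimal sketch (the `radius` and `std` values are illustrative choices, not necessarily the notebook's exact parameters):

```python
import math
import torch

def sample_target(n, radius=2.0, std=0.1):
    """Sample n points from 8 Gaussians whose means lie on a circle.

    radius/std are illustrative defaults, not the notebook's exact values.
    """
    mode = torch.randint(0, 8, (n,))                      # pick one of 8 modes per sample
    angles = 2 * math.pi * mode / 8                       # mode centers, evenly spaced
    centers = torch.stack([radius * torch.cos(angles),
                           radius * torch.sin(angles)], dim=1)
    return centers + std * torch.randn(n, 2)              # Gaussian noise around each center
```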

Visualizations:

  • Linear interpolation path visualization with velocity fields
  • Flow progression snapshots at 9 timesteps
  • Animated ODE integration
  • Final comparison plots

🚀 Getting Started

pip install numpy matplotlib torch jupyter ipython

Open 2d_viz.ipynb and run all cells sequentially. The notebook will train all three models from scratch and generate all visualizations.

📈 Key Results

Training Dynamics

GAN:

  • Fast initial learning (first 100 iterations)
  • Mode collapse risk requires careful tuning
  • Final samples closely match target distribution
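The adversarial dynamic above can be sketched as a single training step. Network sizes, learning rates, and the non-saturating generator loss are illustrative assumptions, not the notebook's exact configuration:

```python
import torch
import torch.nn as nn

# Tiny generator/discriminator for 2D points; layer widths are illustrative.
G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):
    """One adversarial update: discriminator first, then generator."""
    n = real.shape[0]
    fake = G(torch.randn(n, 2))
    # Discriminator: push real toward 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(n, 1)) + bce(D(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Imbalance between these two updates is exactly where the mode-collapse risk noted above comes from.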

Diffusion Model:

  • Stable training with simple MSE loss
  • Smooth, gradual denoising process
  • High-quality samples with 1000 denoising steps
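The "simple MSE loss" is the standard DDPM objective: noise a clean sample to a random timestep and regress the added noise. A minimal sketch, assuming a linear beta schedule and a toy noise-prediction MLP that takes the timestep as an extra scalar feature (both are assumptions, not the notebook's exact setup):

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # linear noise schedule (DDPM-style)
alphas = 1.0 - betas
abar = torch.cumprod(alphas, dim=0)          # cumulative product: signal retained at step t

# Toy noise predictor: input is (x_t, t/T), output is the predicted noise.
eps_model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

def diffusion_loss(x0):
    """Sample a random timestep, noise x0 in closed form, regress the noise."""
    n = x0.shape[0]
    t = torch.randint(0, T, (n,))
    eps = torch.randn_like(x0)
    # Forward process in one step: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
    xt = abar[t].sqrt()[:, None] * x0 + (1 - abar[t]).sqrt()[:, None] * eps
    pred = eps_model(torch.cat([xt, (t.float() / T)[:, None]], dim=1))
    return ((pred - eps) ** 2).mean()
```

Sampling then runs this model backwards through all 1000 steps, which is why diffusion is the slowest of the three at generation time.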

Flow Matching:

  • Direct velocity field learning
  • Efficient ODE-based sampling (100 steps)
  • Clean interpolation from noise to data
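"Direct velocity field learning" means regressing the velocity of a simple noise-to-data path; with the linear (rectified-flow) path, the target velocity is just `x1 - x0`. A minimal sketch, with an illustrative velocity MLP and plain Euler integration for sampling (assumptions, not the notebook's exact architecture):

```python
import torch
import torch.nn as nn

# Toy velocity field: input is (x_t, t), output is a 2D velocity.
v_model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2))

def flow_matching_loss(x1):
    """Conditional flow matching on the straight path x_t = (1-t) x0 + t x1."""
    x0 = torch.randn_like(x1)                 # noise endpoint (t = 0)
    t = torch.rand(x1.shape[0], 1)            # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1                # point on the straight path
    target_v = x1 - x0                        # its constant velocity
    pred_v = v_model(torch.cat([xt, t], dim=1))
    return ((pred_v - target_v) ** 2).mean()

@torch.no_grad()
def sample(n, steps=100):
    """Euler integration of dx/dt = v(x, t) from noise (t=0) to data (t=1)."""
    x = torch.randn(n, 2)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((n, 1), i * dt)
        x = x + dt * v_model(torch.cat([x, t], dim=1))
    return x
```

Because the learned paths are nearly straight, a coarse 100-step Euler solve suffices, which accounts for the 10x sampling speedup over diffusion reported below.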

Visual Comparisons

All three models successfully learn to generate the 8-mode circular distribution:

  1. Snapshots: Show progression from random noise to structured distribution
  2. Animations: Demonstrate the continuous transformation process
  3. Overlay plots: Confirm generated samples match the target distribution

📊 Generated Visualizations

The notebook produces the following key visualizations:

1. GAN Training Progression

Training Evolution: 11 snapshots showing how the generator learns to transform random noise into the 8-mode circular distribution over 5000 iterations.

GAN Training Progression

Final Results: Comparison of target distribution vs GAN-generated samples showing excellent match.

GAN Results

2. Diffusion Model Process

Denoising Progression: 9 snapshots showing the reverse diffusion process transforming pure noise into structured data through 1000 denoising steps.

Diffusion Model Progression

Final Results: Side-by-side comparison demonstrating the diffusion model's ability to match the target distribution.

Diffusion Results

3. Flow Matching Visualization

Flow Progression: ODE integration steps showing continuous transformation from noise to data via learned velocity fields.

Flow Matching Progression

Final Results: Comparison showing flow matching successfully generates samples matching the target distribution.

Flow Matching Results

🎓 Learning Objectives

This notebook helps you understand:

  1. GAN Training: Adversarial dynamics, generator/discriminator balance, mode collapse
  2. Diffusion Models: Forward/reverse processes, noise scheduling, DDPM sampling
  3. Flow Matching: Continuous normalizing flows, velocity fields, ODE integration
  4. Comparative Analysis: Strengths and weaknesses of each approach
  5. 2D Visualization: How to visualize high-dimensional concepts in 2D

🔍 Key Insights

  • GANs are fast but require careful tuning to avoid instabilities
  • Diffusion Models are stable and high-quality but require many sampling steps
  • Flow Matching offers a middle ground with efficient ODE-based sampling
  • All three can model complex multimodal distributions effectively
  • 2D visualization provides intuitive understanding of generative processes

References

GANs

  • Goodfellow et al. (2014) - "Generative Adversarial Nets"

Diffusion Models

  • Ho et al. (2020) - "Denoising Diffusion Probabilistic Models"
  • Song et al. (2021) - "Score-Based Generative Modeling through Stochastic Differential Equations"

Flow Matching

  • Lipman et al. (2023) - "Flow Matching for Generative Modeling"
  • Liu et al. (2023) - "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow"

📊 Performance Notes

  • Training Time: All models train in minutes on CPU, seconds on GPU
  • Sample Quality: All three produce visually similar matches to the target on this toy distribution
  • Sample Speed:
    • GAN: Instant (single forward pass)
    • Diffusion: ~1-2 seconds (1000 steps)
    • Flow Matching: ~0.2 seconds (100 ODE steps)
