The Cookbook

Deep learning for dummies, by Quentin Anthony, Jacob Hatef, Hailey Schoelkopf, and Stella Biderman

All the practical details and utilities that go into working with real models! If you're just getting started, we recommend jumping ahead to Basics for some introductory resources on transformers.

Table of Contents

Utilities

Calculations

For training/inference calculations (e.g., FLOPs, memory overhead, and parameter count). A back-of-the-envelope sketch of the standard formulas follows the calculator list below.

Useful external calculators include:

Cerebras Model Lab. A user-friendly tool for applying Chinchilla scaling laws.

Transformer Training and Inference VRAM Estimator by Alexander Smirnov. A user-friendly tool to estimate VRAM overhead.
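
For quick estimates, the standard formulas are easy to script yourself. Below is a minimal Python sketch (not one of this repo's utilities) of two common approximations: roughly 12 · n_layers · d_model² parameters for a decoder-only transformer (plus embeddings), and roughly 6 FLOPs per parameter per token of training. The model shape in the example is illustrative.

```python
# A minimal sketch of standard back-of-the-envelope formulas; the
# hyperparameters below are illustrative, not tied to any real model.

def param_count(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Approximate decoder-only transformer parameter count.

    Per layer: ~4*d_model^2 for attention (Q, K, V, and output
    projections) plus ~8*d_model^2 for a 4x-expansion MLP, i.e.
    ~12*d_model^2 total, plus the embedding matrix.
    """
    return n_layers * 12 * d_model**2 + vocab_size * d_model

def training_flops(n_params: int, n_tokens: int) -> float:
    """Common approximation: ~6 FLOPs per parameter per token
    (2 for the forward pass, 4 for the backward pass)."""
    return 6.0 * n_params * n_tokens

if __name__ == "__main__":
    n = param_count(n_layers=32, d_model=4096, vocab_size=32000)
    print(f"params: {n / 1e9:.2f}B")  # ~6.57B, a 7B-class model
    print(f"training FLOPs for 1T tokens: {training_flops(n, int(1e12)):.3e}")
```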

Benchmarks

Communication benchmarks

Transformer sizing and GEMM benchmarks

Reading List

Basics

LLM Visualizations. Clear visualizations and animations covering the basics of transformers.

Annotated PyTorch Paper Implementations

Jay Alammar's blog contains many posts pitched to be accessible to readers from a wide range of backgrounds. We particularly recommend The Illustrated Transformer and The Illustrated GPT-2.

The Annotated Transformer by Sasha Rush, Austin Huang, Suraj Subramanian, Jonathan Sum, Khalid Almubarak, and Stella Biderman. A walkthrough of the seminal paper "Attention Is All You Need" along with in-line implementations in PyTorch.

Transformer Explainer by Aeree Cho, Grace C. Kim, Alexander Karpekov, Alec Helbling, Jay Wang, Seongmin Lee, Benjamin Hoover, and Polo Chau. It runs a GPT-2 model live in your browser, with visualization and customization options.

How to do LLM Calculations

Transformers Math 101. A blog post from EleutherAI on training/inference memory estimations, parallelism, FLOP calculations, and deep learning datatypes.

Transformer Inference Arithmetic. A breakdown of the memory overhead, FLOPs, and latency of transformer inference.

LLM Finetuning Memory Requirements by Alex Birch. A practical guide to the memory overhead of finetuning models. (The sketch below scripts the most common rules of thumb from these posts.)
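
The memory rules of thumb from these posts are simple enough to script. Below is a minimal sketch assuming the standard estimates: about 16 bytes per parameter for vanilla mixed-precision Adam training (2-byte weights, 2-byte gradients, 12 bytes of fp32 optimizer state; activations excluded, since they depend on batch size, sequence length, and recomputation strategy), and an inference-time KV cache holding keys and values for every layer, position, and batch element. The model shapes are illustrative.

```python
# A minimal sketch of common memory rules of thumb; shapes are illustrative.

def train_memory_gb(n_params: float, bytes_per_param: float = 16.0) -> float:
    """Training-state memory for weights + gradients + Adam optimizer
    state under vanilla mixed precision (~2 + 2 + 12 bytes per
    parameter). Activation memory is excluded."""
    return n_params * bytes_per_param / 1e9

def kv_cache_gb(n_layers: int, d_model: int, seq_len: int,
                batch_size: int, bytes_per_elem: int = 2) -> float:
    """Inference KV cache: keys and values (hence the factor of 2) for
    every layer, position, and batch element, at fp16/bf16 by default."""
    return 2 * n_layers * d_model * seq_len * batch_size * bytes_per_elem / 1e9

if __name__ == "__main__":
    print(f"7B model training state: {train_memory_gb(7e9):.0f} GB")  # ~112 GB
    print(f"7B-class KV cache, 4k context, batch 8: "
          f"{kv_cache_gb(32, 4096, 4096, 8):.1f} GB")  # ~17 GB
```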

Distributed Deep Learning

Everything about Distributed Training and Efficient Finetuning by Sumanth R Hegde. High-level descriptions and links covering parallelism and efficient finetuning.

Efficient Training on Multiple GPUs by Hugging Face. Contains a detailed walkthrough of model, tensor, and data parallelism along with the ZeRO optimizer.

Papers

Best Practices

ML-Engineering Repository. Community notes and practical details covering every aspect of deep learning training, led by Stas Bekman.

Common HParam Settings by Stella Biderman. Records common settings for model training hyperparameters and her current recommendations for training new models.

Data and Model Directories

Directory of LLMs by Stella Biderman. Records details of trained LLMs including license, architecture type, and dataset.

Data Provenance Explorer. A tool for tracing and filtering data provenance for the most popular open-source finetuning data collections.

Minimal Repositories for Educational Purposes

Large language models are frequently trained using very complex codebases, due to the need to optimize them to work at scale and to support a wide variety of configurable options. This can make them less useful as pedagogical tools, so some people have developed stripped-down, so-called "minimal implementations" that are sufficient for smaller-scale work and more pedagogically useful.
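
To give a flavor of what these repositories look like, here is a minimal sketch (written for this list, not taken from any of the linked repos) of causal self-attention, the core block a GPT reduces to once the scale-oriented machinery is stripped away.

```python
# A minimal causal self-attention block in PyTorch, in the spirit of the
# minimal implementations linked below; shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # fused Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)     # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # (B, T, C) -> (B, n_heads, T, head_dim)
        shape = (B, T, self.n_heads, C // self.n_heads)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        # the causal mask keeps each position from attending to the future
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(y.transpose(1, 2).reshape(B, T, C))

x = torch.randn(2, 16, 128)                  # (batch, seq_len, d_model)
print(CausalSelfAttention(128, 4)(x).shape)  # torch.Size([2, 16, 128])
```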

GPT Inference

GPT Training

Architecture-Specific Examples

RWKV

Contributing

If you find a bug or typo, or would like to propose an improvement, please don't hesitate to open an Issue or contribute a PR.

Cite As

If you found this repository helpful, please consider citing it using:

@misc{anthony2024cookbook,
    title = {{The EleutherAI Model Training Cookbook}},
    author = {Anthony, Quentin and Hatef, Jacob and Schoelkopf, Hailey and Biderman, Stella},
    howpublished = {GitHub Repo},
    url = {https://github.com/EleutherAI/cookbook},
    year = {2024}
}
