Free Transformer: A Llama-style decoder architecture with explicit latent plans, conditional VAE training, and benchmark comparisons against standard Transformers.
Designed for efficient PyTorch training on modern GPUs with full FSDP support and modern optimizations.
Complete Documentation | Quick Start Guide | Architecture Details
Traditional autoregressive Transformers generate each token by conditioning only on the sequence so far ("reactive" behavior).
Free Transformer introduces a latent planning mechanism: first choosing a stochastic abstract plan (Z), then generating tokens to fit that plan.
This scalable conditional VAE architecture maintains high-level coherence, improves controllable generation, and enables richer sequence modeling.
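As a rough mental model, the sketch below illustrates the plan-then-generate flow; `sample_plan` and `decode_step` are hypothetical stand-ins for the latent sampler and the plan-conditioned decoder, not functions from this package:

```python
# Illustrative sketch of plan-then-generate (not the package's internal code).
import torch

def plan_then_generate(prompt_ids, sample_plan, decode_step, max_new_tokens=20):
    # 1. Choose an abstract plan Z once, up front, instead of reacting token by token.
    z = sample_plan(prompt_ids)              # e.g. a binary latent code per sequence
    tokens = prompt_ids
    for _ in range(max_new_tokens):
        # 2. Every decoding step conditions on both the prefix and the fixed plan.
        logits = decode_step(tokens, z)      # shape: (batch, seq_len, vocab)
        next_token = torch.argmax(logits[:, -1], dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=-1)
    return tokens
```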
- Llama-style backbone: RMSNorm, SwiGLU, RoPE, Grouped-Query Attention (GQA)
- Latent Planning: Explicit plan variable `Z` with differentiable binary coding
- Conditional VAE: Reconstruction + KL loss with free-bits regularization (see the loss sketch after this list)
- FSDP Support: Multi-GPU training with PyTorch Fully Sharded Data Parallel
- Mixed Precision: Automatic Mixed Precision (AMP) with gradient scaling
- Memory Efficient: Gradient checkpointing and optimized attention patterns
- Modern Optimizations: bfloat16, efficient parameter sharding
- Flexible Training: Switchable inference/training flows with mode selection
- Synthetic + Real Data: Fast prototyping with built-in synthetic data generation
- Comprehensive Testing: Unit/integration tests, benchmark comparisons
- Quality Assurance: Type checking, linting, formatting, CI-ready
- Extensible API: Modular classes, CLI scripts, YAML configuration
- Docker Support: Containerized demos and development environment
- Documentation: API references, architecture guides, examples
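The conditional-VAE objective listed above pairs token reconstruction with a KL penalty, and free bits keep the latent plan from collapsing. Below is a minimal sketch of such a loss, assuming Bernoulli-style plan logits like the `z_logits` returned in the quick start; the package's actual implementation lives in `losses.py` and may differ:

```python
import torch
import torch.nn.functional as F

def free_bits_vae_loss(logits, targets, z_logits, free_bits=0.5):
    # Reconstruction: standard next-token cross-entropy.
    recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

    # KL between the inferred Bernoulli plan q(Z|x) and a uniform Bernoulli prior.
    q = torch.sigmoid(z_logits)
    kl_per_dim = q * torch.log(q / 0.5 + 1e-8) + (1 - q) * torch.log((1 - q) / 0.5 + 1e-8)

    # Free bits: each latent dimension keeps `free_bits` nats before being penalized.
    kl = torch.clamp(kl_per_dim - free_bits, min=0.0).sum(dim=-1).mean()
    return recon + kl
```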
From PyPI:

```bash
pip install free-transformer
```

Using UV (recommended):

```bash
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and install
git clone https://github.com/udapy/free-transformer.git
cd free-transformer
uv venv --python 3.12
source .venv/bin/activate
uv pip install -e ".[dev]"
```

Using pip:

```bash
git clone https://github.com/udapy/free-transformer.git
cd free-transformer
pip install -e ".[dev]"
```

or

```bash
uv run pip install free-transformer
```

Detailed installation instructions: Installation Guide
The fastest way to try Free Transformer:
```bash
git clone https://github.com/udapy/free-transformer.git
cd free-transformer
docker-compose up free-transformer-demo
```

Or use the Python API directly:

```python
from free_transformer import FreeTransformer, ModelConfig
import torch

# Create and train a model
config = ModelConfig(vocab_size=1000, hidden_dim=128, num_layers=6, latent_dim=8)
model = FreeTransformer(config)

# Training mode
tokens = torch.randint(0, 1000, (2, 128))
logits, z_logits = model(tokens, mode='training')

# Generation
generated = model.generate(tokens[:, :10], max_new_tokens=20)
```

```bash
# Generate synthetic data and run demo
make demo

# Train models separately
make train-baseline  # Standard Transformer
make train-free      # Free Transformer
make compare         # Compare results
```

Complete tutorial: Quick Start Guide
- Generate Small Synthetic Data: `make generate-data-small`
- Train Baseline Transformer: `make train-baseline`
- Train Free Transformer: `make train-free`
- Run Model Comparison: `make compare`

Or run the full pipeline:

```bash
make demo
```

Check results in:

- `checkpoints/baseline/`
- `checkpoints/free/`
- `results/comparison/results.json`
| Feature | Standard Transformer | Free Transformer |
|---|---|---|
| Generation | Reactive (token-by-token) | Plan-then-generate |
| Coherence | Local | Global + Local |
| Controllability | Limited | High (via plan manipulation) |
| Training | Cross-entropy loss | Conditional VAE loss |
| Memory | Baseline | +10-15% (inference) |
| Speed | Baseline | -5-10% (inference) |
Detailed comparison: Architecture Overview
```
free-transformer/
├── src/free_transformer/
│   ├── model.py
│   ├── baseline.py
│   ├── encoder.py
│   ├── latent.py
│   ├── injection.py
│   ├── losses.py
│   ├── synthetic_data.py
│   ├── train_utils.py
│   └── config.py
├── examples/
│   ├── train_baseline.py
│   ├── train_free.py
│   ├── eval_compare.py
│   └── generate_data.py
├── configs/
│   ├── baseline.yaml
│   └── free_transformer.yaml
├── docker/
│   ├── demo.sh
│   └── README.md
├── tests/
│   ├── unit/
│   ├── integration/
│   └── test_comparison.py
├── docs/
├── Dockerfile
├── Dockerfile.cpu
├── docker-compose.yml
├── Makefile
├── pyproject.toml
├── .python-version
├── LICENSE
└── README.md
```
Run all tests:
```bash
make test
```

Quality checks:

```bash
make quality
```

Multi-GPU training:

```bash
# FSDP training with automatic GPU detection
make train-free-fsdp

# Custom distributed training
torchrun --nproc_per_node=auto examples/train_free.py --use-fsdp
```

- HuggingFace datasets integration
- Built-in synthetic data generation
- Custom data loading pipelines
- Modular components for easy customization
- Custom loss functions and training schedules
- Plugin system for new features
Learn more: Training Guide | Multi-GPU Setup
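For orientation, here is a minimal sketch of the FSDP pattern in plain PyTorch; it illustrates the general wrap-and-train flow, not the project's actual `train_utils.py`, and the hyperparameters are placeholders:

```python
# Launch with e.g.: torchrun --nproc_per_node=2 fsdp_sketch.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

from free_transformer import FreeTransformer, ModelConfig

def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    config = ModelConfig(vocab_size=1000, hidden_dim=128, num_layers=6, latent_dim=8)
    model = FSDP(FreeTransformer(config).cuda())  # shard params, grads, optimizer state
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    # ... usual training loop: forward, loss, backward, optimizer.step() ...

if __name__ == "__main__":
    main()
```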
- Getting Started - Installation and setup
- Architecture - How Free Transformer works
- Training Guide - Training best practices
- API Reference - Complete API documentation
- Examples - Code examples and tutorials
- FAQ - Frequently asked questions
```bash
# Serve documentation locally
make docs-serve
# Open http://127.0.0.1:8000
```

MIT License – see LICENSE.
We welcome contributions! Please see our Contributing Guide for details.
```bash
git clone https://github.com/udapy/free-transformer.git
cd free-transformer
make install-all  # Install with all dependencies
make test         # Run tests
make quality      # Check code quality
```

Before submitting, make sure:

- Tests pass: `make test`
- Code quality checks pass: `make quality`
- Documentation builds: `make docs-build`
Full guidelines: Contributing Guide
Can I use this for real-world (non-synthetic) data?
Yes! Edit configs and use HuggingFace datasets.
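As a rough illustration (assuming the `datasets` and `transformers` libraries; the dataset name and tokenizer are arbitrary examples, and wiring the token IDs into this project's training loop is config-specific):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Pull a real text corpus and tokenize it for language-model training.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
# tokenized["input_ids"] can then be batched into the tensors the trainer expects.
```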
How do I run distributed training?
Use the provided CLI flags or edit the config; see the docs and the Makefile.
How do I change architecture parameters?
Edit the YAML config files to set hidden size, latent dimension, number of layers, and so on.
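The same knobs are also exposed programmatically on `ModelConfig` from the quick start; the YAML keys presumably mirror these fields (an assumption, so check `configs/free_transformer.yaml` for the exact names):

```python
from free_transformer import FreeTransformer, ModelConfig

# A larger configuration than the quick-start example; field names follow the
# documented ModelConfig constructor, the values here are arbitrary.
config = ModelConfig(vocab_size=32000, hidden_dim=512, num_layers=12, latent_dim=16)
model = FreeTransformer(config)
print(sum(p.numel() for p in model.parameters()), "parameters")
```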
Can I run this without installing dependencies locally?
Yes! Use Docker: `docker-compose up free-transformer-demo` for a complete demo.
What if I don't have a GPU?
Use the CPU Docker image: `make docker-build-cpu && make docker-run-cpu`.
If you use Free Transformer in your research, please cite:
```bibtex
@software{free_transformer,
  title={Free Transformer: Explicit Latent Planning for Autoregressive Generation},
  author={Phalak, Uday},
  year={2024},
  url={https://github.com/udapy/free-transformer},
  version={0.1.0}
}
```

- PyPI Package
- Documentation
- Issues
- Discussions
Free Transformer - Bringing explicit planning to autoregressive generation
Documentation • PyPI • GitHub