In Google Colab:
from google.colab import drive
drive.mount('/content/drive')

Then change into the Colab Notebooks directory (use the %cd magic rather than !cd — !cd runs in a subshell, so the directory change does not persist):

%cd /content/drive/MyDrive/Colab\ Notebooks
Clone this repo:
!git clone https://github.com/tatwan/handson-genai.git
This intensive three-day training course combines theory with extensive hands-on practice to teach participants how to build production-ready generative AI applications using state-of-the-art models and techniques.
Designed for developers, data engineers/analysts, and tech product owners, this course covers the complete GenAI lifecycle from foundations to deployment, including modern topics like diffusion models, multimodal AI, RAG systems, AI agents, and production optimization.
Course Duration: 3 Days
Last Updated: February 23, 2026
Total Labs: 33 hands-on notebooks
handson-genai/
├── Module_01_ML_Foundations/ # Day 1
│ └── 01_intro_to_ml_concepts.ipynb
├── Module_02_Deep_Learning/ # Day 1
│ ├── 01_neural_network_basics.ipynb
│ └── 02_image_classification_pytorch.ipynb
├── Module_03_Generative_AI/ # Day 1
│ ├── 01_intro_to_generative_ai.ipynb
│ ├── 02_autoencoders.ipynb
│ └── 03_diffusion_models.ipynb
├── Module_04_NLP/ # Day 2
│ ├── 01_intro_to_nlp.ipynb
│ ├── 02_tokenization.ipynb
│ └── 03_embeddings.ipynb
├── Module_05_LLMs/ # Day 2
│ ├── 01_openai_ollama.ipynb
│ ├── 02_huggingface_tour.ipynb
│ ├── 03_bert_gpt.ipynb
│ ├── 04_gradio_ui.ipynb
│ └── 05_multimodal_models.ipynb
├── Module_06_Prompting/ # Day 2
│ ├── 01_prompting_techniques.ipynb
│ ├── 02_function_calling.ipynb
│ ├── 03_function_calling_langchain.ipynb
│ └── 04_react_agent.ipynb
├── Module_07_RAG/ # Day 3
│ ├── 01_rag_langchain.ipynb
│ ├── 02_rag_llamaindex.ipynb
│ └── 03_rag_evaluation.ipynb
├── Module_08_Fine_Tuning/ # Day 3
│ ├── 01_transfer_learning.ipynb
│ ├── 02_sentiment_analysis.ipynb
│ ├── 02_fine_tuning_openai.ipynb
│ ├── 03_summarization.ipynb
│ ├── 04_sampling_techniques.ipynb
│ └── 05_Fine_Tuning_LLM_Healthcare.ipynb
├── Module_09_Optimization/ # Day 3
│ ├── 01_intro_to_optimization.ipynb
│ ├── 02_knowledge_distillation.ipynb
│ ├── 03_pruning.ipynb
│ ├── 04_quantization.ipynb
│ └── 05_benchmarking.ipynb
└── Module_10_Capstone/ # Day 3
└── capstone_dialogue_system.ipynb
- Machine Learning vs rule-based programming
- Supervised and unsupervised learning with examples
- ML model development workflow: preprocessing, features, overfitting, evaluation
- Lab: 01_intro_to_ml_concepts.ipynb
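The workflow above (split the data, fit, evaluate on held-out data, watch for overfitting) can be sketched with NumPy alone; the toy dataset and polynomial degrees are illustrative, not taken from the course labs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = 2x + noise
x = rng.uniform(-1, 1, 30)
y = 2 * x + rng.normal(0, 0.2, 30)

# Train/test split: hold out data so evaluation reflects generalization
x_train, x_test = x[:20], x[20:]
y_train, y_test = y[:20], y[20:]

def mse(coeffs, xs, ys):
    """Mean squared error of a polynomial fit."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

# A simple model (degree 1) vs an overly flexible one (degree 9)
simple = np.polyfit(x_train, y_train, 1)
flexible = np.polyfit(x_train, y_train, 9)

print("degree 1: train", mse(simple, x_train, y_train), "test", mse(simple, x_test, y_test))
print("degree 9: train", mse(flexible, x_train, y_train), "test", mse(flexible, x_test, y_test))
```

The flexible model always fits the training set at least as well, but its test error is what reveals overfitting.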
- Fundamental concepts of neural networks
- Optimizers, gradient descent, and backpropagation
- Deep learning frameworks: TensorFlow and PyTorch
- Labs:
  - 01_neural_network_basics.ipynb
  - 02_image_classification_pytorch.ipynb
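Gradient descent, the optimizer idea underlying backpropagation, is easiest to see on a one-parameter loss; a minimal sketch (the loss, learning rate, and step count are illustrative):

```python
# Gradient descent on the loss L(w) = (w - 3)^2, whose minimum is at w = 3.
# The gradient is dL/dw = 2 * (w - 3); each step moves w against it.
def gradient_descent(w0, lr=0.1, steps=100):
    w = w0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
    return w

w_final = gradient_descent(w0=0.0)
print(w_final)  # converges toward the minimum at w = 3
```

Backpropagation generalizes the same update to millions of parameters by computing gradients layer by layer with the chain rule.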
- Introduction to Generative AI and applications
- Probabilistic sampling and latent space concepts
- Autoencoders and Variational Autoencoders (VAEs)
- Diffusion models for image generation
- Labs:
  - 01_intro_to_generative_ai.ipynb
  - 02_autoencoders.ipynb
  - 03_diffusion_models.ipynb
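The core generative idea — sample a point in latent space, then decode it into data space — can be shown without any training. This is a conceptual sketch only: a real VAE decoder is a trained neural network, while here a fixed random linear map stands in for it:

```python
import numpy as np

rng = np.random.default_rng(1)

latent_dim, data_dim = 2, 8

# Stand-in "decoder": a fixed random linear map (trained network in practice)
decoder_weights = rng.normal(size=(latent_dim, data_dim))

def generate(n_samples):
    """Generate by sampling the latent prior and decoding (the core VAE idea)."""
    z = rng.normal(size=(n_samples, latent_dim))  # z ~ N(0, I)
    return z @ decoder_weights

samples = generate(5)
print(samples.shape)  # (5, 8): five generated 8-dimensional vectors
```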
- Tokenization and text preprocessing
- Vectorization and embeddings (Word2vec)
- Labs:
  - 01_intro_to_nlp.ipynb
  - 02_tokenization.ipynb
  - 03_embeddings.ipynb
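The tokenization-then-embedding pipeline can be sketched end to end. Whitespace tokenization is the simplest scheme (production tokenizers use subwords such as BPE or WordPiece), and the random embedding table stands in for vectors that Word2Vec would learn:

```python
import numpy as np

def tokenize(text):
    """Simplest possible tokenizer: lowercase and split on whitespace."""
    return text.lower().split()

corpus = "the cat sat on the mat"
tokens = tokenize(corpus)

# Vocabulary: token -> integer id, in first-seen order
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}

# Embedding table: each id maps to a dense vector (random here, learned in practice)
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))

ids = [vocab[t] for t in tokens]
vectors = embeddings[ids]
print(ids)            # [0, 1, 2, 3, 0, 4] — "the" reuses id 0
print(vectors.shape)  # (6, 4)
```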
- Pre-trained models: BERT and GPT
- Working with OpenAI and Ollama APIs
- Hugging Face ecosystem tour
- Building UIs with Gradio
- Multimodal AI: Vision-language models (GPT-4V)
- Labs:
  - 01_openai_ollama.ipynb
  - 02_huggingface_tour.ipynb
  - 03_bert_gpt.ipynb
  - 04_gradio_ui.ipynb
  - 05_multimodal_models.ipynb
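A minimal sketch of calling an LLM through the chat-completions interface. The message format is shared by the OpenAI SDK and Ollama's OpenAI-compatible endpoint; the model name `gpt-4o-mini` is an assumption, not one prescribed by the course, and the network call only runs when an API key is set:

```python
import os

def build_messages(system, user):
    """Chat payload in the role/content format used by OpenAI-style APIs."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_messages("You are a helpful assistant.", "What is a transformer?")

# Requires the openai package and an API key; model name is an assumption
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)
```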
- Zero-shot, few-shot, and chain-of-thought prompting
- Function calling and tool use (OpenAI and LangChain)
- ReAct agents for autonomous workflows
- Labs:
  - 01_prompting_techniques.ipynb
  - 02_function_calling.ipynb
  - 03_function_calling_langchain.ipynb
  - 04_react_agent.ipynb
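Few-shot prompting is ultimately string construction: worked examples are prepended so the model infers the task from the pattern. A sketch with an illustrative sentiment template (the examples and labels are made up for demonstration):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt from (text, label) example pairs."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final block leaves the label blank for the model to complete
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("Great movie, loved it!", "positive"),
    ("Terrible plot and acting.", "negative"),
]
prompt = few_shot_prompt(examples, "An absolute delight.")
print(prompt)
```

Zero-shot prompting is the same template with no examples; chain-of-thought adds reasoning steps to each example's answer.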
- RAG architecture with LangChain and LlamaIndex
- Vector databases (Chroma) and semantic search
- RAG evaluation and observability with MLflow
- Labs:
  - 01_rag_langchain.ipynb
  - 02_rag_llamaindex.ipynb
  - 03_rag_evaluation.ipynb
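The retrieval step at the heart of RAG is cosine similarity over embeddings. A toy sketch: random vectors stand in for learned embeddings, and a NumPy matrix product stands in for a vector store such as Chroma:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = ["doc about cats", "doc about dogs", "doc about finance"]

# Stand-in embeddings, normalized so a dot product is cosine similarity
doc_vecs = rng.normal(size=(len(docs), 16))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are most cosine-similar."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# Querying with a document's own vector retrieves that document
print(retrieve(doc_vecs[0]))  # ['doc about cats']
```

In a full RAG pipeline the retrieved text is then stuffed into the LLM prompt as grounding context.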
- Transfer learning and fine-tuning strategies
- LoRA and Parameter-Efficient Fine-Tuning (PEFT)
- Sentiment analysis with DistilBERT
- Summarization fine-tuning
- Catastrophic forgetting prevention
- Sampling techniques: Temperature, Top-P, Top-K
- Healthcare LLM fine-tuning
- Labs:
  - 01_transfer_learning.ipynb
  - 02_sentiment_analysis.ipynb
  - 02_fine_tuning_openai.ipynb
  - 03_summarization.ipynb
  - 04_sampling_techniques.ipynb
  - 05_Fine_Tuning_LLM_Healthcare.ipynb
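The sampling techniques listed above can be implemented in a few lines over raw next-token logits. A toy sketch of temperature scaling plus top-k filtering (the logits are made up; top-p works the same way but cuts on cumulative probability):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=None, rng=None):
    """Temperature + top-k sampling over next-token logits (toy version)."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float) / temperature  # temperature scaling
    if top_k is not None:
        # Mask everything outside the k highest-scoring tokens
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]
token = sample_next_token(logits, temperature=0.7, top_k=2, rng=np.random.default_rng(0))
print(token)  # with top_k=2, only tokens 0 or 1 can be drawn
```

Lower temperature sharpens the distribution toward greedy decoding; higher temperature flattens it toward uniform randomness.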
- Production challenges: memory, cost, latency
- Knowledge distillation (teacher-student training)
- Model pruning (structured and unstructured)
- Quantization (FP16, INT8, INT4, GPTQ, AWQ)
- Benchmarking optimized models
- Deployment strategies for production
- Labs:
  - 01_intro_to_optimization.ipynb
  - 02_knowledge_distillation.ipynb
  - 03_pruning.ipynb
  - 04_quantization.ipynb
  - 05_benchmarking.ipynb
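Quantization can be demonstrated on a single weight matrix: symmetric INT8 quantization maps floats into [-127, 127] with one scale factor, cutting storage 4x versus FP32 at the cost of bounded rounding error. A minimal sketch (production schemes like GPTQ and AWQ add per-group scales and calibration):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric INT8 quantization with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x smaller storage; rounding error is at most half a quantization step
print(q.dtype, float(np.abs(w - w_hat).max()))
```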
- Build a complete RAG-based dialogue system
- Integrate multiple techniques learned throughout the course
- Lab: capstone_dialogue_system.ipynb
By the end of this course, you will be able to:
✅ Build applications with modern LLMs (GPT-4, Claude, Llama, Mistral)
✅ Generate images with diffusion models (Stable Diffusion)
✅ Create multimodal applications using vision-language models (GPT-4V)
✅ Fine-tune models efficiently using LoRA and PEFT techniques
✅ Implement RAG systems with vector databases (Chroma)
✅ Create AI agents with function calling and ReAct patterns
✅ Optimize models for production (distillation, pruning, quantization)
✅ Deploy GenAI applications with proper evaluation and monitoring
✅ Use industry-standard tools (Hugging Face, LangChain, LlamaIndex, MLflow)
- Python Programming: Solid understanding of Python, including data structures, control flow, functions, and libraries like NumPy and Pandas
- Machine Learning Fundamentals: Familiarity with supervised/unsupervised learning, model evaluation, and scikit-learn
- Deep Learning Basics: Understanding of neural networks is recommended but not required
- API Experience: Prior work with REST APIs is helpful but not required
- 33 Hands-on Labs: Practical notebooks covering every major topic
- Production-Focused: Learn deployment and optimization techniques
- Modern Tools: Work with 2026 industry-standard frameworks
- Complete Lifecycle: From model selection to production deployment
- Real-World Projects: Build a complete RAG-based dialogue system
Previous demos and labs are available in the _archive/ folder for reference.
