Hands-On Generative Artificial Intelligence

Last Updated: Dec 16, 2024 | Course Duration: 3 Days

Course Overview

  • Hands-On Generative AI is an interactive three-day training course that offers a comprehensive learning experience for developers, data engineers/analysts, and tech product owners.
  • The course is specifically designed to equip participants with the essential skills and in-depth knowledge required to harness the power of generative AI effectively.
  • By combining theory with extensive hands-on practice, this course ensures that participants gain a deep understanding of generative AI concepts and the ability to apply them to various domains.
  • Students will learn how to generate realistic and novel outputs, such as images, music, text, and more, using state-of-the-art algorithms and frameworks.

Prerequisites

  • Python Programming: Participants should have a solid understanding of Python programming, including knowledge of data structures, control flow, functions, and libraries commonly used in data analysis and machine learning, such as NumPy, Pandas, and scikit-learn.
  • Data Analysis and Machine Learning: Familiarity with data analysis concepts, exploratory data analysis (EDA), and machine learning algorithms is essential.
  • Deep Learning Basics: Basic knowledge of deep learning concepts is recommended.

Course Outline

Foundations of AI and Machine Learning

  • Machine learning vs. rule-based programming.
  • Supervised and unsupervised learning, including examples and applications in real-world scenarios.
  • An overview of ML model development and evaluation: data preprocessing, feature engineering, overfitting, and model evaluation metrics.
  • Hands-on Lab (optional): Training and evaluating a classifier.
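
The optional lab might look something like the following scikit-learn sketch; the dataset and model here are illustrative choices, not prescribed by the course:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Load a small labelled dataset and hold out a test split for evaluation.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train a simple supervised classifier.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Evaluate with a basic metric on unseen data.
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```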

Deep Learning Primer

  • Fundamental concepts of deep learning; data types and data volumes.
  • Overview of neural network structures and common architectures.
  • Optimizers, gradient descent, and backpropagation algorithms.
  • Optional demo: TensorFlow Playground.
  • Deep learning frameworks: TensorFlow and PyTorch.
  • Hands-on Lab: Image classification using TensorFlow or PyTorch.
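
The core idea behind the optimizers listed above fits in a few lines of plain Python: gradient descent repeatedly steps a parameter against its gradient. Frameworks such as TensorFlow and PyTorch compute these gradients automatically via backpropagation; in this one-parameter sketch the derivative is written out by hand:

```python
# Minimize f(w) = (w - 3)^2 by gradient descent.
# The gradient is f'(w) = 2 * (w - 3), computed manually here;
# deep learning frameworks derive it automatically.
w = 0.0
lr = 0.1  # learning rate
for step in range(100):
    grad = 2 * (w - 3)
    w -= lr * grad

print(f"w after training: {w:.4f}")  # converges toward the minimum at 3
```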

Overview of Generative AI

  • Introduction to Generative AI and its applications.
  • Basic principles of generative models and their architectural components.
  • Demo: A simple example of probabilistic sampling to create simulated data.
  • Autoencoders: latent space and representation learning.
  • Hands-on Lab: Understanding latent space with an autoencoder on the MNIST dataset.
  • Variational Autoencoders (VAEs) and probabilistic sampling techniques.
  • Hands-on Lab: Training a VAE to generate fake images of handwritten digits.
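
A minimal version of the probabilistic-sampling demo, using only the standard library: fit a Gaussian to a handful of observed values, then sample new "simulated" data from it. The numbers are invented for illustration; VAEs apply the same sampling idea in a learned latent space.

```python
import random

random.seed(0)

# Fit a simple Gaussian (mean and standard deviation) to observed data.
observed = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0]
mu = sum(observed) / len(observed)
var = sum((x - mu) ** 2 for x in observed) / len(observed)
sigma = var ** 0.5

# Sample from the fitted distribution to create simulated data points.
simulated = [random.gauss(mu, sigma) for _ in range(1000)]
sim_mean = sum(simulated) / len(simulated)
print(f"observed mean {mu:.2f}, simulated mean {sim_mean:.2f}")
```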

NLP: Understanding Language as Data

  • Introduction to NLP techniques and applications.
  • Tokenization and vectorization (Bag-of-Words and its limitations).
  • Embeddings: mathematical text representation in a continuous vector space.
  • Hands-on Lab: Find similar documents using word2vec.
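
The vectorization step above can be sketched directly with bag-of-words (the lab itself uses word2vec; BoW is shown here because it needs no trained model). The documents are invented for illustration; similarity is measured with cosine distance between count vectors:

```python
import math
from collections import Counter

def bow_vector(text, vocab):
    """Bag-of-words: count how often each vocabulary word appears."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = [
    "the cat sat on the mat",
    "a cat and a dog",
    "stock prices rose sharply today",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
vectors = [bow_vector(d, vocab) for d in docs]

# The two animal sentences share a word; the finance one shares none.
sim_01 = cosine(vectors[0], vectors[1])
sim_02 = cosine(vectors[0], vectors[2])
print(sim_01 > sim_02)  # True
```

A limitation the course calls out: BoW only matches exact words, so "cat" and "kitten" score zero similarity — which is what embeddings fix.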

Large Language Models (LLMs)

  • NLP and text generation before the introduction of pre-trained LLMs.
  • Overview of pre-trained models including BERT and GPT.
  • Demo: GPT as a probabilistic autoregressive model (OpenAI Playground).
  • Other notable LLMs and their applications.
  • Demo: A tour of Hugging Face.
  • Hands-on Lab: Introduction to BERT and GPT.
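
"Probabilistic autoregressive model" can be illustrated with a toy bigram model: predict each next token from the previous one using counts. GPT is autoregressive in the same sense, but conditions on the whole context with a transformer rather than a count table. The corpus here is invented:

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# Count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# Generate by repeatedly sampling the next token in proportion
# to how often it followed the current one.
out = ["the"]
for _ in range(5):
    counts = bigrams[out[-1]]
    if not counts:
        break  # no known continuation for this token
    tokens, weights = zip(*counts.items())
    out.append(random.choices(tokens, weights=weights)[0])

print(" ".join(out))
```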

Language Generation Tasks and Prompting Techniques

  • Generative tasks: text completion, dialogue systems, summarization, code generation, and prompt refinement.
  • Prompt engineering and prompting techniques.
  • Hands-on Lab (no code): Prompting techniques for summarization, code generation, and text labeling.
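
Although the lab itself is no-code, the few-shot prompting pattern it practices can be written down concretely. The reviews and labels below are invented for illustration; the in-context examples steer the model toward the desired output format:

```python
# Build a few-shot prompt for a text-labeling task as a plain string.
examples = [
    ("The product arrived broken.", "negative"),
    ("Absolutely love it, works perfectly.", "positive"),
]
query = "Delivery was fast and the quality is great."

prompt = "Label each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nLabel: {label}\n\n"
# End with an unfinished example for the model to complete.
prompt += f"Review: {query}\nLabel:"

print(prompt)
```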

Adapting Pre-trained Models for Specific NLP Tasks

  • Transfer learning and full fine-tuning strategies for LLMs.
  • Considerations for cost and the risk of catastrophic forgetting.
  • Using Hugging Face's transformers library for fine-tuning.
  • Sampling techniques.
  • Hands-on Lab: Fine-tuning BERT for sentiment analysis.
  • Hands-on Lab: Customize Generative LLM Output with Temperature, Top-P, Top-K, and Beam Search.
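
A minimal sketch of how two of the lab's sampling controls, temperature and top-k, reshape a next-token distribution before sampling. The logits are invented; NumPy only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits, temperature=1.0, top_k=None):
    """Sample a token index after applying temperature and top-k filtering."""
    logits = np.asarray(logits, dtype=float) / temperature  # sharpen or flatten
    if top_k is not None:
        # Mask out everything below the k-th largest logit.
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits >= cutoff, logits, -np.inf)
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]

# Low temperature sharpens the distribution toward token 0;
# top_k=2 forbids tokens 2 and 3 entirely.
token = sample(logits, temperature=0.5, top_k=2)
print(token)
```

Top-p works similarly but cuts on the cumulative probability mass instead of a fixed count.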

Retrieval-Augmented Generation (RAG) & Deployment

  • Retrieval-Augmented Generation: grounding model responses in retrieved documents.
  • Strategies for deploying generative models: quantization, pruning, and distillation techniques.
  • Hands-on Lab (optional): model distillation.
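
Of the deployment techniques listed, quantization is the easiest to show in a few lines. This NumPy sketch applies symmetric 8-bit post-training quantization to an invented weight matrix, trading a little precision for a 4x reduction in storage:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Symmetric 8-bit quantization: map floats to int8 using a single
# per-tensor scale factor chosen so the largest weight maps to 127.
scale = float(np.abs(weights).max()) / 127
quantized = np.round(weights / scale).astype(np.int8)

# Dequantize at inference time: one multiply recovers approximate floats.
restored = quantized.astype(np.float32) * scale
max_err = float(np.abs(weights - restored).max())
print(f"max quantization error: {max_err:.4f}")
```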

Capstone Project

  • Developing a Dialogue System with RAG.
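
The capstone's core loop reduces to a retrieve-then-generate skeleton. In this sketch retrieval is plain word overlap and the "generate" step stops at prompt construction; a real system would use embedding similarity and pass the prompt to an LLM. The documents and question are invented:

```python
# Minimal retrieve-then-generate skeleton for a RAG dialogue system.
documents = [
    "The museum opens at 9am and closes at 5pm.",
    "Tickets cost 12 euros for adults and are free for children.",
]

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents,
               key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question):
    """Ground the model's answer in the retrieved context."""
    context = retrieve(question)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

prompt = build_prompt("When does the museum open?")
print(prompt)
```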

Timing Guide

Day 1

  • Foundations of AI and Machine Learning
  • Deep Learning Primer

Day 2

  • Overview of Generative AI
  • NLP: Understanding Language as Data

Day 3

  • Large Language Models (LLMs)
  • Language Generation Tasks and Prompting Techniques
  • Adapting Pre-trained Models for Specific NLP Tasks & Capstone Project