Dive deep into the world of generative AI foundation models, exploring their transformative potential across scientific disciplines through a hands-on, accessible approach.
By the end of this workshop series, participants will:
- Develop a comprehensive understanding of generative AI foundation models
- Acquire practical skills for integrating AI technologies into research workflows
- Demonstrate proficiency in prompt engineering across multiple disciplines
- Critically evaluate and apply multimodal AI tools to complex research challenges
- Build confidence in navigating and deploying AI technologies
- Create innovative research approaches using generative AI methodologies
Prompt engineering skills
- Crafting precise, context-specific prompts
- Extracting maximum value from foundation models
- Developing discipline-specific interaction strategies

Models and computational resources
- Understanding AI model architectures
- Managing computational resources
- Scaling AI applications from local to HPC environments

Multimodal integration
- Integrating text, image, and data-based models
- Cross-modal research technique development
- Solving interdisciplinary research challenges

Practical implementation
- Implementing AI tools in research workflows
- Performance optimization techniques
- Handling model limitations and biases

AI-assisted coding
- Utilizing AI for research code development
- Debugging and improving computational methods
- Automating repetitive research tasks

Ethics and integrity
- Recognizing and mitigating AI biases
- Ensuring research integrity
- Responsible AI use across disciplines

Advanced computing
- Deploying AI models in advanced computing environments
- Resource management and optimization
- Scaling computational research capabilities
Foundation models overview
- Explore large language models and multimodal AI systems
- Examine key models: GPT, BERT, DALL-E, Stable Diffusion
- Analyze model architectures, capabilities, and limitations
- Understand transfer learning and model adaptability

AI in research
- Bridging interdisciplinary research challenges
- Transforming data analysis and hypothesis generation
- Expanding computational research capabilities
- Democratizing advanced AI technologies

Prompt engineering
- Crafting effective prompts across disciplines
- Extracting maximum value from foundation models
- Developing discipline-specific interaction strategies
- Handling complex research queries

Multimodal AI
- Integrating text, image, and data-based models
- Cross-modal research techniques
- Practical implementation strategies
- Solving interdisciplinary research challenges

Ethics and responsible use
- Understanding model biases
- Ensuring research integrity
- Responsible AI deployment
- Ethical considerations in AI-assisted research

Scaling and deployment
- Local to high-performance computing deployments
- Resource management strategies
- Scaling AI model applications
- Performance optimization techniques

Who should attend
- Graduate students across all disciplines
- Researchers seeking AI integration
- Academics exploring computational technologies
- Interdisciplinary innovation seekers

Expected outcomes
- Confident foundation model utilization
- Advanced research methodology skills
- Computational thinking transformation
- Practical AI deployment capabilities
Empowering researchers to leverage generative AI as a powerful, flexible research companion across scientific domains.
Instructors: Nick Eddy / Carlos Lizárraga / Enrique Noriega / Mithun Paul
- Register to attend in person or online.
- When: Thursdays at 1 PM
- Where: Albert B. Weaver Science-Engineering Library, Room 212
(Program subject to change.)
Calendar
Spring 2025

Date | Title | Topic Description | Wiki/Slides | YouTube | Instructor |
---|---|---|---|---|---|
01/30/2025 | Scaling up Ollama: Local, CyVerse, HPC | In this hands-on workshop, participants will learn to deploy and scale large language models using Ollama across computational environments ranging from laptops to supercomputing clusters. | | video | Enrique Noriega |
02/06/2025 | Using AI Verde | This practical introduction shows how to effectively use U of A Generative AI Verde for academic research, writing, and problem-solving. Participants will learn to harness AI Verde's capabilities while gaining a clear understanding of its limitations and ethical implications. | | video | Nick Eddy |
02/13/2025 | Best practices of Prompt Engineering using AI Verde | A hands-on session that teaches practical prompt engineering techniques to optimize U of A Generative AI Verde's performance for academic and professional applications. | Slides | video | Mithun Paul |
02/20/2025 | Quick RAG application using AI Verde / HPC | A hands-on session demonstrating how to build a basic Retrieval-Augmented Generation (RAG) system with the U of A Generative AI Verde API. Participants will learn to enhance AI responses by integrating custom knowledge bases. | Slides | video | Mithun Paul |
02/27/2025 | Multimodal Q&A + OCR in AI Verde | A hands-on technical session exploring U of A Generative AI Verde's multimodal capabilities, combining vision and text processing with OCR technology for enhanced document analysis and automated question answering. | | video | Nick Eddy |
03/06/2025 | SQL specialized query code generation | A hands-on session teaching participants how to use Large Language Models to craft, optimize, and validate complex SQL queries, emphasizing real-world database operations and industry best practices. | Slides, Code | video | Enrique Noriega |
03/13/2025 | No session | Spring Break | | | |
03/20/2025 | Function calling with LLMs | Function calling lets a large language model invoke external tools and APIs. Some open-source LLMs support it natively; for models that don't, it can be approximated by combining prompt engineering, fine-tuning, and constrained decoding. | | video | Enrique Noriega |
03/27/2025 | Code generation assistants | Large Language Models (LLMs) now serve as powerful code generation assistants, streamlining and enhancing software development. They generate code snippets, propose solutions, and translate code between programming languages. | | video | Nick Eddy |
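The function-calling pattern covered in the 03/20 session can be sketched in plain Python, independent of any particular model: the LLM is constrained (via prompting or grammar-based decoding) to emit a JSON tool call, which the application parses and dispatches to a registered function. This is a minimal sketch; the model output below is hard-coded for illustration, where in a real workflow it would come from a model served by, e.g., Ollama.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in tool; a real implementation would query a weather API.
    return f"Sunny in {city}"

# Registry of functions the model is allowed to call.
TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and run the matching tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]  # KeyError here signals an unknown tool
    return fn(**call["arguments"])

# Simulated model output: a JSON object naming the tool and its arguments.
model_output = '{"name": "get_weather", "arguments": {"city": "Tucson"}}'
print(dispatch(model_output))  # Sunny in Tucson
```

The same dispatch loop works regardless of how the JSON is produced, which is why constrained decoding (forcing the model to emit valid JSON) is enough to retrofit function calling onto models without native support.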
Fall 2024

Date | Title | Topic Description | YouTube | Instructor |
---|---|---|---|---|
09/05/2024 | Hugging Face Models (NLP) | Hugging Face offers a vast array of pre-trained models for Natural Language Processing (NLP) tasks. These models cover a wide spectrum of applications, from text generation and translation to sentiment analysis and question answering. | video | Enrique Noriega |
09/12/2024 | Hugging Face Models (Computer Vision) | Hugging Face has significantly expanded its offerings beyond NLP to encompass a robust collection of computer vision models. You can find pre-trained models for a wide range of tasks, from basic image classification to complex image generation. | video | Enrique Noriega |
09/19/2024 | Hugging Face Models (Multimodal) | Hugging Face offers a diverse range of multimodal models, capable of processing and understanding multiple data modalities such as text, images, and audio. These models are at the forefront of AI research and development, enabling innovative applications. | video | Enrique Noriega |
09/26/2024 | Running LLM locally: Ollama | Ollama is an open-source platform designed to make running large language models (LLMs) on your local machine accessible and efficient. It acts as a bridge between the complex world of LLMs and users who want to experiment and interact with these models without relying on cloud-based services. | video | Carlos Lizárraga |
10/03/2024 | Introduction to LangChain | Langchain is an open-source Python library that provides a framework for developing applications powered by large language models (LLMs). It simplifies the process of building complex LLM-based applications by offering tools and abstractions to connect LLMs with other data sources and systems. | video | Enrique Noriega |
10/10/2024 | Getting Started with Phi-3 | Phi-3 is a series of small language models (SLMs) developed by Microsoft. Unlike larger language models (LLMs) that require substantial computational resources, Phi-3 models offer impressive performance while being significantly smaller and more efficient. | video | Enrique Noriega |
10/17/2024 | Getting started with Gemini | Gemini is a large language model (LLM) developed by Google AI. It's designed to be exceptionally versatile, capable of handling a wide range of tasks and modalities, including text, code, audio, and images. This makes it a significant advancement in the field of artificial intelligence. | video | Enrique Noriega |
10/24/2024 | Introduction to Gradio | Gradio is an open-source Python library that allows you to quickly create user interfaces for your machine learning models, APIs, or any Python function. It simplifies the process of building interactive demos and web applications without requiring extensive knowledge of JavaScript, CSS, or web development. | video | Enrique Noriega |
10/31/2024 | Introduction to RAG | Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of Large Language Models (LLMs) by combining them with external knowledge sources. | video | Enrique Noriega
11/15/2024 | Dense Passage Retrieval | Dense Passage Retrieval (DPR) encodes queries and passages as dense vectors using trained neural encoders and ranks passages by vector similarity. It is a core building block of modern retrieval-augmented generation systems. | video | Mithun Paul
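The core idea behind the Dense Passage Retrieval session above can be illustrated without any trained model: encode the query and each passage as a vector and rank passages by dot-product similarity. In this sketch a toy bag-of-words counter stands in for the two trained neural encoders that real DPR uses; the passages are invented examples.

```python
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "encoder": bag-of-words counts. Real DPR uses trained neural
    # encoders that map queries and passages to dense vectors.
    return Counter(text.lower().split())

def score(query_vec: Counter, passage_vec: Counter) -> int:
    # Dot product between the two sparse vectors.
    return sum(query_vec[w] * passage_vec[w] for w in query_vec)

passages = [
    "Ollama runs large language models locally",
    "Dense retrieval encodes queries and passages as vectors",
    "Gradio builds quick user interfaces for models",
]

query = embed("how are passages encoded as vectors")
best = max(passages, key=lambda p: score(query, embed(p)))
print(best)  # Dense retrieval encodes queries and passages as vectors
```

Swapping the toy encoder for learned dense embeddings (and the linear scan for an approximate nearest-neighbor index) yields the retriever used in practical RAG pipelines.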
Created: 06/10/2024 (C. Lizárraga)
Updated: 02/24/2025 (C. Lizárraga)
DataLab, Data Science Institute, University of Arizona.