README.md (12 changes: 6 additions & 6 deletions)

This is AMD's official repository of playbooks for AMD developer platforms.
| **Running LLMs with PyTorch and AMD ROCm™ software** | Run powerful language models locally with PyTorch and ROCm |
| **Running and Serving LLMs with LM Studio** | Set up LM Studio to run and serve large language models |
| **Automating Workflows with n8n and Local LLMs** | Build an AI-powered news summarizer using n8n and Lemonade |
| **Local LLM Coding with VS Code and Qwen3-Coder** | Use VS Code with locally-running Qwen3-Coder for private code assistance |
| **Generating Images with ComfyUI and Z Image Turbo** | Create AI-generated images using ComfyUI with Z Image Turbo |
| **Chat with LLMs in Open WebUI** | Set up Open WebUI to chat with local LLMs |
| **Fine-tune LLMs with PyTorch and AMD ROCm™ software** | Fine-tune large language models using PyTorch and ROCm |
| **Using Lemonade Across CPU, GPU, and NPU** | Learn how to use the Lemonade framework across CPU, GPU, and NPU |
| **Optimized Fine-tuning with Unsloth** | Memory-efficient LoRA fine-tuning with Unsloth |
| **Speech-to-Speech Translation** | Build a real-time speech-to-speech translation system |
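Several of the playbooks above (LM Studio, Lemonade, Open WebUI) center on serving local LLMs behind an OpenAI-compatible HTTP endpoint. As a minimal sketch, the snippet below builds a chat-completions request for such a server using only the standard library; the base URL (LM Studio's default port 1234), the model name, and the helper function are illustrative assumptions, not part of any specific playbook:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /v1/chat/completions request for a local LLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local setup: LM Studio's default port, a loaded Qwen3-Coder model.
req = build_chat_request("http://localhost:1234", "qwen3-coder", "Say hello")
# To actually send it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request shape works against any of the OpenAI-compatible servers the playbooks set up; only the base URL and model identifier change.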

## Coming Soon

| Playbook | Description |
|----------|-------------|
| **Local Computer Vision with Ryzen™ AI NPU** | Build local perception capabilities using CVML SDK on Ryzen AI and ROCm |
| **Clustering Two Devices with llama.cpp RPC** | Distributed inference using RPC server across two AMD devices with llama.cpp |
| **Getting Started with Ollama** | Install Ollama and run LLMs locally from the terminal, desktop app, or REST API |
| **Getting Started Creating Agents with GAIA** | Build and deploy AI agents using the GAIA framework |
| **Fine-tuning LLMs with LLaMA-Factory** | LoRA fine-tuning of large language models using LLaMA-Factory |
| **Custom GPU Kernels with PyTorch ROCm** | Write and optimize custom GPU kernels using PyTorch and ROCm |
| **Quick Start on vLLM** | Run inference and serving using vLLM |
| **Clustering with RCCL** | Multi-node cluster using two AMD devices with RCCL |

## AMD AI Developer Program
