## Commit b76d375

Merge branch 'main' into cvml-sdk-updates

2 parents a712d0c + 4d39150, commit b76d375

34 files changed: 534 additions & 284 deletions

### .github/scripts/fetch_github_issues.py

1 addition & 1 deletion

```diff
@@ -12,7 +12,7 @@

 def fetch_github_issues():
     """Fetch GitHub issues from the repository"""
-    repo = "amd/halo_playbooks"
+    repo = "amd/playbooks"
     token = os.environ.get("GITHUB_TOKEN", "")

     headers = {
```
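The hunk above only changes the repository slug; the rest of the script is not shown. As a hedged sketch of how such a fetch function plausibly continues (the `build_issue_request` helper and the `state=open` query are assumptions, not part of the commit), using only the GitHub REST API conventions visible in the diff:

```python
import json
import os
import urllib.request

API_ROOT = "https://api.github.com/repos"

def build_issue_request(repo: str, token: str = ""):
    """Return the URL and headers for listing a repository's open issues."""
    # Accept header per the GitHub REST API; token is optional for public repos.
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return f"{API_ROOT}/{repo}/issues?state=open", headers

def fetch_github_issues(repo: str = "amd/playbooks"):
    """Fetch GitHub issues from the repository (performs a network call)."""
    url, headers = build_issue_request(repo, os.environ.get("GITHUB_TOKEN", ""))
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating request construction from the network call keeps the URL/header logic testable without hitting the API.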

### .github/workflows/test-playbooks.yml

1 addition & 1 deletion

```diff
@@ -139,7 +139,7 @@ jobs:
       - name: Setup Python
         uses: actions/setup-python@v5
         with:
-          python-version: ${{ matrix.playbook == 'open-webui-chat' && '3.12' || '3.13' }}
+          python-version: ${{ (matrix.playbook == 'open-webui-chat' || matrix.playbook == 'vllm-inference') && '3.12' || '3.13' }}

      - name: Install test dependencies
        run: |
```
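GitHub Actions expressions have no ternary operator, so the workflow uses the `cond && a || b` idiom (safe here because `'3.12'` is truthy). The selection logic the changed line now encodes can be sketched in Python; the function name is illustrative, not part of the commit:

```python
def select_python_version(playbook: str) -> str:
    """Mirror of the workflow expression: open-webui-chat and
    vllm-inference pin Python 3.12, every other playbook gets 3.13."""
    # Equivalent to Actions' `(cond) && '3.12' || '3.13'` ternary idiom.
    return "3.12" if playbook in {"open-webui-chat", "vllm-inference"} else "3.13"
```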

### README.md

6 additions & 6 deletions

```diff
@@ -31,26 +31,26 @@ This is AMD's official repository of playbooks for AMD developer platforms. Each
 | **Running LLMs with PyTorch and AMD ROCm™ software** | Run powerful language models locally with PyTorch and ROCm |
 | **Running and Serving LLMs with LM Studio** | Set up LM Studio to run and serve large language models |
 | **Automating Workflows with n8n and Local LLMs** | Build an AI-powered news summarizer using n8n and Lemonade |
-| **Local LLM Coding with VSCode and Qwen3-Coder** | Use VSCode with locally-running Qwen3-Coder for private code assistance |
+| **Local LLM Coding with VS Code and Qwen3-Coder** | Use VS Code with locally-running Qwen3-Coder for private code assistance |
 | **Generating Images with ComfyUI and Z Image Turbo** | Create AI-generated images using ComfyUI with Z Image Turbo |
+| **Chat with LLMs in Open WebUI** | Set up Open WebUI to chat with local LLMs |
+| **Fine-tune LLMs with PyTorch and AMD ROCm™ software** | Fine-tune large language models using PyTorch and ROCm |
+| **Using Lemonade Across CPU, GPU, and NPU** | Learn how to use the Lemonade framework across CPU, GPU, and NPU |
+| **Optimized Fine-tuning with Unsloth** | Memory-efficient LoRA fine-tuning with Unsloth |
+| **Speech-to-Speech Translation** | Build a real-time speech-to-speech translation system |

 ## Coming Soon

 | Playbook | Description |
 |----------|-------------|
-| **Chat with LLMs in Open WebUI** | Set up Open WebUI to chat with local LLMs |
-| **Fine-tune LLMs with PyTorch and ROCm** | Fine-tune large language models using PyTorch and ROCm |
-| **Using Lemonade Across CPU, GPU, and NPU** | Learn how to use the Lemonade framework across CPU, GPU, and NPU |
 | **Local Computer Vision with Ryzen™ AI NPU** | Build local perception capabilities using CVML SDK on Ryzen AI and ROCm |
 | **Clustering Two Devices with llama.cpp RPC** | Distributed inference using RPC server across two AMD devices with llama.cpp |
 | **Getting Started with Ollama** | Install Ollama and run LLMs locally from the terminal, desktop app, or REST API |
 | **Getting Started Creating Agents with GAIA** | Build and deploy AI agents using the GAIA framework |
 | **Fine-tuning LLMs with LLaMA-Factory** | LoRA fine-tuning of large language models using LLaMA-Factory |
 | **Custom GPU Kernels with PyTorch ROCm** | Write and optimize custom GPU kernels using PyTorch and ROCm |
-| **Optimized Fine-tuning with Unsloth** | Memory-efficient LoRA fine-tuning with Unsloth |
 | **Quick Start on vLLM** | Run inference and serving using vLLM |
 | **Clustering with RCCL** | Multi-node cluster using two AMD devices with RCCL |
-| **Speech-to-Speech Translation** | Build a real-time speech-to-speech translation system |

 ## AMD AI Developer Program

```

### assets/banner.png

Binary file changed (−133 KB)

### playbooks/core/comfyui-image-gen/README.md

16 additions & 3 deletions

````diff
@@ -25,11 +25,24 @@ This tutorial teaches you how to use ComfyUI with the Z Image Turbo model on you
 ## Installing Dependencies

 <!-- @os:windows -->
-<!-- @require:comfyui,driver -->
+<!-- @require:driver,comfyui -->
 <!-- @os:end -->

 <!-- @os:linux -->
-<!-- @require:comfyui,rocm,driver,pytorch -->
+
+<!-- @device:halo,stx,krk,rx7900xt,rx9070xt -->
+#### Create a Virtual Environment
+On Linux, open a terminal in the directory of your choice and run the following prompt to create a venv:
+
+```bash
+sudo apt update
+sudo apt install -y python3-venv
+python3 -m venv llm-env
+source llm-env/bin/activate
+```
+<!-- @device:end -->
+
+<!-- @require:driver,rocm,pytorch,comfyui -->
 <!-- @os:end -->

 <!-- @os:windows -->
@@ -301,7 +314,7 @@ To launch ComfyUI on Windows, simply click the ComfyUI shortcut on your Desktop.

 To launch ComfyUI:

-1. Navigate to `/usr/local/bin/ComfyUI/` (or to the appropriate folder if installed manually)
+1. Ensure you are within the ComfyUI directory.
 2. Run `python3 main.py --use-pytorch-cross-attention`

 ComfyUI starts a local web server. Open your browser to `http://127.0.0.1:8188` to access the interface.
````

### playbooks/core/lmstudio-rocm-llms/README.md

19 additions & 19 deletions

````diff
@@ -32,7 +32,7 @@ LM Studio is a powerful GUI-based wrapper for [llama.cpp](https://github.com/ggm
 Learn how to start chatting with a ChatGPT-grade LLM completely locally.

 1. Open LMStudio.
-2. Press `Ctrl + L` to open the Model Loader, select `Manually chose model load parameters`, and click on `GPT-OSS 120B`
+2. Press `Ctrl + L` to open the Model Loader, select `Manually choose model load parameters`, and click on `GPT-OSS 120B`
 3. Make sure "show advanced settings" is checked.
 4. Change `Context Length` as desired. Higher context length means more model memory, but more system memory used. Recommended for this playbook is 4096.
 5. Make sure `GPU Offload` is set to maximum and `Flash Attention` is On
@@ -80,7 +80,7 @@ LM Studio also offers an OpenAI compliant endpoint in the form of LM Studio Serv
 To set up LM Studio Server, use the following instructions:

-1. On the left hand side, click on the `Developer` tab (command line icon) or `CTRL + 2` and then click on `Server Settings`.
+1. On the left hand side, click on the `Developer` tab (command line icon) or `Ctrl + 2` and then click on `Server Settings`.
 2. (Optional): If you want to serve the model over your LAN, check `Serve on Local Network`. If you want to use with a website or extensive calling within VS Code, check `Enable CORS`.
 3. On the upper left corner, make sure the server is running by clicking on the toggle button in front of `Status`.
 4. An OpenAI compliant endpoint will now be running. The address is typically at http://127.0.0.1:1234
@@ -120,7 +120,7 @@ This model will now be accessible through the LM Studio Server endpoint and will
 Having just created the OpenAI Compatible endpoint, let's look at how to integrate this into a Python developer environment (such as VSCode) and use your system as a local API Provider.

 1. Create a Python virtual environment:
-<!-- @device:halo_box_ -->
+<!-- @device:halo_box -->
 <!-- @os:windows -->
 On Windows, open a terminal in the directory of your choice and follow the commands to create a venv.
 ```bash
@@ -186,22 +186,22 @@ Having just created the OpenAI Compatible endpoint, let's look at how to integra
 )
 print("Attempting to connect to local LM Studio server...")

-    try:
-        # Create a simple chat completion request
-        completion = client.chat.completions.create(
-            model="local-model", # The model identifier is optional in local mode
-            messages=[
-                {"role": "system", "content": "You are a helpful coding assistant."},
-                {"role": "user", "content": "Explain Python decorators in 1 sentence"}
-            ],
-            temperature=0.7,
-        )
-        # Print the response
-        print("\nConnection Successful! Server Response:\n")
-        print(completion.choices[0].message.content)
-
-    except Exception as e:
-        print(f"\nConnection Failed: {e}. Ensure LM Studio server is running on port 1234.")
+try:
+    # Create a simple chat completion request
+    completion = client.chat.completions.create(
+        model="local-model", # The model identifier is optional in local mode
+        messages=[
+            {"role": "system", "content": "You are a helpful coding assistant."},
+            {"role": "user", "content": "Explain Python decorators in 1 sentence"}
+        ],
+        temperature=0.7,
+    )
+    # Print the response
+    print("\nConnection Successful! Server Response:\n")
+    print(completion.choices[0].message.content)
+
+except Exception as e:
+    print(f"\nConnection Failed: {e}. Ensure LM Studio server is running on port 1234.")
 ```
 <!-- @os:windows -->
 <!-- @test:id=lmstudio-ping-endpoint-windows timeout=300 hidden=True -->
````
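The playbook's example above exercises chat completions against the local endpoint. A complementary sketch, assuming LM Studio's server exposes the standard OpenAI-compatible `/v1/models` route on its default `http://127.0.0.1:1234` (the helper names here are illustrative, not from the playbook):

```python
import json
import urllib.request

def extract_model_ids(payload: dict) -> list:
    """Pull model IDs out of an OpenAI-style /v1/models response body."""
    return [m["id"] for m in payload.get("data", [])]

def list_local_models(base_url: str = "http://127.0.0.1:1234"):
    """Query the local server for its available models (network call)."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return extract_model_ids(json.load(resp))
```

Listing models first is a cheap way to confirm the server is up before issuing a full chat-completion request.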

### playbooks/core/pytorch-rocm-llms/README.md

3 additions & 3 deletions

```diff
@@ -23,7 +23,7 @@ This tutorial uses PyTorch powered by AMD ROCm™ software to run models that ca

 ### Create a Virtual Environment

-<!-- @device:halo_box_ -->
+<!-- @device:halo_box -->
 <!-- @os:windows -->
 On Windows, open a terminal in the directory of your choice and follow the commands to create a venv with ROCm+Pytorch already installed.
 <!-- @test:id=create-venv timeout=60 -->
@@ -86,10 +86,10 @@ source llm-env/bin/activate

 ### Installing Basic Dependencies
 <!-- @os:linux -->
-<!-- @require:rocm,pytorch,driver -->
+<!-- @require:driver,rocm,pytorch -->
 <!-- @os:end -->
 <!-- @os:windows -->
-<!-- @require:pytorch,driver -->
+<!-- @require:driver,pytorch -->
 <!-- @os:end -->

 ### Installing Additional Dependencies
```

### playbooks/dependencies/comfyui.md

4 additions & 27 deletions

````diff
@@ -17,47 +17,24 @@ SPDX-License-Identifier: MIT
 <!-- @os:end -->

 <!-- @os:linux -->
-
-#### Create a Virtual Environment
-<!-- @device:halo_box_ -->
-On Linux, open a terminal in the directory of your choice and run the following prompt to create a venv with ROCm+Pytorch already installed:
-
-```bash
-sudo apt update
-sudo apt install -y python3-venv
-python3 -m venv llm-env --system-site-packages
-source llm-env/bin/activate
-```
-<!-- @device:end -->
-
-<!-- @device:halo,stx,krk,rx7900xt,rx9070xt -->
-On Linux, open a terminal in the directory of your choice and run the following prompt to create a venv:
-
-```bash
-sudo apt update
-sudo apt install -y python3-venv
-python3 -m venv llm-env
-source llm-env/bin/activate
-```
-<!-- @device:end -->
-
 #### Clone ComfyUI
 ```bash
 git clone https://github.com/Comfy-Org/ComfyUI.git
 ```

-#### Optionally checkout a specific version
+#### (Optional) Checkout a specific version
 ```bash
-git checkout v0.17.2
+git checkout v0.19.2
 ```

 #### Install ComfyUI requirements

 With the Python virtual environment activated, run:
 ```bash
+cd ComfyUI
 pip install -r requirements.txt
 ```

-> **Note**: See [ComfyUI GitHub](https://github.com/comfyanonymous/ComfyUI) for more information.
+> **Note**: See [ComfyUI GitHub](https://github.com/comfy-org/ComfyUI) for more information.

 <!-- @os:end -->
````

### playbooks/dependencies/driver.md

5 additions & 3 deletions

````diff
@@ -4,9 +4,8 @@ Copyright Advanced Micro Devices, Inc.
 SPDX-License-Identifier: MIT
 -->

-### AMD GPU Driver
-
 <!-- @os:windows -->
+### AMD GPU Driver

 Update to the latest AMD GPU driver using `AMD Software: Adrenalin Edition™`.

@@ -22,10 +21,12 @@ Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion
 <!-- @os:end -->

 <!-- @os:linux -->
+<!-- @device:rx7900xt,rx9070xt -->
+### AMD GPU Driver

 Download and install the latest AMD GPU driver for Linux:

-1. Visit the [AMD Linux Drivers](https://amd.com/en/support/download/linux-drivers.html) page.
+1. Visit the [AMD Linux Drivers](https://www.amd.com/en/support/download/linux-drivers.html) page.
 2. Follow the installation instructions provided on the download page.

 <!-- @test:id=amd-gpu-visible-linux timeout=60 hidden=True -->
@@ -39,4 +40,5 @@ test -d /opt/rocm
 test -e /opt/rocm/lib/libroctx64.so.4 -o -e /opt/rocm/lib/libroctx64.so
 ```
 <!-- @test:end -->
+<!-- @device:end -->
 <!-- @os:end -->
````

### playbooks/dependencies/rocm.md

28 additions & 35 deletions

````diff
@@ -6,54 +6,47 @@ SPDX-License-Identifier: MIT

 ### ROCm

-<!-- @device:halo_box,halo,stx -->
-#### 1. Install AMD ROCm™ software on Linux (Ubuntu 24.04)
-
-These steps install the **system ROCm 7.2.1 runtime** on Ubuntu 24.04.
-> Note: ROCm is a **system-wide install** on Linux.
-
+**Add the current user to the render and video groups.**
 ```bash
-sudo apt update
-wget https://repo.radeon.com/amdgpu-install/7.2.1/ubuntu/noble/amdgpu-install_7.2.1.70201-1_all.deb
-sudo apt install ./amdgpu-install_7.2.1.70201-1_all.deb
-sudo amdgpu-install -y --usecase=rocm --no-dkms
+sudo usermod -a -G render,video $LOGNAME
 ```
-
-#### 2. Set the correct user permissions
+**Restart your system to apply the settings.**
 ```bash
-sudo usermod -aG render,video $USER
+sudo reboot
 ```
-
-#### 3. Reboot the system
+**Install ROCm in the created virtual environment.**
+> **Note**: Ensure the virtual environment is active before proceeding.
+<!-- @device:halo,halo_box -->
 ```bash
-sudo reboot
+python -m pip install --upgrade pip
+python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1151/ "rocm[libraries,devel]"
+
 ```
-This is important for the runtime stack and permissions to settle.
+<!-- @device:end -->

-#### 4. Verify that ROCm is installed correctly and usable
+<!-- @device:krk -->

-<!-- @test:id=verify-linux-rocm-installation timeout=180 -->
 ```bash
-# Check ROCm path (paths should exist)
-ls -l /opt/rocm
-ls -l /opt/rocm/lib/libroctx64.so*
+python -m pip install --upgrade pip
+python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1152/ "rocm[libraries,devel]"

-# Check ROCm device files (Device files owned by the render group should be visible)
-ls -l /dev/kfd
-ls -l /dev/dri/renderD*
+```
+<!-- @device:end -->

-# Check user groups ($USER should be listed in both render and video)
-id
-groups
+<!-- @device:stx -->

-# Check ROCm with rocminfo ('Permission denied' error should NOT be seen)
-rocminfo | sed -n '1,120p'
+```bash
+python -m pip install --upgrade pip
+python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1150/ "rocm[libraries,devel]"

-# Check installed ROCm version
-cat /opt/rocm/.info/version
 ```
-<!-- @test:end -->
-
-Refer this [official documentation](https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installryz/native_linux/install-ryzen.html) for more info.
+<!-- @device:end -->

+<!-- @device:rx7900xt,rx9070xt -->
+```bash
+python -m pip install --upgrade pip
+python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx120x-all/ "rocm[libraries,devel]"
+```
 <!-- @device:end -->
+
+For further installation help, please see this [link](https://rocm.docs.amd.com/en/7.12.0-preview/install/rocm.html?fam=ryzen&gpu=max-pro-395&os=ubuntu&os-version=24.04&i=pip).
````
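The rocm.md rewrite above replaces a single system-wide install with per-device pip wheel indexes, one gfx target per device block. That mapping can be summarized in a small Python sketch; the helper and table names are hypothetical, but the device-to-index pairs come directly from the diff:

```python
# Device family -> gfx target of its ROCm wheel index, as listed in the diff.
GFX_INDEX = {
    "halo": "gfx1151",
    "halo_box": "gfx1151",
    "krk": "gfx1152",
    "stx": "gfx1150",
    "rx7900xt": "gfx120x-all",
    "rx9070xt": "gfx120x-all",
}

def rocm_pip_command(device: str) -> str:
    """Build the pip install command for a device's ROCm wheel index."""
    target = GFX_INDEX[device]  # raises KeyError for unsupported devices
    return (
        "python -m pip install "
        f"--index-url https://repo.amd.com/rocm/whl/{target}/ "
        '"rocm[libraries,devel]"'
    )
```

Keeping the mapping in one table makes it easy to see that two device families (halo/halo_box and the two Radeon cards) share a wheel index.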
