Merged

Changes from 8 commits
17 changes: 15 additions & 2 deletions playbooks/core/comfyui-image-gen/README.md
@@ -25,11 +25,24 @@ This tutorial teaches you how to use ComfyUI with the Z Image Turbo model on you
## Installing Dependencies

<!-- @os:windows -->
- <!-- @require:comfyui,driver -->
+ <!-- @require:driver,comfyui -->
<!-- @os:end -->

<!-- @os:linux -->
<!-- @require:comfyui,rocm,driver,pytorch -->

<!-- @device:halo,stx,krk,rx7900xt,rx9070xt -->
#### Create a Virtual Environment
On Linux, open a terminal in the directory of your choice and run the following commands to create a venv:

```bash
sudo apt update
sudo apt install -y python3-venv
python3 -m venv llm-env
source llm-env/bin/activate
```
<!-- @device:end -->
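Before installing anything into the venv, it can help to confirm that activation worked. A minimal sketch (works for any venv, including `llm-env` above):

```shell
# Inside an activated venv, sys.prefix differs from sys.base_prefix,
# so this prints True only when a venv is active.
python3 -c 'import sys; print(sys.prefix != sys.base_prefix)'
```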

<!-- @require:driver,rocm,pytorch,comfyui -->
<!-- @os:end -->

<!-- @os:windows -->
4 changes: 2 additions & 2 deletions playbooks/core/lmstudio-rocm-llms/README.md
@@ -32,7 +32,7 @@ LM Studio is a powerful GUI-based wrapper for [llama.cpp](https://github.com/ggm
Learn how to start chatting with a ChatGPT-grade LLM completely locally.

1. Open LM Studio.
- 2. Press `Ctrl + L` to open the Model Loader, select `Manually chose model load parameters`, and click on `GPT-OSS 120B`
+ 2. Press `Ctrl + L` to open the Model Loader, select `Manually choose model load parameters`, and click on `GPT-OSS 120B`
3. Make sure "show advanced settings" is checked.
4. Change `Context Length` as desired. A higher context length gives the model more conversation memory but uses more system memory; 4096 is recommended for this playbook.
5. Make sure `GPU Offload` is set to maximum and `Flash Attention` is set to On.
@@ -80,7 +80,7 @@ LM Studio also offers an OpenAI compliant endpoint in the form of LM Studio Serv

To set up LM Studio Server, use the following instructions:

- 1. On the left hand side, click on the `Developer` tab (command line icon) or `CTRL + 2` and then click on `Server Settings`.
+ 1. On the left hand side, click on the `Developer` tab (command line icon) or `Ctrl + 2` and then click on `Server Settings`.
2. (Optional): If you want to serve the model over your LAN, check `Serve on Local Network`. If you want to use it with a website or make extensive calls from within VS Code, check `Enable CORS`.
3. In the upper left corner, make sure the server is running by clicking the toggle button in front of `Status`.
4. An OpenAI compliant endpoint will now be running, typically at http://127.0.0.1:1234
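As a sketch of calling that endpoint once the server is up (the model id `gpt-oss-120b` here is an assumption — use whatever id LM Studio shows for the loaded model), a chat-completions request looks like:

```shell
# Write an OpenAI-style chat request body, then POST it to the local server.
# The model id below is hypothetical; the server must already be running.
cat > /tmp/lmstudio-req.json <<'EOF'
{
  "model": "gpt-oss-120b",
  "messages": [{"role": "user", "content": "Say hello in five words."}]
}
EOF
curl -s http://127.0.0.1:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @/tmp/lmstudio-req.json || echo "Is the LM Studio server running?"
```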
6 changes: 3 additions & 3 deletions playbooks/core/pytorch-rocm-llms/README.md
@@ -23,7 +23,7 @@ This tutorial uses PyTorch powered by AMD ROCm™ software to run models that ca

### Create a Virtual Environment

- <!-- @device:halo_box_ -->
+ <!-- @device:halo_box -->
<!-- @os:windows -->
On Windows, open a terminal in the directory of your choice and run the following commands to create a venv with ROCm+PyTorch already installed.
<!-- @test:id=create-venv timeout=60 -->
@@ -86,10 +86,10 @@ source llm-env/bin/activate

### Installing Basic Dependencies
<!-- @os:linux -->
- <!-- @require:rocm,pytorch,driver -->
+ <!-- @require:driver,rocm,pytorch -->
<!-- @os:end -->
<!-- @os:windows -->
- <!-- @require:pytorch,driver -->
+ <!-- @require:driver,pytorch -->
<!-- @os:end -->

### Installing Additional Dependencies
28 changes: 2 additions & 26 deletions playbooks/dependencies/comfyui.md
@@ -17,38 +17,14 @@ SPDX-License-Identifier: MIT
<!-- @os:end -->

<!-- @os:linux -->

#### Create a Virtual Environment
<!-- @device:halo_box_ -->
On Linux, open a terminal in the directory of your choice and run the following commands to create a venv with ROCm+PyTorch already installed:

```bash
sudo apt update
sudo apt install -y python3-venv
python3 -m venv llm-env --system-site-packages
source llm-env/bin/activate
```
<!-- @device:end -->

<!-- @device:halo,stx,krk,rx7900xt,rx9070xt -->
On Linux, open a terminal in the directory of your choice and run the following commands to create a venv:

```bash
sudo apt update
sudo apt install -y python3-venv
python3 -m venv llm-env
source llm-env/bin/activate
```
<!-- @device:end -->

#### Clone ComfyUI
```bash
git clone https://github.com/Comfy-Org/ComfyUI.git
```

- #### Optionally checkout a specific version
+ #### (Optional) Checkout a specific version
```bash
- git checkout v0.17.2
+ git checkout v0.19.2
```

#### Install ComfyUI requirements
8 changes: 5 additions & 3 deletions playbooks/dependencies/driver.md
Expand Up @@ -4,9 +4,8 @@ Copyright Advanced Micro Devices, Inc.
SPDX-License-Identifier: MIT
-->

### AMD GPU Driver

<!-- @os:windows -->
### AMD GPU Driver

Update to the latest AMD GPU driver using `AMD Software: Adrenalin Edition™`.

@@ -22,10 +21,12 @@ Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion
<!-- @os:end -->

<!-- @os:linux -->
<!-- @device:rx7900xt,rx9070xt -->
### AMD GPU Driver

Download and install the latest AMD GPU driver for Linux:

- 1. Visit the [AMD Linux Drivers](https://amd.com/en/support/download/linux-drivers.html) page.
+ 1. Visit the [AMD Linux Drivers](https://www.amd.com/en/support/download/linux-drivers.html) page.
2. Follow the installation instructions provided on the download page.

<!-- @test:id=amd-gpu-visible-linux timeout=60 hidden=True -->
@@ -39,4 +40,5 @@ test -d /opt/rocm
test -e /opt/rocm/lib/libroctx64.so.4 -o -e /opt/rocm/lib/libroctx64.so
```
<!-- @test:end -->
<!-- @device:end -->
<!-- @os:end -->
66 changes: 28 additions & 38 deletions playbooks/dependencies/rocm.md
@@ -6,54 +6,44 @@ SPDX-License-Identifier: MIT

### ROCm

<!-- @device:halo_box,halo,stx -->
#### 1. Install AMD ROCm™ software on Linux (Ubuntu 24.04)

These steps install the **system ROCm 7.2.1 runtime** on Ubuntu 24.04.
> Note: ROCm is a **system-wide install** on Linux.

**Add the current user to the render and video groups.** Restart your system after running these commands.
```bash
sudo apt update
wget https://repo.radeon.com/amdgpu-install/7.2.1/ubuntu/noble/amdgpu-install_7.2.1.70201-1_all.deb
sudo apt install ./amdgpu-install_7.2.1.70201-1_all.deb
sudo amdgpu-install -y --usecase=rocm --no-dkms
sudo usermod -a -G render,video $LOGNAME
```

- #### 2. Set the correct user permissions
+ #### Install ROCm into a virtual environment
<!-- @device:halo,halo_box -->
<!-- @test:id=install-rocm timeout=300 setup=activate-venv -->
```bash
sudo usermod -aG render,video $USER
```
```bash
python -m pip install --upgrade pip
python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1151/ "rocm[libraries,devel]"
```

#### 3. Reboot the system
```bash
sudo reboot
```
This is important so that the runtime stack and the new group permissions take effect.

#### 4. Verify that ROCm is installed correctly and usable
<!-- @test:end -->
<!-- @device:end -->

<!-- @test:id=verify-linux-rocm-installation timeout=180 -->
<!-- @device:krk -->
<!-- @test:id=install-pytorch timeout=300 setup=activate-venv -->
```bash
# Check ROCm path (paths should exist)
ls -l /opt/rocm
ls -l /opt/rocm/lib/libroctx64.so*

# Check ROCm device files (Device files owned by the render group should be visible)
ls -l /dev/kfd
ls -l /dev/dri/renderD*

# Check user groups ($USER should be listed in both render and video)
id
groups

# Check ROCm with rocminfo ('Permission denied' error should NOT be seen)
rocminfo | sed -n '1,120p'

# Check installed ROCm version
cat /opt/rocm/.info/version
```
```bash
python -m pip install --upgrade pip
python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1152/ "rocm[libraries,devel]"
```
<!-- @test:end -->
<!-- @device:end -->

Refer to this [official documentation](https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installryz/native_linux/install-ryzen.html) for more information.
<!-- @device:stx -->
<!-- @test:id=install-pytorch timeout=300 setup=activate-venv -->
```bash
python -m pip install --upgrade pip
python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx1150/ "rocm[libraries,devel]"
```
<!-- @test:end -->
<!-- @device:end -->

<!-- @device:rx7900xt,rx9070xt -->
```bash
python -m pip install --upgrade pip
python -m pip install --index-url https://repo.amd.com/rocm/whl/gfx120x-all/ "rocm[libraries,devel]"
```
<!-- @device:end -->
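The per-device install blocks above differ only in the wheel index URL, which is keyed to the GPU's gfx target. A small shell sketch of that mapping (URLs copied from the blocks above; the function name is illustrative):

```shell
# Map a gfx target to the matching ROCm wheel index URL (values from the blocks above)
gfx_index_url() {
  case "$1" in
    gfx1151) echo "https://repo.amd.com/rocm/whl/gfx1151/" ;;     # halo, halo_box
    gfx1150) echo "https://repo.amd.com/rocm/whl/gfx1150/" ;;     # stx
    gfx1152) echo "https://repo.amd.com/rocm/whl/gfx1152/" ;;     # krk
    gfx120*) echo "https://repo.amd.com/rocm/whl/gfx120x-all/" ;; # rx7900xt, rx9070xt
    *)       echo "unsupported" ;;
  esac
}
gfx_index_url gfx1151   # prints the gfx1151 index URL
```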