
main to custom gpu kernels branch #174

Closed
sdevinenamd wants to merge 21 commits into custom-gpu-kernels from main

Conversation

@sdevinenamd
Collaborator

No description provided.

kovtcharov-amd and others added 21 commits March 17, 2026 16:35
* initial commit of gaia-agents

* Improve GAIA playbook docs and GPU detection

* fix NPU readiness check; requires an updated amd-gaia package (>v0.16.1), coming soon.

* Version instructions

---------

Co-authored-by: Kalin Ovtcharov <kalin@extropolis.ai>
Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>
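The "Improve GAIA playbook docs and GPU detection" commit doesn't show its implementation here, but a minimal sketch of one common detection approach is to probe for the `rocm-smi` CLI that ships with the ROCm stack. Everything in this snippet besides the `rocm-smi` binary name is an illustrative assumption, not the playbook's actual code:

```shell
#!/bin/sh
# Hedged sketch: treat presence of the rocm-smi CLI as the signal that
# an AMD ROCm stack is installed. The function name and return strings
# are hypothetical; only the rocm-smi binary itself is from ROCm.
detect_amd_gpu_tooling() {
    if command -v rocm-smi >/dev/null 2>&1; then
        echo "rocm"
    else
        echo "none"
    fi
}

detect_amd_gpu_tooling
```

A real playbook check would likely go further (parsing `rocm-smi` output for device names), but tool presence is a cheap first gate for CI.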
* Added tests for Windows, and the Z-Image workflow JSON

* Windows Desktop Installer, Linux git

* Added comfyui-sync-requirements-windows test

* Installing comfyui-frontend-package and verifying installation

* Updating models copy into AppData folder

* Updating the rocm-smi test for Linux

* Updating workflows to python 3.12

* Revert "Updating workflows to python 3.12"

This reverts commit 18b5480.

* Ensure Python can find ROCm shared libraries

* Python 3.12 and pip install requests

* Updating all workflows to use Python 3.12

* Update playbooks.json with required_platforms
* Added automated tests for the lmstudio-rocm-llms playbook

* LM Studio Model Key updated

Updated model key from `openai/gpt-oss-120b` to `gpt-oss-120b`.

* Increase max_tokens, reduced code (combined bash and powershell)

* Updating model loading and unloading logic

* Preventing CI failure even if there is no model to unload

* Remove model unloading

* Update lmstudio-chat-gpt-oss test

* Enable Kracken on Windows and Linux

* Removing KRK, adding a test to stop lms server

* Enabling Linux Halo

* Added model unloading tests, unique model identifier for every load

* Update lmstudio.md as on main

---------

Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>
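Several commits in this batch ("Ensure Python can find ROCm shared libraries", "Updating the rocm-smi test for Linux") revolve around runtime library path setup on Linux. The usual fix is to prepend the ROCm library directory to the dynamic linker search path before launching Python; a minimal sketch, assuming the default `/opt/rocm` install prefix (the actual path used in these workflows is not shown in the commits):

```shell
#!/bin/sh
# /opt/rocm is an assumed default prefix; real installs may live elsewhere,
# so allow an override via an already-set ROCM_PATH.
ROCM_PATH="${ROCM_PATH:-/opt/rocm}"

# Prepend the ROCm library directory so Python extensions linked against
# ROCm (e.g. PyTorch ROCm wheels) can resolve their .so files at import.
export LD_LIBRARY_PATH="${ROCM_PATH}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}"

echo "$LD_LIBRARY_PATH"
```

In a GitHub Actions workflow the equivalent is typically appending the path to `$GITHUB_ENV` so it persists across steps rather than exporting it in a single shell.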
* minor tips and fixes

* minor fix

* improving pytorch rocm playbook

* small fix for tests

* other changes

---------

Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>
* Add AMD copyright to key files

* fix formatting issue

* fix formatting issue

---------

Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>
* refactor shown platforms

* rename shown to supported

* Add device mapping

* minor bug fix
Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>
* Differentiate between halo and halo box

* device mapping

* Progress

* good selection
* add llama factory finetuning playbook

* update llama factory playbook

* update llama factory playbook

* correct a typing error

* mark pytorch setup as an optional step

* add webUI tool info

* updated some content

* updated

* update finetuning time

* small updates

* remove bitsandbytes and docker setup

* add the dependency info

* correct typo issue

* ui and text formatting

* ui

* update playbook json file

* add supported platforms

---------

Co-authored-by: Adam Lam <adamlam2@amd.com>
Co-authored-by: Daniel Holanda <holand.daniel@gmail.com>

7 participants