Releases: GPUOpen-LibrariesAndSDKs/MiniDXNN
v0.3.0
MiniDXNN v0.3.0
Upgraded MiniDXNN from Cooperative Vector (Shader Model 6.9) to LinAlg Matrix (Shader Model 6.10), with input encoding support and a new texture compression example.
Features
- **HLSL MLP with LinAlg Matrix** (`include/minidxnn/hlsl/mlp.hlsl`): Forward and backward passes using Shader Model 6.10 LinAlg Matrix operations
- **Texture compression example** (`example/03_texture_compression_with_input_encoding`): MLP training with positional and grid input encoding for texture compression
- **Matrix conversion**: Host-side matrix format conversion via `GetLinearAlgebraMatrixConversionDestinationInfo`/`ConvertLinearAlgebraMatrix`
- **C++ fallback** (`include/minidxnn/cpp/hlsl_compat.hpp`): Updated CPU execution path compatible with the LinAlg Matrix API
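To illustrate what the input-encoding feature does, here is a minimal CPU sketch of the two ideas the texture compression example combines: frequency (positional) encoding and a bilinearly interpolated feature grid. All names and layouts below are hypothetical for illustration; they do not mirror MiniDXNN's actual API.

```cpp
#include <cmath>
#include <vector>

// Map a scalar coordinate in [0, 1) to sin/cos pairs at octave frequencies,
// so the MLP can resolve high-frequency texture detail from a low-dim input.
std::vector<float> positional_encode(float x, int num_frequencies) {
    std::vector<float> out;
    out.reserve(2 * num_frequencies);
    for (int i = 0; i < num_frequencies; ++i) {
        float freq = std::ldexp(1.0f, i) * 3.14159265f; // 2^i * pi
        out.push_back(std::sin(freq * x));
        out.push_back(std::cos(freq * x));
    }
    return out;
}

// Bilinearly interpolate learned features stored at the corners of a 2-D grid;
// the interpolated feature vector becomes part of the MLP input.
std::vector<float> grid_encode(float u, float v,
                               const std::vector<float>& grid, // res*res*dim, row-major
                               int res, int dim) {
    float gu = u * (res - 1), gv = v * (res - 1);
    int u0 = static_cast<int>(gu), v0 = static_cast<int>(gv);
    int u1 = std::min(u0 + 1, res - 1), v1 = std::min(v0 + 1, res - 1);
    float fu = gu - u0, fv = gv - v0;
    auto at = [&](int vy, int ux, int d) { return grid[(vy * res + ux) * dim + d]; };
    std::vector<float> out(dim);
    for (int d = 0; d < dim; ++d) {
        float top = at(v0, u0, d) * (1 - fu) + at(v0, u1, d) * fu;
        float bot = at(v1, u0, d) * (1 - fu) + at(v1, u1, d) * fu;
        out[d] = top * (1 - fv) + bot * fv;
    }
    return out;
}
```

The grid features are trainable parameters, so the gradient of the loss flows back through the interpolation weights into the corner features during training.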
Changes from v0.2.0
- Migrated from Cooperative Vector (SM 6.9) to LinAlg Matrix (SM 6.10)
- Added `03_texture_compression_with_input_encoding` example with positional and grid encoding
- Added matrix conversion utilities for host-side weight format conversion
- Added LinAlg Matrix MLP guide (`docs/linalg_matrix_mlp.md`)
- Updated Agility SDK to 1.720-preview, DXC to v1.10.2605.2
- Updated examples and unit tests for the SM 6.10 API
Requirements
- Windows 11 with Developer Mode
- GPU that supports Shader Model 6.10 and LinAlg Matrix in D3D12 (AMD Radeon™ RX 9000 Series or equivalent NVIDIA GPUs)
- CMake 3.21+, Visual Studio 2022 (C++20)
- Agility SDK 1.720-preview, DXC v1.10.2605.2
- Python 3.8+ with PyTorch (optional, for reference training)
License
MIT — Copyright (c) 2026 Advanced Micro Devices, Inc.
v0.2.0
MiniDXNN v0.2.0
MLP training support for MiniDXNN — GPU-accelerated forward and backward passes using DirectX 12 Cooperative Vector, with a C++ fallback path.
Features
- **HLSL MLP training** (`include/minidxnn/hlsl/mlp.hlsl`): Forward and backward passes with `mininn::forward()` and `mininn::backward()`
- **Texture training example** (`example/02_texture_training`): End-to-end MLP training on GPU with SGD, Adam, and Lion optimizers
- **C++ fallback** (`include/minidxnn/cpp/hlsl_compat.hpp`): CPU execution path that compiles `mlp.hlsl` as C++ for environments without Cooperative Vector support
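As an aside on the optimizer support, the Adam update the training example relies on can be sketched on the CPU as follows. This is the standard Adam algorithm, not MiniDXNN's shader code; all names below are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct AdamState {
    std::vector<float> m, v; // first/second moment estimates
    int t = 0;               // step count, used for bias correction
};

// One Adam step over a flat weight buffer with matching gradients.
void adam_step(std::vector<float>& w, const std::vector<float>& grad,
               AdamState& s, float lr = 1e-3f,
               float beta1 = 0.9f, float beta2 = 0.999f, float eps = 1e-8f) {
    if (s.m.empty()) { s.m.assign(w.size(), 0.0f); s.v.assign(w.size(), 0.0f); }
    ++s.t;
    for (std::size_t i = 0; i < w.size(); ++i) {
        s.m[i] = beta1 * s.m[i] + (1 - beta1) * grad[i];
        s.v[i] = beta2 * s.v[i] + (1 - beta2) * grad[i] * grad[i];
        float m_hat = s.m[i] / (1 - std::pow(beta1, s.t)); // bias-corrected moments
        float v_hat = s.v[i] / (1 - std::pow(beta2, s.t));
        w[i] -= lr * m_hat / (std::sqrt(v_hat) + eps);
    }
}
```

SGD drops the moment buffers entirely, while Lion keeps only the first moment and steps by the sign of an interpolated momentum, which is why optimizer choice affects per-parameter GPU memory use.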
Changes from v0.1.0
- Added MLP backward pass in HLSL
- Added `02_texture_training` example with GPU training pipeline
- Added C++ fallback infrastructure (`hlsl_compat.hpp`)
- Added training-related unit tests (atomic operations, MLP training)
- Restructured HLSL header path from `include/hlsl/` to `include/minidxnn/hlsl/`
- Updated documentation and README
Requirements
- Windows 11 with Developer Mode
- GPU that supports Shader Model 6.9 and Cooperative Vector in D3D12 (AMD Radeon™ RX 9000 Series or equivalent NVIDIA GPUs)
- CMake 3.21+, Visual Studio 2022 (C++20)
- Agility SDK 1.717.1-preview, DXC v1.8.2505.1
- Python 3.8+ with PyTorch (optional, for reference training)
License
MIT — Copyright (c) 2026 Advanced Micro Devices, Inc.
v0.1.0
MiniDXNN v0.1.0
Initial release of MiniDXNN — a header-only HLSL library for GPU-accelerated MLP inference using DirectX 12 Cooperative Vector.
Features
- **HLSL MLP inference** (`include/hlsl/mlp.hlsl`): Configurable forward pass with `mininn::forward()`
- **Texture inference example**: Train with PyTorch, export weights, run GPU inference
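The computation behind the forward pass can be sketched on the CPU as a chain of matrix-vector products with non-linear activations. The layout chosen here (row-major weights, ReLU on hidden layers, identity on the output) is an assumption for illustration, not necessarily the library's exact convention.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Layer {
    int in, out;
    std::vector<float> w; // out x in weight matrix, row-major
    std::vector<float> b; // out bias values
};

// Run an input vector through every layer in sequence:
// y = act(W * x + b), with ReLU on hidden layers only.
std::vector<float> mlp_forward(const std::vector<Layer>& layers,
                               std::vector<float> x) {
    for (std::size_t l = 0; l < layers.size(); ++l) {
        const Layer& L = layers[l];
        std::vector<float> y(L.out);
        for (int o = 0; o < L.out; ++o) {
            float acc = L.b[o];
            for (int i = 0; i < L.in; ++i)
                acc += L.w[o * L.in + i] * x[i];
            y[o] = (l + 1 < layers.size()) ? std::max(acc, 0.0f) : acc;
        }
        x = std::move(y);
    }
    return x;
}
```

Cooperative Vector hardware accelerates exactly these small matrix-vector products across a wave of threads, which is what makes running a tiny MLP per pixel or per texel practical.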
Requirements
- Windows 10/11 with Developer Mode
- GPU that supports Shader Model 6.9 and Cooperative Vector in D3D12 (AMD Radeon™ RX 9000 Series or equivalent NVIDIA GPUs)
- CMake 3.21+, Visual Studio 2022 (C++20)
- Agility SDK 1.717.1-preview, DXC v1.8.2505.1
License
MIT — Copyright (c) 2026 Advanced Micro Devices, Inc.