Merged
42 commits
615eba2
install cuda-cudart-dev-*
mitchdz Oct 23, 2025
ed9a7e6
update torch/torchvision to cu12 versions
mitchdz Oct 23, 2025
7720916
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 23, 2025
603e74f
use venv instead of --break-system-packages
mitchdz Oct 24, 2025
10a7ccc
remove cuda_major 11, add 13
mitchdz Oct 25, 2025
587f6bc
fix missing slash
mitchdz Oct 25, 2025
f23bf26
add note about cuda 12.x driver; split pyproject into *.cu{12,13}
mitchdz Oct 26, 2025
d101bca
update pyproject.toml.cu13 deps
mitchdz Oct 26, 2025
5d61f7a
copy all pyproject templates
mitchdz Oct 26, 2025
12de0fa
bump cupy
mitchdz Oct 26, 2025
f4e2ffe
explicitly copy cu13 pyproject template for metapackage build
mitchdz Oct 26, 2025
5f7d16f
remove bad cp
mitchdz Oct 27, 2025
5fe3f62
make docs formatting happy
mitchdz Oct 27, 2025
36dd70f
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 27, 2025
6b27b5a
fix weird formatting for CUDA driver version
mitchdz Oct 27, 2025
ccb8859
update pyproject metadata to accurately reflect capabilities
mitchdz Oct 27, 2025
470301b
copy cuda 13 pyproject template for test in devenv
mitchdz Oct 27, 2025
e82745e
unitary_compiliation pin huggingface-hub to 0.36.0
mitchdz Oct 27, 2025
94d6db9
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 27, 2025
8ed8ff5
add debug messaging for python metapackage
mitchdz Oct 28, 2025
17bacb3
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 28, 2025
8bee2eb
add symlink for pyproject.toml template to be cu13 by default
mitchdz Oct 28, 2025
051a56b
add cublas* to dynlibs
mitchdz Oct 28, 2025
9b5182f
true out when cp pyproject. Same as symlink so will error.
mitchdz Oct 28, 2025
04879c2
place MANIFEST.in properly
mitchdz Oct 28, 2025
e09453a
ignore license check for new MANIFEST file
mitchdz Oct 28, 2025
2d4f052
make spellchecker happy
mitchdz Oct 28, 2025
b856a7d
make spellcheck happy
mitchdz Oct 28, 2025
c3a7334
link more nvidia libraries for cu13
mitchdz Oct 28, 2025
8b7a5e2
Compatibility fixes for cupy 13.5+
1tnguyen Jul 14, 2025
6a97ce1
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 29, 2025
8b19f7e
upgrade cupy to 13.6.0 for cu12
mitchdz Oct 31, 2025
4db45ec
install nvidia-curand-cu in docker image
mitchdz Oct 31, 2025
4c97e8c
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 31, 2025
f65740d
Merge remote-tracking branch 'origin/main' into test-publishing
mitchdz Oct 31, 2025
03c2595
remove CUDA 11 references from docs
mitchdz Oct 31, 2025
6d70560
make code formatting happy
mitchdz Oct 31, 2025
33f1924
make spell format happy
mitchdz Oct 31, 2025
fb3b3b3
Merge branch 'main' into test-publishing
mitchdz Oct 31, 2025
6925dbb
finish removing cu11, and add cuda-quantum-cu13 to setup.py
mitchdz Oct 31, 2025
e330c0e
appease the code formatting gods
mitchdz Oct 31, 2025
5fdfafe
Update python/README.md.in
mitchdz Oct 31, 2025
4 changes: 1 addition & 3 deletions .github/workflows/docker_images.yml
@@ -494,9 +494,7 @@ jobs:

platform_tag=${{ needs.metadata.outputs.platform_tag }}
cuda_major=`echo ${{ inputs.cuda_version }} | cut -d . -f1`
if [ "$cuda_major" == "11" ]; then
deprecation_notice="**Note**: Support for CUDA 11 will be removed in future releases. Please update to CUDA 12."
fi
deprecation_notice=""
image_tag=${platform_tag:+$platform_tag-}${cuda_major:+cu${cuda_major}-}
if ${{ github.event.pull_request.number != '' }} || [ -n "$(echo ${{ github.ref_name }} | grep pull-request/)" ]; then
pr_number=`echo ${{ github.ref_name }} | grep -o [0-9]*`
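The hunk above drops the CUDA 11 deprecation notice and keeps the tag composition logic. A minimal sketch of how that `image_tag` is assembled (the values below are illustrative stand-ins, not taken from a real workflow run): `${var:+word}` expands to `word` only when `var` is set and non-empty, so empty components are skipped cleanly.

```shell
# Illustrative inputs; in the workflow these come from job outputs.
platform_tag="linux-amd64"
cuda_version="12.6"

# Extract the major version, e.g. 12.6 -> 12.
cuda_major=$(echo "$cuda_version" | cut -d . -f1)

# ${var:+...} appends each segment (with trailing dash) only if set.
image_tag="${platform_tag:+${platform_tag}-}${cuda_major:+cu${cuda_major}-}"
echo "$image_tag"   # linux-amd64-cu12-
```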
45 changes: 19 additions & 26 deletions .github/workflows/publishing.yml
@@ -964,24 +964,15 @@ jobs:
dpkg -i cuda-keyring_1.1-1_all.deb
cuda_version_suffix="$(echo ${{ matrix.cuda_version }} | tr . -)"
apt-get update
if [ $(echo ${{ matrix.cuda_version }} | cut -d . -f1) -gt 11 ]; then
apt-get install -y --no-install-recommends \
cuda-cudart-$cuda_version_suffix \
cuda-nvrtc-$cuda_version_suffix \
libnvjitlink-$cuda_version_suffix \
libcurand-$cuda_version_suffix \
libcublas-$cuda_version_suffix \
libcusparse-$cuda_version_suffix \
libcusolver-$cuda_version_suffix
else
apt-get install -y --no-install-recommends \
cuda-cudart-$cuda_version_suffix \
cuda-nvrtc-$cuda_version_suffix \
libcurand-$cuda_version_suffix \
libcublas-$cuda_version_suffix \
libcusparse-$cuda_version_suffix \
libcusolver-$cuda_version_suffix
fi
apt-get install -y --no-install-recommends \
cuda-cudart-$cuda_version_suffix \
cuda-cudart-dev-$cuda_version_suffix \
cuda-nvrtc-$cuda_version_suffix \
libnvjitlink-$cuda_version_suffix \
libcurand-$cuda_version_suffix \
libcublas-$cuda_version_suffix \
libcusparse-$cuda_version_suffix \
libcusolver-$cuda_version_suffix
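The unified branch above relies on `cuda_version_suffix`, derived earlier with `tr . -`: apt packages in NVIDIA's repository encode the dotted CUDA version with dashes. A small sketch of that derivation (the version value is illustrative):

```shell
# Illustrative version; the workflow takes it from the job matrix.
CUDA_VERSION="12.6"

# apt package names use a dashed suffix: 12.6 -> 12-6.
cuda_version_suffix=$(echo "$CUDA_VERSION" | tr . -)
echo "cuda-cudart-dev-${cuda_version_suffix}"   # cuda-cudart-dev-12-6
```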

- name: Runtime dependencies (dnf)
if: startsWith(matrix.os_image, 'redhat')
@@ -1048,7 +1039,7 @@ jobs:
strategy:
matrix:
platform: ['amd64-gpu-a100', 'arm64-gpu-a100']
cuda_major: ['', '11', '12']
cuda_major: ['', '12', '13']
fail-fast: false

runs-on: linux-${{ matrix.platform }}-latest-1
Expand Down Expand Up @@ -1086,20 +1077,22 @@ jobs:
# These simple steps are only expected to work for
# test cases that don't require MPI.
# Create clean python3 environment.
apt-get update && apt-get install -y --no-install-recommends python3 python3-pip
mkdir -p /tmp/packages && mv /tmp/wheels/* /tmp/packages && rmdir /tmp/wheels
apt-get update && apt-get install -y --no-install-recommends python3 python3-pip python3-venv

python3 -m pip install pypiserver
server=`find / -name pypi-server -executable -type f`
$server run -p 8080 /tmp/packages &
# Make a place for local wheels
mkdir -p /tmp/packages && mv /tmp/wheels/* /tmp/packages && rmdir /tmp/wheels

# Create and activate virtual environment
python3 -m venv /opt/cudaq-venv
source /opt/cudaq-venv/bin/activate
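The venv switch above replaces the earlier `--break-system-packages` approach (see the `use venv instead of --break-system-packages` commit): on PEP 668 "externally managed" distributions such as recent Debian/Ubuntu, system pip refuses to install into the system site-packages. A minimal sketch of the pattern, with an illustrative path instead of `/opt/cudaq-venv`:

```shell
# Illustrative location; the workflow uses /opt/cudaq-venv.
venv_dir="$(mktemp -d)/cudaq-venv"

# Create and activate an isolated environment; pip then installs
# into the venv rather than the system site-packages.
python3 -m venv "$venv_dir"
source "$venv_dir/bin/activate"
python3 -c 'import sys; print(sys.prefix)'   # prints a path inside the venv
```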

if [ -n "${{ matrix.cuda_major }}" ]; then
pip install cuda-quantum-cu${{ matrix.cuda_major }}==${{ needs.assets.outputs.cudaq_version }} -v \
--extra-index-url http://localhost:8080
--find-links "file:///tmp/packages"
else
pip install --upgrade pip
pip install cudaq==${{ needs.assets.outputs.cudaq_version }} -v \
--extra-index-url http://localhost:8080 \
--find-links "file:///tmp/packages" \
2>&1 | tee /tmp/install.out

if [ -z "$(cat /tmp/install.out | grep -o 'Autodetection succeeded')" ]; then
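The `grep` check above verifies that the `cudaq` metapackage's CUDA autodetection ran during install. A sketch of the same log check against a fabricated install log (the log contents here are invented for illustration, not real pip output):

```shell
# Fabricated stand-in for the pip install transcript.
cat > /tmp/install.out <<'EOF'
Looking in links: file:///tmp/packages
Autodetection succeeded
Successfully installed cudaq
EOF

# Same test shape as the workflow: empty grep output means the
# autodetection marker never appeared in the log.
if [ -z "$(grep -o 'Autodetection succeeded' /tmp/install.out)" ]; then
  status="failed"
else
  status="ok"
fi
echo "$status"   # ok
```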
2 changes: 1 addition & 1 deletion .github/workflows/python_metapackages.yml
@@ -60,7 +60,7 @@ jobs:
package_name=cudaq
cuda_version_requirement="12.x or 13.x"
cuda_version_conda=12.4.0 # only used as example in the install script
deprecation_notice="**Note**: Support for CUDA 11 will be removed in future releases. Please update to CUDA 12."
deprecation_notice=""
cat python/README.md.in > python/metapackages/README.md
for variable in package_name cuda_version_requirement cuda_version_conda deprecation_notice; do
sed -i "s/.{{[ ]*$variable[ ]*}}/${!variable}/g" python/metapackages/README.md
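The loop above substitutes `{{ variable }}` placeholders in the README template via bash indirect expansion (`${!variable}`). A simplified sketch with a throwaway file — the placeholder pattern here is reduced for clarity (the workflow's sed pattern also consumes a leading character), and the variable list is a subset of the real one:

```shell
# Throwaway template with two placeholders.
cat > /tmp/readme.tmp <<'EOF'
Install the {{ package_name }} package (requires CUDA {{ cuda_version_requirement }}).
EOF

package_name=cudaq
cuda_version_requirement="12.x or 13.x"

# ${!variable} is bash indirection: it expands to the value of the
# variable whose name is stored in $variable.
for variable in package_name cuda_version_requirement; do
  sed -i "s/{{[ ]*$variable[ ]*}}/${!variable}/g" /tmp/readme.tmp
done
cat /tmp/readme.tmp
```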
1 change: 1 addition & 0 deletions .licenserc.yaml
@@ -41,6 +41,7 @@ header:
- 'include/cudaq/Optimizer/CodeGen/OptUtils.h'
- 'lib/Optimizer/CodeGen/OptUtils.cpp'
- 'runtime/cudaq/algorithms/optimizers/nlopt/nlopt-src'
- 'python/metapackages/MANIFEST.in'

comment: on-failure

26 changes: 9 additions & 17 deletions docker/build/assets.Dockerfile
@@ -164,7 +164,8 @@ RUN source /cuda-quantum/scripts/configure_build.sh && \

## [Python support]
FROM prereqs AS python_build
ADD "pyproject.toml" /cuda-quantum/pyproject.toml
# Bring all possible templates into the image, then pick the exact one
ADD pyproject.toml.cu* /cuda-quantum/
ADD "python" /cuda-quantum/python
ADD "cmake" /cuda-quantum/cmake
ADD "include" /cuda-quantum/include
@@ -186,22 +187,13 @@ RUN dnf install -y --nobest --setopt=install_weak_deps=False ${PYTHON}-devel &&
${PYTHON} -m ensurepip --upgrade && \
${PYTHON} -m pip install numpy build auditwheel patchelf

RUN cd /cuda-quantum && source scripts/configure_build.sh && \
if [ "${CUDA_VERSION#12.}" != "${CUDA_VERSION}" ]; then \
cublas_version=12.0 && \
cusolver_version=11.4 && \
cuda_runtime_version=12.0 && \
cuda_nvrtc_version=12.0 && \
cupy_version=13.4.1 && \
sed -i "s/-cu13/-cu12/g" pyproject.toml && \
sed -i "s/-cuda13/-cuda12/g" pyproject.toml && \
sed -i -E "s/cupy-cuda[0-9]+x/cupy-cuda12x/g" pyproject.toml && \
sed -i -E "s/(cupy-cuda[0-9]+x? ~= )[0-9\.]*/\1${cupy_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cublas-cu[0-9]* ~= )[0-9\.]*/\1${cublas_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cusolver-cu[0-9]* ~= )[0-9\.]*/\1${cusolver_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cuda-nvrtc-cu[0-9]* ~= )[0-9\.]*/\1${cuda_nvrtc_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cuda-runtime-cu[0-9]* ~= )[0-9\.]*/\1${cuda_runtime_version}/g" pyproject.toml; \
fi && \
RUN cd /cuda-quantum && \
. scripts/configure_build.sh && \
case "${CUDA_VERSION%%.*}" in \
12) cp pyproject.toml.cu12 pyproject.toml || true ;; \
13) cp pyproject.toml.cu13 pyproject.toml || true ;; \
*) echo "Unsupported CUDA_VERSION=${CUDA_VERSION}"; exit 1 ;; \
esac && \
# Needed to retrigger the LLVM build, since the MLIR Python bindings
# are not built in the prereqs stage.
rm -rf "${LLVM_INSTALL_PREFIX}" && \
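The new template-selection step replaces the long chain of `sed` rewrites: instead of patching a cu13 `pyproject.toml` down to cu12, the build now copies the matching pre-split template. A sketch of that dispatch with stand-in files (file contents and the version value are illustrative):

```shell
# Stand-in templates; the real ones are pyproject.toml.cu12/.cu13
# with full dependency lists.
workdir=$(mktemp -d) && cd "$workdir"
echo "cu12 deps" > pyproject.toml.cu12
echo "cu13 deps" > pyproject.toml.cu13

CUDA_VERSION="13.0"                # illustrative

# ${CUDA_VERSION%%.*} removes everything from the first dot onward,
# leaving only the major version for the case dispatch.
case "${CUDA_VERSION%%.*}" in
  12) cp pyproject.toml.cu12 pyproject.toml ;;
  13) cp pyproject.toml.cu13 pyproject.toml ;;
  *)  echo "Unsupported CUDA_VERSION=${CUDA_VERSION}" >&2; exit 1 ;;
esac
cat pyproject.toml   # cu13 deps
```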
3 changes: 3 additions & 0 deletions docker/release/cudaq.ext.Dockerfile
@@ -49,6 +49,9 @@ RUN if [ -x "$(command -v pip)" ]; then \
pip install --no-cache-dir mpi4py~=3.1; \
fi; \
fi
# PyPI CUDA packages are suffixed with the major version only (e.g. nvidia-curand-cu12),
# unlike apt packages, which use the dashed full version.
RUN cuda_major=$(echo ${CUDA_VERSION} | cut -d . -f1) && \
    pip install nvidia-curand-cu${cuda_major}

# Make sure that apt-get remains updated at the end!;
# If we don't do that, then apt-get will get confused when some CUDA
# components are already installed but not all of them.
25 changes: 8 additions & 17 deletions docker/release/cudaq.wheel.Dockerfile
@@ -37,23 +37,14 @@ RUN echo "Building MLIR bindings for python${python_version}" && \
LLVM_CMAKE_CACHE=/cmake/caches/LLVM.cmake LLVM_SOURCE=/llvm-project \
bash /scripts/build_llvm.sh -c Release -v

# Patch the pyproject.toml file to change the CUDA version if needed
RUN cd cuda-quantum && sed -i "s/README.md.in/README.md/g" pyproject.toml && \
if [ "${CUDA_VERSION#12.}" != "${CUDA_VERSION}" ]; then \
cublas_version=12.0 && \
cusolver_version=11.4 && \
cuda_runtime_version=12.0 && \
cuda_nvrtc_version=12.0 && \
cupy_version=13.4.1 && \
sed -i "s/-cu13/-cu12/g" pyproject.toml && \
sed -i "s/-cuda13/-cuda12/g" pyproject.toml && \
sed -i -E "s/cupy-cuda[0-9]+x/cupy-cuda12x/g" pyproject.toml && \
sed -i -E "s/(cupy-cuda[0-9]+x? ~= )[0-9\.]*/\1${cupy_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cublas-cu[0-9]* ~= )[0-9\.]*/\1${cublas_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cusolver-cu[0-9]* ~= )[0-9\.]*/\1${cusolver_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cuda-nvrtc-cu[0-9]* ~= )[0-9\.]*/\1${cuda_nvrtc_version}/g" pyproject.toml && \
sed -i -E "s/(nvidia-cuda-runtime-cu[0-9]* ~= )[0-9\.]*/\1${cuda_runtime_version}/g" pyproject.toml; \
fi
# Configure the build based on the CUDA version
RUN cd /cuda-quantum && \
. scripts/configure_build.sh && \
case "${CUDA_VERSION%%.*}" in \
12) cp pyproject.toml.cu12 pyproject.toml || true ;; \
13) cp pyproject.toml.cu13 pyproject.toml || true ;; \
*) echo "Unsupported CUDA_VERSION=${CUDA_VERSION}"; exit 1 ;; \
esac

# Create the README
RUN cd cuda-quantum && cat python/README.md.in > python/README.md && \
@@ -34,7 +34,7 @@
"source": [
"# Install the relevant packages.\n",
"\n",
"!pip install matplotlib==3.8.4 torch==2.0.1+cu118 torchvision==0.15.2+cu118 scikit-learn==1.4.2 -q --extra-index-url https://download.pytorch.org/whl/cu118"
"!pip install matplotlib==3.8.4 torch==2.9.0+cu126 torchvision==0.24.0+cu126 scikit-learn==1.4.2 -q --extra-index-url https://download.pytorch.org/whl/cu126"
]
},
{
4 changes: 2 additions & 2 deletions docs/sphinx/using/install/data_center_install.rst
@@ -255,8 +255,8 @@ Python-specific tools:

.. note::

The wheel build by default is configured to depend on CUDA 12. To build a wheel for CUDA 11,
you need to adjust the dependencies and project name in the `pyproject.toml` file.
The wheel build by default is configured to depend on CUDA 13. To build a wheel for CUDA 12,
you need to copy the `pyproject.toml.cu12` file to `pyproject.toml`.

From within the folder where you cloned the CUDA-Q repository, run the following
command to build the CUDA-Q Python wheel:
12 changes: 7 additions & 5 deletions docs/sphinx/using/install/local_installation.rst
@@ -834,10 +834,10 @@ by running the command
.. note::

Please check if you have an existing installation of the `cuda-quantum`,
`cudaq-quantum-cu11`, or `cuda-quantum-cu12` package,
`cuda-quantum-cu12`, or `cuda-quantum-cu13` package,
and uninstall it prior to installing `cudaq`. The `cudaq` package supersedes the
`cuda-quantum` package and will install a suitable binary distribution (either
`cuda-quantum-cu11` or `cuda-quantum-cu12`) for your system. Multiple versions
`cuda-quantum-cu12` or `cuda-quantum-cu13`) for your system. Multiple versions
of a CUDA-Q binary distribution will conflict with each other and not work properly.

If you previously installed the CUDA-Q pre-built binaries, you should first uninstall your
@@ -892,9 +892,11 @@ The following table summarizes the required components.
* - NVIDIA GPU with Compute Capability
- 7.5+
* - CUDA
- 12.x (Driver 525.60.13+), 13.x (Driver 580.65.06+)

Detailed information about supported drivers for different CUDA versions can be found `here <https://docs.nvidia.com/deploy/cuda-compatibility/>`__.
- • 12.x (Driver 525.60.13+) – For GPUs that support CUDA Forward Compatibility
• 12.6+ (Driver 560.35.05+) – For all GPUs with supported architecture
• 13.x (Driver 580.65.06+)

Detailed information about supported drivers for different CUDA versions can be found `here <https://docs.nvidia.com/deploy/cuda-compatibility/>`__. For more information on GPU forward compatibility, please refer to `this page <https://docs.nvidia.com/deploy/cuda-compatibility/forward-compatibility.html>`__.

.. note::

2 changes: 1 addition & 1 deletion docs/sphinx/using/quick_start.rst
@@ -42,7 +42,7 @@ Install CUDA-Q

To develop CUDA-Q applications using C++, please make sure you have a C++ toolchain installed
that supports C++20, for example `g++` version 11 or newer.
Download the `install_cuda_quantum` file for your processor architecture and CUDA version (`_cu11` suffix for CUDA 11 and `_cu12` suffix for CUDA 12)
Download the `install_cuda_quantum` file for your processor architecture and CUDA version (`_cu12` suffix for CUDA 12 and `_cu13` suffix for CUDA 13)
from the assets of the respective `GitHub release <https://github.com/NVIDIA/cuda-quantum/releases>`__;
that is, the file with the `aarch64` extension for ARM processors, and the one with `x86_64` for, e.g., Intel and AMD processors.

83 changes: 0 additions & 83 deletions pyproject.toml

This file was deleted.

1 change: 1 addition & 0 deletions pyproject.toml