Merged
Commits
37 commits
fa6ff04
Updated Backends Section
mawolf2023 Dec 16, 2024
0bc2771
Removed Logos
mawolf2023 Dec 16, 2024
6c4305e
larger image text size
mawolf2023 Dec 16, 2024
a6efc5d
Added Table and cloud section
mawolf2023 Dec 17, 2024
818e8ea
Added python/c++ tabs and Efrat's comments
mawolf2023 Dec 17, 2024
de37d2f
Updated table, Backend Figure, And condensed fp64
mawolf2023 Dec 19, 2024
eead204
fixed typo
mawolf2023 Dec 20, 2024
878079f
Merge branch 'main' into backends
schweitzpgi Jan 2, 2025
a55163c
Update docs/sphinx/using/backends/backends.rst
mawolf2023 Jan 16, 2025
d9bf00f
Review Changes 1/17
mawolf2023 Jan 17, 2025
e57067a
Figure fix
mawolf2023 Jan 17, 2025
0ff31d4
DCO Remediation Commit for Mark Wolf <mawolf@nvidia.com>
mawolf2023 Jan 22, 2025
cb47c97
Merge branch 'main' into backends
mawolf2023 Jan 27, 2025
2f2cf12
Merge branch 'main' into backends
bmhowe23 Jan 27, 2025
fa09854
* Merging with mainline
khalatepradnya Jan 29, 2025
4291f8c
Merge branch 'main' into backends
khalatepradnya Jan 29, 2025
bfb3ae1
DCO Remediation Commit for Pradnya Khalate <pkhalate@nvidia.com>
khalatepradnya Jan 29, 2025
0af0e29
* Fix spellings
khalatepradnya Jan 29, 2025
2601da2
Photonics plus multi-gpu examples and some ref updates
Jan 31, 2025
efc48e6
Merge branch 'main' into backends
khalatepradnya Jan 31, 2025
52dae08
* Fix links for docs generation
khalatepradnya Jan 31, 2025
34c85d9
* Spelling fixes
khalatepradnya Feb 1, 2025
a62c415
* Few more corrections to spellings
khalatepradnya Feb 1, 2025
e3052dd
Merge branch 'main' into backends
khalatepradnya Feb 1, 2025
4f66ce8
white figure backgrounds
Feb 3, 2025
e49ecb9
Images with white backgrounds
Feb 3, 2025
1da46f2
Merge branch 'main' into backends
khalatepradnya Feb 3, 2025
a3d0950
new orca logo
Feb 4, 2025
34e5176
new orca logo
Feb 4, 2025
300b715
Merge branch 'main' into backends
khalatepradnya Feb 4, 2025
8a601da
Update docs/sphinx/using/backends/sims/photonics.rst
mawolf2023 Feb 4, 2025
817e0ff
Update docs/sphinx/using/backends/sims/photonics.rst
mawolf2023 Feb 4, 2025
6583944
Update docs/sphinx/using/examples/multi_gpu_workflows.rst
mawolf2023 Feb 4, 2025
414f2ee
Update docs/sphinx/using/examples/multi_gpu_workflows.rst
mawolf2023 Feb 4, 2025
3c66feb
Update docs/sphinx/using/examples/multi_gpu_workflows.rst
mawolf2023 Feb 4, 2025
390213d
edits 2/4
Feb 4, 2025
af50c0d
Adding Efrat's edits 2/4
Feb 4, 2025
4 changes: 4 additions & 0 deletions .github/workflows/config/spelling_allowlist.txt
Original file line number Diff line number Diff line change
@@ -84,11 +84,13 @@ POSIX
PSIRT
Pauli
Paulis
Photonic
Photonics
PyPI
Pygments
QAOA
QCaaS
QEC
QIR
QIS
QPP
@@ -109,6 +111,7 @@ SLED
SLES
SLURM
SVD
Sqale
Stim
Superpositions
Superstaq
@@ -260,6 +263,7 @@ parallelizing
parameterization
performant
photonic
photonics
precompute
precomputed
prepend
2 changes: 1 addition & 1 deletion docs/sphinx/applications/python/vqe_advanced.ipynb
@@ -480,7 +480,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, run the code again (the three previous cells) and specify `num_qpus` to be more than one if you have access to multiple GPUs and notice resulting speedup. Thanks to CUDA-Q, this code could be used without modification in a setting where multiple physical QPUs were availible."
"Now, run the code again (the three previous cells) and specify `num_qpus` to be more than one if you have access to multiple GPUs and notice resulting speedup. Thanks to CUDA-Q, this code could be used without modification in a setting where multiple physical QPUs were available."
]
},
{
171 changes: 0 additions & 171 deletions docs/sphinx/examples/python/executing_photonic_kernels.ipynb

This file was deleted.

41 changes: 34 additions & 7 deletions docs/sphinx/examples/python/measuring_kernels.ipynb
@@ -69,32 +69,59 @@
"id": "fb5dd767-5db7-4847-b04e-ae5695066800",
"metadata": {},
"source": [
"### Midcircuit Measurement and Conditional Logic\n",
"### Mid-circuit Measurement and Conditional Logic\n",
"\n",
"In certain cases, it it is helpful for some operations in a quantum kernel to depend on measurement results following previous operations. This is accomplished in the following example by performing a Hadamard on qubit 0, then measuring qubit 0 and savig the result as `b0`. Then, an if statement performs a Hadamard on qubit 1 only if `b0` is 1. Measuring this qubit 1 verifies this process as a 1 is the result 25% of the time."
"In certain cases, it it is helpful for some operations in a quantum kernel to depend on measurement results following previous operations. This is accomplished in the following example by performing a Hadamard on qubit 0, then measuring qubit 0 and saving the result as `b0`. Then, qubit 0 can be reset and used later in the computation. In this case it is flipped ot a 1. Finally, an if statement performs a Hadamard on qubit 1 if `b0` is 1. \n",
"\n",
"The results show qubit 0 is one, indicating the reset worked, and qubit 1 has a 75/25 distribution, demonstrating the mid-circuit measurement worked as expexted."
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 6,
"id": "44001a51-3733-472c-8bc1-ee694e957708",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{ \n",
" __global__ : { 10:728 11:272 }\n",
" b0 : { 0:505 1:495 }\n",
"}\n",
"\n"
]
}
],
"source": [
"@cudaq.kernel\n",
"def kernel():\n",
" q = cudaq.qvector(2)\n",
" \n",
" h(q[0])\n",
" b0 = mz(q[0])\n",
" reset(q[0])\n",
" x(q[0])\n",
" \n",
" if b0:\n",
" h(q[1])\n",
" mz(q[1])"
" h(q[1]) \n",
"\n",
"print(cudaq.sample(kernel))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d525be71-a745-43a5-a7ca-a2720c536f8c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
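The mid-circuit measurement cell above predicts qubit 0 always reading 1 (after the reset and flip) and qubit 1 reading 1 about 25% of the time. A minimal plain-Python Monte Carlo sketch of the same circuit (no `cudaq` required; the helper name `sample_midcircuit` is invented for illustration) reproduces that distribution:

```python
import random

def sample_midcircuit(shots: int, seed: int = 0) -> dict:
    """Classically simulate the kernel above: H on qubit 0, measure b0,
    reset and flip qubit 0, then conditionally apply H to qubit 1."""
    random.seed(seed)
    counts = {}
    for _ in range(shots):
        b0 = random.random() < 0.5           # H then mz on qubit 0: 50/50
        q0 = 1                               # reset + X forces qubit 0 to 1
        if b0:
            q1 = int(random.random() < 0.5)  # H on qubit 1: 50/50
        else:
            q1 = 0                           # qubit 1 untouched: always 0
        key = f"{q0}{q1}"
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = sample_midcircuit(1000)
# "10" should dominate (~75%) over "11" (~25%), matching the notebook output.
```

This is only a sanity check on the probabilities, not a substitute for running the kernel on a simulator or QPU.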
2 changes: 1 addition & 1 deletion docs/sphinx/index.rst
@@ -31,4 +31,4 @@ You are browsing the documentation for |version| version of CUDA-Q. You can find
Other Versions <versions.rst>

.. |---| unicode:: U+2014 .. EM DASH
:trim:
:trim:
2 changes: 1 addition & 1 deletion docs/sphinx/releases.rst
@@ -87,7 +87,7 @@ The full change log can be found `here <https://github.com/NVIDIA/cuda-quantum/r

**0.7.0**

The 0.7.0 release adds support for using :doc:`NVIDIA Quantum Cloud <using/backends/nvqc>`,
The 0.7.0 release adds support for using :doc:`NVIDIA Quantum Cloud <using/backends/cloud/nvqc>`,
giving you access to our most powerful GPU-accelerated simulators even if you don't have an NVIDIA GPU.
With 0.7.0, we have furthermore greatly increased expressiveness of the Python and C++ language frontends.
Check out our `documentation <https://nvidia.github.io/cuda-quantum/0.7.0/using/quick_start.html>`__
@@ -16,59 +16,67 @@
exit(0)

np.random.seed(1)
cudaq.set_target("nvidia", option="mqpu")
cudaq.set_target("nvidia")

qubit_count = 5
sample_count = 10000
h = spin.z(0)
parameter_count = qubit_count

# Below we run a circuit for 10000 different input parameters.
# prepare 10000 different input parameter sets.
parameters = np.random.default_rng(13).uniform(low=0,
high=1,
size=(sample_count,
parameter_count))

kernel, params = cudaq.make_kernel(list)

qubits = kernel.qalloc(qubit_count)
qubits_list = list(range(qubit_count))
@cudaq.kernel
def kernel(params: list[float]):

qubits = cudaq.qvector(5)

for i in range(5):
rx(params[i], qubits[i])


for i in range(qubit_count):
kernel.rx(params[i], qubits[i])
# [End prepare]

# [Begin single]
import timeit
import time

start_time = time.time()
cudaq.observe(kernel, h, parameters)
end_time = time.time()
print(end_time - start_time)

timeit.timeit(lambda: cudaq.observe(kernel, h, parameters),
number=1) # Single GPU result.
# [End single]

# [Begin split]
print('We have', parameters.shape[0],
'parameters which we would like to execute')
print('There are', parameters.shape[0], 'parameter sets to execute')

xi = np.split(
parameters,
4) # We split our parameters into 4 arrays since we have 4 GPUs available.
4) # Split the parameters into 4 arrays since 4 GPUs are available.

print('We split this into', len(xi), 'batches of', xi[0].shape[0], ',',
print('Split parameters into', len(xi), 'batches of', xi[0].shape[0], ',',
xi[1].shape[0], ',', xi[2].shape[0], ',', xi[3].shape[0])
# [End split]
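The batching step above depends on `np.split` dividing the parameter array evenly. A self-contained sketch of the same split, using the identical seed and shape as the example:

```python
import numpy as np

# Same parameter set as the example: 10000 sets of 5 angles in [0, 1).
parameters = np.random.default_rng(13).uniform(low=0, high=1, size=(10000, 5))

# np.split raises ValueError unless the first axis divides evenly by 4.
batches = np.split(parameters, 4)

print(len(batches), batches[0].shape)  # 4 equal batches of 2500 parameter sets
```

Stacking the batches back together recovers the original array, so no parameter set is dropped or duplicated by the split.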

# [Begin multiple]
# Timing the execution on a single GPU vs 4 GPUs,
# one will see a 4x performance improvement if 4 GPUs are available.
# one will see a nearly 4x performance improvement if 4 GPUs are available.

cudaq.set_target("nvidia", option="mqpu")
asyncresults = []
num_gpus = cudaq.num_available_gpus()

start_time = time.time()
for i in range(len(xi)):
for j in range(xi[i].shape[0]):
qpu_id = i * num_gpus // len(xi)
asyncresults.append(
cudaq.observe_async(kernel, h, xi[i][j, :], qpu_id=qpu_id))

result = [res.get() for res in asyncresults]
end_time = time.time()
print(end_time - start_time)
# [End multiple]
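The `qpu_id = i * num_gpus // len(xi)` arithmetic in the loop above decides which virtual QPU each batch lands on. A standalone sketch of that mapping (assuming a hypothetical 4-GPU node, so no `cudaq` call is needed) makes the assignment explicit:

```python
num_gpus = 4     # assumption: what cudaq.num_available_gpus() would return here
num_batches = 4  # len(xi) in the example above

# Same mapping as the loop: batch i is dispatched to qpu_id = i * num_gpus // num_batches.
assignment = {i: i * num_gpus // num_batches for i in range(num_batches)}

print(assignment)  # one batch per GPU: {0: 0, 1: 1, 2: 2, 3: 3}
```

With equal batch and GPU counts this is a one-to-one assignment; with more batches than GPUs, the integer division spreads consecutive batches evenly across the available devices.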