Commit af50c0d
Author: mawolf2023
Commit message: Adding Efrat's edits 2/4
Merge branch 'backends' of https://github.com/mawolf2023/cuda-quantum into backends
2 parents: 390213d + 3c66feb

2 files changed: +5, -5 lines

docs/sphinx/using/backends/sims/photonics.rst

Lines changed: 2 additions & 2 deletions
@@ -202,7 +202,7 @@ Hong-Ou-Mandel effect.
 
 Executing Photonics Kernels
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In order to execute a photonics kernel, you need to specific a photonics simulator backend like :code:`orca-photonics` used in the example below.
+To execute a photonics kernel, you need to specify a photonics simulator backend such as :code:`orca-photonics`, used in the example below.
 There are two ways to execute photonics kernels: :code:`sample` and :code:`get_state`.
 
 
@@ -282,7 +282,7 @@ The :code:`get_state` command can be used to generate statistics about the quant
 # Compute the statevector of the kernel
 result = cudaq.get_state(kernel, qumode_count)
 
-print(np.array(result))k
+print(np.array(result))
 
 
 .. parsed-literal::

docs/sphinx/using/examples/multi_gpu_workflows.rst

Lines changed: 3 additions & 3 deletions
@@ -28,7 +28,7 @@ You can run a state vector simulation using your CPU with the :code:`qpp-cpu` ba
 
 { 00:475 11:525 }
 
-As the number of qubits increases to even modest size, the CPU simulation will become impractically slow. By switching to the :code:`nvidia` backend, you can accelerate the same code on a single GPU and achieve a speedup of up to **2500x**. If you have a GPU available, this the default backend to ensure maximum productivity.
+As the number of qubits increases to even a modest size, the CPU simulation will become impractically slow. By switching to the :code:`nvidia` backend, you can accelerate the same code on a single GPU and achieve a speedup of up to **425x**. If you have a GPU available, this is the default backend, ensuring maximum productivity.
 
 .. literalinclude:: ../../snippets/python/using/examples/multi_gpu_workflows/multiple_targets.py
    :language: python
@@ -69,7 +69,7 @@ Parallel execution over multiple QPUs (`mqpu`)
 Batching Hamiltonian Terms
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Multiple GPUs can also come in handy for cases where applications might benefit from multiple QPUs running asynchronously. The `mqpu` backend uses multiple GPUs to simulate each QPU so you can test and accelerate quantum applications with parallelization.
+Multiple GPUs can also come in handy for cases where applications might benefit from multiple QPUs running in parallel. The `mqpu` backend uses multiple GPUs to simulate QPUs, so you can accelerate quantum applications with parallelization.
 
 
 .. image:: images/mqpu.png
@@ -152,7 +152,7 @@ Multi-QPU + Other Backends (`remote-mqpu`)
 -------------------------------------------
 
 
-The `mqpu` backend can be extended so that each parallel simulated QPU can be simulated with backends other than :code:`nvidia`. This provides a way to simulate larger scale circuits and execute parallel algorithms. This accomplished by launching remotes servers which each simulated a QPU.
+The `mqpu` backend can be extended so that each parallel simulated QPU runs a backend other than :code:`nvidia`. This provides a way to simulate larger-scale circuits and execute parallel algorithms. This is accomplished by launching remote servers, each of which simulates a QPU.
 The code example below demonstrates this using the :code:`tensornet-mps` backend, which allows sampling of a 40-qubit circuit too large for state vector simulation. In this case, the target is specified as :code:`remote-mqpu` while an additional :code:`backend` is specified for the simulator used by each QPU.
 
 The default approach uses one GPU per QPU and can both launch and close each server automatically. This is accomplished by specifying :code:`auto_launch` and :code:`url` within :code:`cudaq.set_target`. Running the script below will then sample the 40-qubit circuit using two QPUs, each running :code:`tensornet-mps`.
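The auto-launch setup described above can be sketched as a configuration fragment like the following. This assumes the :code:`backend` and :code:`auto_launch` keyword names as described in the surrounding text, and it is not runnable without GPUs and the remote server processes, so treat it as a sketch rather than a verified script.

```python
import cudaq

# Configuration sketch (assumed keyword names; requires GPUs and the
# remote QPU server infrastructure to actually run):
cudaq.set_target(
    "remote-mqpu",
    backend="tensornet-mps",  # simulator each remote QPU runs
    auto_launch="2",          # launch (and later close) two servers
)

# Sampling then proceeds per QPU, e.g. asynchronously:
# future = cudaq.sample_async(kernel, qpu_id=0)
# counts = future.get()
```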
