Commit 50d75a9

English suggestions for hybrid QNN
1 parent 25f01e0 commit 50d75a9

File tree

1 file changed: +25 -19 lines changed


algorithms/qml/hybrid_qnn/hybrid_qnn_for_subset_majority.ipynb

Lines changed: 25 additions & 19 deletions
@@ -21,20 +21,20 @@
 "id": "55d39605-9fa4-4aab-b709-23889e9771d0",
 "metadata": {},
 "source": [
-"Neural networks is one of the major branches in machine learning, with wide usa in applications and research. A neural network, or more generally a deep neural network, is a parametric function of a specific structure (inspired from biological neural networks in biology), which is trained to capture specific functionality.\n",
+"Neural networks are one of the major branches of machine learning, with wide use in applications and research. A neural network—or, more generally, a deep neural network—is a parametric function of a specific structure (inspired by neural networks in biology), which is trained to capture specific functionality.\n",
 "\n",
 "In its most basic form, a neural network for learning a function $\\vec{f}: \\mathbb{R}^N\\rightarrow \\mathbb{R}^M$ looks as follows:\n",
 "1. There is an input vector of size $N$ (red circles in Fig. 1).\n",
-"2. Each entry of the input goes into a hidden layer of size $K$, where each neuron (blue circles in Fig. 1) is defined with some \"activation function\" $y^{k}(\\vec{w}^{(1)}; \\vec{x})$ for $k=1,\\dots,K$, and $\\vec{w}^{(1)}$ are parameters.\n",
-"3. Finally, the output of the hidden layer is sent to the output layer (green circles in Fig. 1) $\\tilde{f}^{m}(\\vec{w}^{(2)};\\vec{y})$ for $m=1,\\dots,M$, and $\\vec{w}^{(2)}$ are parameters.\n",
+"2. Each entry of the input goes into a hidden layer of size $K$, where each neuron (blue circles in Fig. 1) is defined with an \"activation function\" $y^{k}(\\vec{w}^{(1)}; \\vec{x})$ for $k=1,\\dots,K$, and $\\vec{w}^{(1)}$ are parameters.\n",
+"3. The output of the hidden layer is sent to the output layer (green circles in Fig. 1) $\\tilde{f}^{m}(\\vec{w}^{(2)};\\vec{y})$ for $m=1,\\dots,M$, and $\\vec{w}^{(2)}$ are parameters.\n",
 "\n",
 "The output $\\vec{\\tilde{f}}$ is thus a parametric function (in $\\vec{w}^{(1)},\\,\\vec{w}^{(2)}$), which can be trained to capture the target function $\\vec{f}$.\n",
 "\n",
 "\n",
 "![png](neural_network.png)\n",
 "\n",
 "<center>\n",
-"<figcaption align = \"middle\"> Figure 1. A single layer classical neural network (taken from Wikipedia). Here the input size is $N=3$, the output size is $M=3$, and the hidden layer has $L=4$ neurons. </figcaption>\n",
+"<figcaption align = \"middle\"> Figure 1. A single-layer classical neural network (from Wikipedia). Here, the input size is $N=3$, the output size is $M=3$, and the hidden layer has $K=4$ neurons. </figcaption>\n",
 "</center>"
 ]
},
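For illustration, the single-layer network described in this cell can be sketched in NumPy. This is a minimal sketch with `tanh` as the activation and random, untrained weights; it is not code from the notebook:

```python
import numpy as np

def single_layer_nn(x, W1, b1, W2, b2):
    """Input (size N) -> hidden layer (size K, tanh activation) -> output (size M)."""
    y = np.tanh(W1 @ x + b1)  # hidden neurons y^k(w^(1); x), k = 1..K
    return W2 @ y + b2        # output entries f~^m(w^(2); y), m = 1..M

rng = np.random.default_rng(seed=0)
N, K, M = 3, 4, 3  # the sizes of Fig. 1
W1, b1 = rng.normal(size=(K, N)), np.zeros(K)
W2, b2 = rng.normal(size=(M, K)), np.zeros(M)

out = single_layer_nn(np.array([1.0, 0.5, -0.3]), W1, b1, W2, b2)
print(out.shape)  # (3,)
```

Training such a network means adjusting $\vec{w}^{(1)},\,\vec{w}^{(2)}$ (here `W1, b1, W2, b2`) to minimize a loss between the output and the target function.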
@@ -45,7 +45,7 @@
 "jp-MarkdownHeadingCollapsed": true
 },
 "source": [
-"**Deep neural networks** - Deep neural networks are similar to the description above, having more than one hidden layer. This provides a more complex structure that can capture more complex functionalities."
+"**Deep neural networks** are similar to the description above, but with more than one hidden layer. This provides a richer structure that can capture more complex functionalities."
 ]
},
{
@@ -55,16 +55,16 @@
 "jp-MarkdownHeadingCollapsed": true
 },
 "source": [
-"### Quantum Neural Networks\n",
+"## Quantum Neural Networks\n",
 "\n",
-"The idea of a quantum neural network refers to combining parametric circuits as a replacement to all, or part of, classical layers in classical neural networks. The basic object in QNN is thus a **quantum layer**. A quantum layer has a classical input and it returns a classical output. The output is obtained by running a quantum program. A quantum layer is thus composed of three parts:\n",
+"A quantum neural network (QNN) replaces all or some of the layers of a classical neural network with parametric quantum circuits. The basic object in a QNN is thus a **quantum layer**, which takes a classical input and returns a classical output obtained by running a quantum program. A quantum layer is composed of three parts:\n",
 "1. A quantum part that encodes the input: This is a parametric quantum function for representing the entries of a single data point. There are three canonical ways to encode a data vector of size $N$: [angle-encoding](https://github.com/Classiq/classiq-library/blob/main/functions/qmod_library_reference/classiq_open_library/variational_data_encoding/variational_data_encoding.ipynb) using $N$ qubits, [dense angle-encoding](https://github.com/Classiq/classiq-library/blob/main/functions/qmod_library_reference/classiq_open_library/variational_data_encoding/variational_data_encoding.ipynb) using $\\lceil N/2\\rceil$ qubits, and amplitude-encoding using $\\lceil\\log_2N\\rceil$ qubits.\n",
 "2. A quantum ansatz part: This is a parametric quantum function, whose parameters are trained as the weights in classical layers.\n",
 "3. A postprocess classical part, for returning an output classical vector.\n",
 "\n",
 "The integration of quantum layers in classical neural networks may offer reduction in resources for a given functionality, as the network (or part of it) is expressed via the Hilbert space, providing different expressibility compared to classical networks.\n",
 "\n",
-"This notebook demonstrates QNN by treating a specific function, the subset majority, for which we construct, train, and verify a hybrid classical-quantum neural network. The notebook assumes familiarity with Classiq and NN with PyTorch. See [QML guide with Classiq](https://github.com/Classiq/classiq-library/blob/main/tutorials/documentation_materials/user_guide/qml_with_classiq_guide/qml_with_classiq_guide.ipynb) ."
+"This notebook demonstrates a QNN by treating a specific function—the subset majority—for which we construct, train, and verify a hybrid classical-quantum neural network. The notebook assumes familiarity with Classiq and with neural networks in PyTorch; see the [QML guide with Classiq](https://github.com/Classiq/classiq-library/blob/main/tutorials/documentation_materials/user_guide/qml_with_classiq_guide/qml_with_classiq_guide.ipynb)."
 ]
},
{
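The qubit counts of the three canonical encodings quoted in this cell can be checked directly (a small sketch, not notebook code):

```python
from math import ceil, log2

def encoding_qubits(N):
    """Number of qubits needed to encode a data vector of size N."""
    return {
        "angle": N,                  # one feature per qubit
        "dense_angle": ceil(N / 2),  # two features per qubit
        "amplitude": ceil(log2(N)),  # features stored as amplitudes
    }

print(encoding_qubits(10))  # {'angle': 10, 'dense_angle': 5, 'amplitude': 4}
```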
@@ -75,7 +75,7 @@
 "## Example: Hybrid Neural Network for the Subset Majority Function\n",
 "\n",
 "\n",
-"For an integer $N$ and a given subset of indices $S \\subset \\{0,1,\\dots,N\\}$ we define the subset majority function, $M_{S}:\\{0,1\\}^{\\times N}\\rightarrow \\{0,1\\}$ that acts on binary strings of size $N$ as follows: it returns 1 if the number of ones within the a substring according to $S$ are larger than $|S|//2$, and 0 otherwise,\n",
+"For an integer $N$ and a given subset of indices $S \\subset \\{0,1,\\dots,N-1\\}$, we define the subset majority function $M_{S}:\\{0,1\\}^{\\times N}\\rightarrow \\{0,1\\}$, which acts on binary strings of size $N$ as follows: it returns 1 if the number of ones in the substring selected by $S$ is larger than $|S|//2$, and 0 otherwise,\n",
 "$$\n",
 "M_S(\\vec{b}) = \\left\\{ \\begin{array}{l l }\n",
 "1 & \\text{if } \\sum_{j\\in S} b_{j}>|S|//2, \\\\\n",
@@ -84,7 +84,7 @@
 "\\right .\n",
 "$$\n",
 "\n",
-"For example, we consider $N=7$ and $S=\\{0,1,4\\}$.\n",
+"For example, we consider $N=7$ and $S=\\{0,1,4\\}$:\n",
 "* The string 0101110 corresponds to the substring 011, for which the number of ones is 2 (>1). Therefore, $M_S(0101110)=1$.\n",
 "* The string 0011111 corresponds to the substring 001, for which the number of ones is 1 (=1). Therefore, $M_S(0011111)=0$."
 ]
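The definition and the two worked examples in this cell translate directly into code (a small sketch for checking them):

```python
def subset_majority(bits, S):
    """M_S(b): 1 if the number of ones among the bits indexed by S exceeds |S|//2."""
    ones = sum(int(bits[j]) for j in S)
    return 1 if ones > len(S) // 2 else 0

S = {0, 1, 4}
print(subset_majority("0101110", S))  # 1  (substring 011: two ones, 2 > 1)
print(subset_majority("0011111", S))  # 0  (substring 001: one one, 1 = 1)
```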
@@ -94,7 +94,7 @@
 "id": "ce5f70ee-11d1-44f9-9f12-219515f1250e",
 "metadata": {},
 "source": [
-"### Generating data for a specific example\n",
+"### Generating Data for a Specific Example\n",
 "\n",
 "Let us consider a specific example for our demonstration. We choose $N=10$ and generate all $2^N$ possible bit strings as data. We also take a specific subset $S=\\{1, 3, 4, 6, 7, 9\\}$."
 ]
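The dataset described in this cell can be generated with `itertools`; this is a sketch of the data-generation step, and the notebook's own code may differ:

```python
from itertools import product

N = 10
S = {1, 3, 4, 6, 7, 9}

def subset_majority(bits, S):
    ones = sum(bits[j] for j in S)
    return 1 if ones > len(S) // 2 else 0

data = list(product([0, 1], repeat=N))  # all 2^N = 1024 bit strings
labels = [subset_majority(b, S) for b in data]

print(len(data), sum(labels))  # 1024 352
```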
@@ -208,15 +208,15 @@
 "id": "a5dd682a-73c5-4954-9fc2-934ae187b920",
 "metadata": {},
 "source": [
-"### Constructing a hybrid network"
+"### Constructing a Hybrid Network"
 ]
},
{
 "cell_type": "markdown",
 "id": "583a05af-fde0-4bc1-9d03-6574cd96cc8a",
 "metadata": {},
 "source": [
-"We will build the following hybrid neural network:\n",
+"We build the following hybrid neural network:\n",
 "\n",
 "**Data flattening $\\rightarrow$ a classical linear layer of size 10 to 4 with `Tanh` activation $\\rightarrow$ a qlayer of size 4 to 2 $\\rightarrow$ a classical linear layer of size 2 to 1 with `ReLU` activation.**"
 ]
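The data flow of the architecture above can be sketched in NumPy; the quantum layer is replaced here by a hypothetical classical stand-in (`qlayer_stub`), since the real qlayer is built with Classiq later in the notebook:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def qlayer_stub(x, theta):
    # Hypothetical stand-in for the 4 -> 2 quantum layer; only the shapes
    # match the real construction, not its behavior.
    return np.cos(theta @ x)

# Shapes follow the architecture: flatten -> linear 10 -> 4 -> qlayer 4 -> 2 -> linear 2 -> 1.
W1, b1 = rng.normal(size=(4, 10)), np.zeros(4)
theta = rng.normal(size=(2, 4))
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)

def hybrid_forward(x):
    h = np.tanh(W1 @ x.ravel() + b1)     # flatten + linear 10 -> 4 with Tanh
    q = qlayer_stub(h, theta)            # qlayer 4 -> 2
    return np.maximum(W2 @ q + b2, 0.0)  # linear 2 -> 1 with ReLU

out = hybrid_forward(rng.normal(size=10))
print(out.shape)  # (1,)
```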
@@ -226,15 +226,21 @@
 "id": "371cb77f-efac-4203-976a-94d426d9826c",
 "metadata": {},
 "source": [
-"The classical layers can be defined with PyTorch built-in functions. The quantum layer is constructed with (1) a [dense angle-encoding](https://github.com/Classiq/classiq-library/blob/main/functions/qmod_library_reference/classiq_open_library/variational_data_encoding/variational_data_encoding.ipynb) function, (2) a simple ansatz with RY and RZZ rotations, and (3) a postprocess that is based on a measurement per qubit."
+"The classical layers can be defined with PyTorch built-in functions. The quantum layer is constructed with:\n",
+"\n",
+"1. a [dense angle-encoding](https://github.com/Classiq/classiq-library/blob/main/functions/qmod_library_reference/classiq_open_library/variational_data_encoding/variational_data_encoding.ipynb) function,\n",
+"\n",
+"2. a simple ansatz with RY and RZZ rotations, and\n",
+"\n",
+"3. a postprocess based on a measurement per qubit."
 ]
},
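As a toy illustration of part (1), dense angle encoding can be simulated with NumPy. The convention below (qubit $k$ gets $RY(x_{2k})$ followed by $RZ(x_{2k+1})$ applied to $|0\rangle$) is one common choice and may differ from the notebook's exact definition:

```python
import numpy as np

def dense_angle_state(x):
    """Encode len(x) features into len(x)/2 qubits: RY(x[2k]) then RZ(x[2k+1])."""
    assert len(x) % 2 == 0
    state = np.array([1.0 + 0.0j])
    for k in range(len(x) // 2):
        ry, rz = x[2 * k], x[2 * k + 1]
        qubit = np.array([np.cos(ry / 2), np.sin(ry / 2)], dtype=complex)  # RY on |0>
        qubit *= np.array([np.exp(-1j * rz / 2), np.exp(1j * rz / 2)])     # RZ phases
        state = np.kron(state, qubit)
    return state

psi = dense_angle_state([0.3, 1.2, -0.7, 0.4])   # 4 features -> 2 qubits
print(psi.shape, round(float(np.linalg.norm(psi)), 6))  # (4,) 1.0
```

The resulting state is normalized by construction, since each single-qubit factor has unit norm.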
232238
{
233239
"cell_type": "markdown",
234240
"id": "e8b2e803-7d75-474e-ae31-bfb4de049ae4",
235241
"metadata": {},
236242
"source": [
237-
"#### The quantum layer"
243+
"#### The Quantum Layer"
238244
]
239245
},
240246
{
@@ -328,7 +334,7 @@
 "id": "88b4321f-ccfe-4b99-b480-475720157356",
 "metadata": {},
 "source": [
-"#### The full hybrid network"
+"#### The Full Hybrid Network"
 ]
},
{
@@ -391,15 +397,15 @@
 "id": "d822e3b0-3ee7-49a7-86ac-86d2cd6fbc93",
 "metadata": {},
 "source": [
-"### Training and verifying the networks"
+"### Training and Verifying the Networks"
 ]
},
{
 "cell_type": "markdown",
 "id": "fcfcb0fe-65f0-426a-a5b8-6c3ca56f33f2",
 "metadata": {},
 "source": [
-"We define some hyperparameters such as loss function and optimization method, and a training function."
+"We define some hyperparameters, such as the loss function and the optimization method, and a training function:"
 ]
},
{
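The ingredients named in this cell (a loss function, an optimization method, and a training function) can be illustrated with a minimal NumPy training loop on a plain linear model; the notebook itself uses PyTorch loss and optimizer objects:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Toy regression data (not the subset-majority dataset).
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

def train(X, y, lr=0.1, epochs=200):
    """Gradient descent on MSE loss; returns trained weights and final loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        err = X @ w - y                  # predictions minus targets
        loss = np.mean(err ** 2)         # MSE loss
        grad = 2.0 * X.T @ err / len(y)  # gradient of the loss in w
        w -= lr * grad                   # optimizer step
    return w, loss

w, final_loss = train(X, y)
print(final_loss < 1e-6)  # True
```

The same structure (forward pass, loss, gradient, optimizer step, epoch loop) carries over to the hybrid network, with the gradient of the quantum layer supplied by the quantum backend.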
@@ -532,7 +538,7 @@
 "id": "c7307920-333a-4fc1-8a26-b76e266925ec",
 "metadata": {},
 "source": [
-"### Training and verifying the network"
+"### Training and Verifying the Network"
 ]
},
{
