
Commit 21789b7

English suggestions for autoencoder
1 parent 25f01e0 commit 21789b7

File tree

1 file changed

+26
-22
lines changed


algorithms/qml/quantum_autoencoder/quantum_autoencoder.ipynb

Lines changed: 26 additions & 22 deletions
Original file line numberDiff line numberDiff line change
@@ -13,11 +13,13 @@
1313
"id": "201ce387",
1414
"metadata": {},
1515
"source": [
16-
"### Classical encoders:\n",
16+
"## Encoder Types\n",
17+
"\n",
18+
"### Classical Encoders\n",
1719
"Refer to encoding/compressing classical data into a smaller sized data via deterministic algorithm. For example, JPEG is essentially an algorithm which compresses images into a smaller sized images.\n",
1820
"\n",
19-
"### Classical auto-encoders:\n",
20-
"One can use machine-learning technics and train a variational network for compressing data. In general, an auto-encoder network looks as follows:\n",
21+
"### Classical Autoencoders\n",
22+
"One can use machine-learning technics and train a variational network for compressing data. In general, an autoencoder network looks as follows:\n",
2123
"\n",
2224
"<center>\n",
2325
"<img src=\"https://docs.classiq.io/resources/Autoencoder_structure.png\" style=\"width:50%\">\n",
@@ -35,8 +37,8 @@
3537
"id": "5be9dbfd",
3638
"metadata": {},
3739
"source": [
38-
"### Quantum auto-encoders:\n",
39-
"In a similar fashion to the classical counterpart, quantum auto-encoder refers to \"compressing\" quantum data stored initially on $n$ qubits into a smaller quantum register of $m<n$ qubits, via variational circuit. However, quantum computing is reversibale, and thus qubits cannot be \"erased\". Therefore, alternatively, a quantum autoencoder tries to acheive the following transformation from uncoded quantum register of size $n$ to a coded one of size $m$:\n",
40+
"### Quantum Autoencoders\n",
41+
"In a similar fashion to the classical counterpart, quantum autoencoder refers to \"compressing\" quantum data stored initially on $n$ qubits into a smaller quantum register of $m<n$ qubits, via variational circuit. However, quantum computing is reversibale, and thus qubits cannot be \"erased\". Therefore, alternatively, a quantum autoencoder tries to acheive the following transformation from uncoded quantum register of size $n$ to a coded one of size $m$:\n",
4042
"$$\n",
4143
"|\\psi\\rangle_n \\rightarrow |\\psi'\\rangle_m|0\\rangle_{n-m}\n",
4244
"$$\n",
@@ -53,11 +55,11 @@
5355
"id": "e40c41f0",
5456
"metadata": {},
5557
"source": [
56-
"# Training of quantum auto encoders\n",
58+
"## Training of Quantum Autoencoders\n",
5759
"\n",
5860
"To train a quantum auto encoder one should define a proper cost function. Below we propose two common approaches, one using a swap test and the other uses Hamiltonian measurements. We focus on the swap test case, and comment on the other approach at the end of this notebook.\n",
5961
"\n",
60-
"## The swap test\n",
62+
"### The Swap Test\n",
6163
"\n",
6264
"The swap test is a quantum function which checks the overlap between two quantum states: the inputs of the function are two quantum registers of the same size, $|\\psi_1\\rangle, \\,|\\psi_2\\rangle$, and it returns as an output a single \"test\" qubit whose state encodes the overlap between the two inputs: $|q\\rangle_{\\rm test} = \\alpha|0\\rangle + \\sqrt{1-\\alpha^2}|1\\rangle$, with\n",
6365
"$$\n",
@@ -81,7 +83,7 @@
8183
"id": "443577d1",
8284
"metadata": {},
8385
"source": [
84-
"## Quantum neural network for quantum auto encoder\n",
86+
"### Quantum Neural Networks for Quantum Autoencoders\n",
8587
"\n",
8688
"The quantum auto encoder can be built as a quantum neural network, having the following three parts:\n",
8789
"\n",
@@ -102,7 +104,7 @@
102104
"id": "6a685c3b",
103105
"metadata": {},
104106
"source": [
105-
"# Pre-user-defined functions which will be used to construct the quantum layer\n",
107+
"## Pre-user-defined Functions That Construct the Quantum Layer\n",
106108
"\n",
107109
"As a first step we build some user-defined functions which allow us flexible modeling. We have three functions:\n",
108110
"1. `angle_encoding`: This function loads data of size `num_qubits` on `num_qubits` qubits via RY gates. It has an output port named `qpv`.\n",
@@ -208,7 +210,7 @@
208210
"id": "493d4499",
209211
"metadata": {},
210212
"source": [
211-
"# An example: auto encoder for domain wall data"
213+
"## Example: Autoencoder for Domain Wall Data"
212214
]
213215
},
214216
{
@@ -224,7 +226,7 @@
224226
"id": "535d9181",
225227
"metadata": {},
226228
"source": [
227-
"## The data"
229+
"### The Data"
228230
]
229231
},
230232
{
@@ -263,7 +265,7 @@
263265
"id": "0a09e977",
264266
"metadata": {},
265267
"source": [
266-
"## The quantum program"
268+
"### The Quantum Program"
267269
]
268270
},
269271
{
@@ -382,7 +384,7 @@
382384
"id": "5a41ccaa",
383385
"metadata": {},
384386
"source": [
385-
"## The network\n",
387+
"### The Network\n",
386388
"\n",
387389
"The network for training contains only a quantum layer. The corresponding quantum program was already defined above, what is left is to define some execution preferences and the classical post-process. The classical output is defined as $1-\\alpha^2$, with $\\alpha$ being the probability of the test qubit to be at state 0."
388390
]
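Following the definition above, the classical post-process can be sketched as a stand-alone function (hypothetical; in the notebook it is wired into the qlayer's execution pipeline):

```python
def post_process(p_zero):
    """Classical network output 1 - alpha^2, where alpha is taken to be
    the probability of the test qubit being in state 0 (per the text above)."""
    return 1 - p_zero ** 2
```

A perfectly trained encoder drives the test-qubit statistic to 1, so the network output vanishes.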
@@ -494,7 +496,7 @@
494496
"id": "7590c55d",
495497
"metadata": {},
496498
"source": [
497-
"## Creating dataset\n",
499+
"### Creating the Dataset\n",
498500
"\n",
499501
"The cost function we would like to minimize is $|1-\\alpha^2|$ for all our training data. Looking at the qlayer output this means that we should define the corresponding labels as $0$."
500502
]
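Domain-wall data can be generated, for example, as all bitstrings of the form $1\dots10\dots0$; this particular construction and the register size are illustrative (the notebook defines its own dataset), with all labels set to zero as explained above:

```python
import numpy as np

def domain_wall_data(num_qubits):
    """All bitstrings with a single 1...10...0 domain-wall structure."""
    data = [[1] * k + [0] * (num_qubits - k) for k in range(num_qubits + 1)]
    return np.array(data, dtype=float)

data = domain_wall_data(4)
# Zero labels: the trained qlayer output |1 - alpha^2| should vanish on this data.
labels = np.zeros(len(data))
```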
@@ -551,7 +553,7 @@
551553
"id": "608173a0",
552554
"metadata": {},
553555
"source": [
554-
"## Define the training"
556+
"### Defining the Training"
555557
]
556558
},
557559
{
@@ -596,7 +598,7 @@
596598
"id": "ee98061f",
597599
"metadata": {},
598600
"source": [
599-
"## Setting some hyper-parameters\n",
601+
"### Setting Hyper-parameters\n",
600602
"\n",
601603
"The L1 loss function fits the intended cost function we aim to minimize."
602604
]
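With zero labels, the L1 loss is exactly the mean of $|1-\alpha^2|$ over the batch, which is why it matches the intended cost function. A quick NumPy check with hypothetical values:

```python
import numpy as np

alpha_sq = np.array([0.9, 0.98, 1.0])   # hypothetical test-qubit statistics
qlayer_out = 1 - alpha_sq               # network output, as defined above
labels = np.zeros_like(qlayer_out)

# L1 loss against zero labels equals mean |1 - alpha^2|, the cost we minimize.
l1_loss = np.mean(np.abs(qlayer_out - labels))
```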
@@ -625,7 +627,7 @@
625627
"id": "e38d76f1",
626628
"metadata": {},
627629
"source": [
628-
"## Training\n",
630+
"### Training\n",
629631
"\n",
630632
"In this demo we will initialize the network with trained parameters, and run only 1 epoch for demonstration. A reasonable training with the above hyper-parameters can be achieved with $\\sim 40$ epochs. To train the network from the beginning uncomment the following code line:"
631633
]
@@ -685,7 +687,7 @@
685687
"id": "7fb11e18",
686688
"metadata": {},
687689
"source": [
688-
"## Verification\n",
690+
"### Verification\n",
689691
"\n",
690692
"Once we trained our network, we can build a new network with the trained variables. We can thus verify our encoder by taking only the encoding block, changing post_process, etc.\n",
691693
"\n",
@@ -708,7 +710,7 @@
708710
"id": "812d2bc5",
709711
"metadata": {},
710712
"source": [
711-
"### We start with building the quantum layer for the validator"
713+
"### Building the Quantum Layer for the Validator"
712714
]
713715
},
714716
{
@@ -840,7 +842,9 @@
840842
"id": "7f3f9983",
841843
"metadata": {},
842844
"source": [
843-
"### Next, we define the classical output of the network. For the validator post-process we take the output with the maximal counts."
845+
"### Defining the Classical Output of the Network \n",
846+
"\n",
847+
"For the validator postprocessing, we take the output with the maximal counts:"
844848
]
845849
},
846850
{
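Taking the output with the maximal counts can be sketched as follows; the counts dictionary here is hypothetical, standing in for the measurement histogram returned by executing the validator's quantum program:

```python
def most_probable_state(counts):
    """Return the measured bitstring with the maximal counts."""
    return max(counts, key=counts.get)

# Hypothetical measurement histogram from the validator circuit.
counts = {"1100": 912, "1000": 61, "1110": 27}
```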
@@ -890,7 +894,7 @@
890894
"id": "39a94e36",
891895
"metadata": {},
892896
"source": [
893-
"### We create the network and assign the trained parameters"
897+
"### Creating the Network and Assigning the Trained Parameters"
894898
]
895899
},
896900
{
@@ -989,7 +993,7 @@
989993
"id": "6dca60d3",
990994
"metadata": {},
991995
"source": [
992-
"# Usage for anomaly detection"
996+
"## Detecting Anomalies"
993997
]
994998
},
995999
{
