|
13 | 13 | "id": "201ce387", |
14 | 14 | "metadata": {}, |
15 | 15 | "source": [ |
16 | | - "### Classical encoders:\n", |
| 16 | + "## Encoder Types\n", |
| 17 | + "\n", |
| 18 | + "### Classical Encoders\n", |
17 | 19 | "These refer to encoding/compressing classical data into smaller-sized data via a deterministic algorithm. For example, JPEG is essentially an algorithm that compresses an image into a smaller-sized image.\n",
18 | 20 | "\n", |
19 | | - "### Classical auto-encoders:\n", |
20 | | - "One can use machine-learning technics and train a variational network for compressing data. In general, an auto-encoder network looks as follows:\n", |
| 21 | + "### Classical Autoencoders\n", |
| 22 | + "One can use machine-learning techniques and train a variational network for compressing data. In general, an autoencoder network looks as follows:\n",
21 | 23 | "\n", |
22 | 24 | "<center>\n", |
23 | 25 | "<img src=\"https://docs.classiq.io/resources/Autoencoder_structure.png\" style=\"width:50%\">\n", |
|
35 | 37 | "id": "5be9dbfd", |
36 | 38 | "metadata": {}, |
37 | 39 | "source": [ |
38 | | - "### Quantum auto-encoders:\n", |
39 | | - "In a similar fashion to the classical counterpart, quantum auto-encoder refers to \"compressing\" quantum data stored initially on $n$ qubits into a smaller quantum register of $m<n$ qubits, via variational circuit. However, quantum computing is reversibale, and thus qubits cannot be \"erased\". Therefore, alternatively, a quantum autoencoder tries to acheive the following transformation from uncoded quantum register of size $n$ to a coded one of size $m$:\n", |
| 40 | + "### Quantum Autoencoders\n", |
| 41 | + "In a similar fashion to its classical counterpart, a quantum autoencoder refers to \"compressing\" quantum data stored initially on $n$ qubits into a smaller quantum register of $m<n$ qubits via a variational circuit. However, quantum computing is reversible, and thus qubits cannot be \"erased\". Instead, a quantum autoencoder tries to achieve the following transformation from an uncoded quantum register of size $n$ to a coded one of size $m$:\n",
40 | 42 | "$$\n", |
41 | 43 | "|\\psi\\rangle_n \\rightarrow |\\psi'\\rangle_m|0\\rangle_{n-m}\n", |
42 | 44 | "$$\n", |
|
53 | 55 | "id": "e40c41f0", |
54 | 56 | "metadata": {}, |
55 | 57 | "source": [ |
56 | | - "# Training of quantum auto encoders\n", |
| 58 | + "## Training of Quantum Autoencoders\n", |
57 | 59 | "\n", |
58 | 60 | "To train a quantum autoencoder, one should define a proper cost function. Below we propose two common approaches: one uses a swap test and the other uses Hamiltonian measurements. We focus on the swap-test case and comment on the other approach at the end of this notebook.\n",
59 | 61 | "\n", |
60 | | - "## The swap test\n", |
| 62 | + "### The Swap Test\n", |
61 | 63 | "\n", |
62 | 64 | "The swap test is a quantum function which checks the overlap between two quantum states: the inputs of the function are two quantum registers of the same size, $|\\psi_1\\rangle, \\,|\\psi_2\\rangle$, and it returns as an output a single \"test\" qubit whose state encodes the overlap between the two inputs: $|q\\rangle_{\\rm test} = \\alpha|0\\rangle + \\sqrt{1-\\alpha^2}|1\\rangle$, with\n", |
63 | 65 | "$$\n", |
|
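The overlap relation elided above is the standard swap-test identity $\alpha^2 = \left(1 + |\langle\psi_1|\psi_2\rangle|^2\right)/2$. As a sanity check (not part of the notebook; a minimal NumPy sketch restricted to single-qubit registers), one can simulate the circuit directly:

```python
import numpy as np

def swap_test_p0(psi1, psi2):
    """Simulate a swap test on two single-qubit states and return the
    probability of measuring the test qubit in |0>.
    Analytically this equals (1 + |<psi1|psi2>|^2) / 2."""
    # Register order (most significant first): test qubit, psi1, psi2
    state = np.kron([1.0, 0.0], np.kron(psi1, psi2))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    I4 = np.eye(4)
    state = np.kron(H, I4) @ state  # Hadamard on the test qubit
    SWAP = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=float)
    # Controlled-SWAP: exchange the two registers only when the test qubit is |1>
    CSWAP = np.block([[I4, np.zeros((4, 4))],
                      [np.zeros((4, 4)), SWAP]])
    state = CSWAP @ state
    state = np.kron(H, I4) @ state  # second Hadamard on the test qubit
    # Amplitudes with the test qubit in |0> occupy the first half of the vector
    return float(np.sum(np.abs(state[:4]) ** 2))
```

Identical inputs give probability $1$, orthogonal inputs give $1/2$, matching the identity.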
81 | 83 | "id": "443577d1", |
82 | 84 | "metadata": {}, |
83 | 85 | "source": [ |
84 | | - "## Quantum neural network for quantum auto encoder\n", |
| 86 | + "### Quantum Neural Networks for Quantum Autoencoders\n", |
85 | 87 | "\n", |
86 | 88 | "The quantum autoencoder can be built as a quantum neural network with the following three parts:\n",
87 | 89 | "\n", |
|
102 | 104 | "id": "6a685c3b", |
103 | 105 | "metadata": {}, |
104 | 106 | "source": [ |
105 | | - "# Pre-user-defined functions which will be used to construct the quantum layer\n", |
| 107 | + "## Pre-user-defined Functions That Construct the Quantum Layer\n", |
106 | 108 | "\n", |
107 | 109 | "As a first step, we build some user-defined functions that allow flexible modeling. We have three functions:\n",
108 | 110 | "1. `angle_encoding`: This function loads data of size `num_qubits` on `num_qubits` qubits via RY gates. It has an output port named `qpv`.\n", |
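For intuition, angle encoding can be sketched in plain NumPy (this is not the notebook's Classiq implementation; `angle_encode` is a hypothetical stand-in for `angle_encoding`), using $RY(\theta)|0\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$:

```python
import numpy as np

def ry(theta):
    """RY rotation matrix: RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def angle_encode(data):
    """Load each classical value as an RY angle on its own qubit and
    return the resulting product state vector."""
    state = np.array([1.0])
    for x in data:
        state = np.kron(state, ry(x) @ np.array([1.0, 0.0]))
    return state
```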
|
208 | 210 | "id": "493d4499", |
209 | 211 | "metadata": {}, |
210 | 212 | "source": [ |
211 | | - "# An example: auto encoder for domain wall data" |
| 213 | + "## Example: Autoencoder for Domain Wall Data" |
212 | 214 | ] |
213 | 215 | }, |
214 | 216 | { |
|
224 | 226 | "id": "535d9181", |
225 | 227 | "metadata": {}, |
226 | 228 | "source": [ |
227 | | - "## The data" |
| 229 | + "### The Data" |
228 | 230 | ] |
229 | 231 | }, |
230 | 232 | { |
|
263 | 265 | "id": "0a09e977", |
264 | 266 | "metadata": {}, |
265 | 267 | "source": [ |
266 | | - "## The quantum program" |
| 268 | + "### The Quantum Program" |
267 | 269 | ] |
268 | 270 | }, |
269 | 271 | { |
|
382 | 384 | "id": "5a41ccaa", |
383 | 385 | "metadata": {}, |
384 | 386 | "source": [ |
385 | | - "## The network\n", |
| 387 | + "### The Network\n", |
386 | 388 | "\n", |
387 | 389 | "The network for training contains only a quantum layer. The corresponding quantum program was already defined above; what is left is to define some execution preferences and the classical post-process. The classical output is defined as $1-\alpha^2$, with $\alpha^2$ being the probability of the test qubit to be in state 0."
388 | 390 | ] |
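As a sketch of that post-process (assuming a counts dictionary keyed by the test-qubit bitstring; the helper name is hypothetical, not the notebook's code):

```python
def post_process(counts, shots):
    """Classical post-process sketch: alpha^2 is the probability of
    measuring the test qubit in |0>, so the network output is 1 - alpha^2."""
    p0 = counts.get("0", 0) / shots
    return 1.0 - p0
```

A perfectly trained encoder drives this output toward $0$.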
|
494 | 496 | "id": "7590c55d", |
495 | 497 | "metadata": {}, |
496 | 498 | "source": [ |
497 | | - "## Creating dataset\n", |
| 499 | + "### Creating the Dataset\n", |
498 | 500 | "\n", |
499 | 501 | "The cost function we would like to minimize is $|1-\alpha^2|$ for all our training data. Looking at the qlayer output, this means that we should define the corresponding labels as $0$."
500 | 502 | ] |
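A minimal PyTorch sketch of such a dataset, assuming domain-wall-like bit strings as inputs (the values and sizes are illustrative only, not the notebook's actual data):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Domain-wall-like bit strings as training inputs (illustrative values only)
inputs = torch.tensor([[0., 0., 1., 1.],
                       [0., 1., 1., 1.],
                       [0., 0., 0., 1.]])
# Every label is 0: we want the network output 1 - alpha^2 to vanish
labels = torch.zeros(len(inputs), 1)

dataset = TensorDataset(inputs, labels)
loader = DataLoader(dataset, batch_size=2)
```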
|
551 | 553 | "id": "608173a0", |
552 | 554 | "metadata": {}, |
553 | 555 | "source": [ |
554 | | - "## Define the training" |
| 556 | + "### Defining the Training" |
555 | 557 | ] |
556 | 558 | }, |
557 | 559 | { |
|
596 | 598 | "id": "ee98061f", |
597 | 599 | "metadata": {}, |
598 | 600 | "source": [ |
599 | | - "## Setting some hyper-parameters\n", |
| 601 | + "### Setting Hyper-parameters\n", |
600 | 602 | "\n", |
601 | 603 | "The L1 loss function fits the intended cost function we aim to minimize." |
602 | 604 | ] |
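With all labels set to $0$, `torch.nn.L1Loss` reduces to the mean of $|1-\alpha^2|$ over the batch, which is exactly the cost above. For example (illustrative values):

```python
import torch
import torch.nn as nn

loss_func = nn.L1Loss()
# With zero labels, the L1 loss is mean(|output|) = mean(|1 - alpha^2|)
outputs = torch.tensor([0.2, 0.05, 0.0])
labels = torch.zeros(3)
loss = loss_func(outputs, labels)  # mean of |0.2|, |0.05|, |0.0|
```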
|
625 | 627 | "id": "e38d76f1", |
626 | 628 | "metadata": {}, |
627 | 629 | "source": [ |
628 | | - "## Training\n", |
| 630 | + "### Training\n", |
629 | 631 | "\n", |
630 | 632 | "In this demo we initialize the network with trained parameters and run only 1 epoch for demonstration. A reasonable training with the above hyper-parameters can be achieved with $\sim 40$ epochs. To train the network from the beginning, uncomment the following code line:"
631 | 633 | ] |
|
685 | 687 | "id": "7fb11e18", |
686 | 688 | "metadata": {}, |
687 | 689 | "source": [ |
688 | | - "## Verification\n", |
| 690 | + "### Verification\n", |
689 | 691 | "\n", |
690 | 692 | "Once we have trained our network, we can build a new network with the trained variables. We can thus verify our encoder by taking only the encoding block, changing post_process, etc.\n",
691 | 693 | "\n", |
|
708 | 710 | "id": "812d2bc5", |
709 | 711 | "metadata": {}, |
710 | 712 | "source": [ |
711 | | - "### We start with building the quantum layer for the validator" |
| 713 | + "### Building the Quantum Layer for the Validator" |
712 | 714 | ] |
713 | 715 | }, |
714 | 716 | { |
|
840 | 842 | "id": "7f3f9983", |
841 | 843 | "metadata": {}, |
842 | 844 | "source": [ |
843 | | - "### Next, we define the classical output of the network. For the validator post-process we take the output with the maximal counts." |
| 845 | + "### Defining the Classical Output of the Network \n", |
| 846 | + "\n", |
| 847 | + "For the validator postprocessing, we take the output with the maximal counts:" |
844 | 848 | ] |
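A one-line sketch of that validator post-process (assuming a counts dictionary; the helper name is hypothetical):

```python
def most_probable(counts):
    """Pick the measured bitstring with the maximal number of counts."""
    return max(counts, key=counts.get)
```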
845 | 849 | }, |
846 | 850 | { |
|
890 | 894 | "id": "39a94e36", |
891 | 895 | "metadata": {}, |
892 | 896 | "source": [ |
893 | | - "### We create the network and assign the trained parameters" |
| 897 | + "### Creating the Network and Assigning the Trained Parameters" |
894 | 898 | ] |
895 | 899 | }, |
896 | 900 | { |
|
989 | 993 | "id": "6dca60d3", |
990 | 994 | "metadata": {}, |
991 | 995 | "source": [ |
992 | | - "# Usage for anomaly detection" |
| 996 | + "## Detecting Anomalies" |
993 | 997 | ] |
994 | 998 | }, |
995 | 999 | { |
|