94 changes: 49 additions & 45 deletions algorithms/qml/qsvm/qsvm.ipynb
{
"cell_type": "markdown",
"id": "ab3ff61d-ae04-48d4-b0ea-325281390a77",
"metadata": {},
"source": [
"# QSVM\n",
"Quantum Support Vector Machines is the quantum version of SVM; i.e., a data classification method that separates the data using a hyperplane.\n",
"\n",
"The QSVM algorithm takes these steps:\n",
"1. Maps the data into a different hyperspace (since the data may be non-linearly-separable in the original space). For QSVM, it maps the classical data into a Hilbert space.\n",
"2. Calculates the kernel matrix:\n",
" - The kernel entries are the fidelities between different feature vectors\n",
"   - For QSVM, this is done on a quantum computer.\n",
"3. Optimizes the dual problem (this is always done classically):\n",
"$$ L_D(\\alpha) = \\sum_{i=1}^t \\alpha_i - \\frac{1}{2} \\sum_{i,j=1}^t y_i y_j \\alpha_i \\alpha_j K(\\vec{x}_i , \\vec{x}_j) $$\n",
"   - where $t$ is the number of data points\n",
" - the $\\vec{x}_i$s are the data points\n",
" - $y_i$ is the label $\\in \\{-1,1\\}$ of each data point\n",
"   - $K(\\vec{x}_i , \\vec{x}_j)$ is the kernel matrix element between the $i$th and $j$th data points\n",
"   - and the optimization is over the $\\alpha$s\n",
"   - We expect most of the $\\alpha$s to be $0$. The $\\vec{x}_i$s that correspond to non-zero $\\alpha_i$ are called the support vectors.\n",
"4. Predicts unlabeled data by calculating the kernel matrix of the new datum with respect to the support vectors:\n",
"$$ \\text{Predicted Label}(\\vec{s}) = \\text{sign} \\left( \\sum_{i=1}^t y_i \\alpha_i^* K(\\vec{x}_i , \\vec{s}) + b \\right) $$\n",
"   - where $\\vec{s}$ is the data point to be classified\n",
" - $\\alpha_i^*$ are the optimized $\\alpha$s\n",
"   - $b$ is the bias\n",
"\n",
"\n",
"Reference:\n",
"\n",
"Reference [[1](#learning)]"
]
},
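As a purely classical illustration of steps 3 and 4 (not the Classiq API used later in this notebook), the dual optimization and the prediction rule can be sketched with scikit-learn's `SVC` on a precomputed kernel. The Gaussian kernel and the toy data below are hypothetical stand-ins for the quantum fidelity kernel and a real dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters, labels in {-1, 1}
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

def rbf_kernel(A, B, gamma=0.5):
    # Hypothetical classical stand-in for the quantum fidelity kernel K(x_i, x_j)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Step 3: fitting SVC on a precomputed Gram matrix optimizes the dual L_D(alpha)
clf = SVC(kernel="precomputed")
clf.fit(rbf_kernel(X, X), y)

# Step 4: predict via sign(sum_i y_i alpha_i* K(x_i, s) + b)
s = np.array([[0.1, 0.2], [3.1, 2.9]])
pred = clf.predict(rbf_kernel(s, X))
print(pred)  # prints [-1  1]
```

With `kernel="precomputed"`, `fit` expects the train-train Gram matrix and `predict` expects the test-train kernel block, which mirrors evaluating $K(\vec{x}_i, \vec{s})$ against the support vectors.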
{
"cell_type": "markdown",
"id": "fbbcced5-c83b-4cd7-be25-2055f1da451a",
"metadata": {},
"source": [
"## Coding QSVM\n",
"We start coding with the relevant imports:"
]
},
{
"source": [
"Next, we generate data.\n",
"\n",
"This example takes a 2D input space and a binary classification (i.e., only two groups of data points):"
]
},
{
"id": "564b0996-8dd2-4bea-b367-4e569ecab5ae",
"metadata": {},
"source": [
"Now we plot the data.\n",
"\n",
"Note that the data is expected to be normalized to within $ 0 $ to $ 2 \\pi $."
]
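A minimal sketch of such a rescaling, assuming the raw features are arbitrary real values (the helper name `normalize_to_2pi` and the sample data are hypothetical):

```python
import numpy as np

def normalize_to_2pi(X):
    """Linearly rescale each feature column into [0, 2*pi]."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero for constant columns
    return 2 * np.pi * (X - lo) / span

data = np.array([[-1.0, 10.0], [0.0, 20.0], [1.0, 30.0]])
scaled = normalize_to_2pi(data)
print(scaled.min(), scaled.max())  # 0.0 6.283185307179586
```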
"id": "9c86cb0d-cf35-4be0-84d2-f9d871615001",
"metadata": {},
"source": [
"## Defining the Feature Map\n",
"When constructing a `QSVM` model, we must supply the feature map to use.\n",
"\n",
"A feature map is a way to encode classical data into quantum.\n",
"Here, we choose to encode the data onto the surface of the Bloch sphere.\n",
"This can be defined as\n",
"```\n",
"R_X(x[0] / 2)\n",
"R_Z(x[1])\n",
"```\n",
"where `x` is the 2D input vector and the circuit takes a single qubit per data point. This creates the state $\\cos(x[0]/4)|0\\rangle - i e^{i x[1]}\\sin(x[0]/4)|1\\rangle$ (up to a global phase). We define a quantum function that generalizes the Bloch sphere mapping to an input vector of any dimension (also known as \"dense angle encoding\" in the field of quantum neural networks). Each pair of entries in the vector is mapped to a Bloch sphere; if the dimension is odd, we apply a single RX gate on an extra qubit."
]
},
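A small numpy sketch makes the resulting state concrete; the gate conventions RX(θ) = exp(-iθX/2) and RZ(φ) = exp(-iφZ/2) are an assumption here, and the helper names are hypothetical:

```python
import numpy as np

def rx(theta):
    # RX(theta) = exp(-i * theta * X / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rz(phi):
    # RZ(phi) = exp(-i * phi * Z / 2)
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def bloch_encode(x):
    # One qubit per 2D data point: RZ(x[1]) RX(x[0]/2) |0>
    return rz(x[1]) @ rx(x[0] / 2) @ np.array([1.0, 0.0])

x = np.array([np.pi, np.pi / 2])
psi = bloch_encode(x)
print(np.abs(psi))  # amplitude magnitudes cos(x[0]/4) and sin(x[0]/4)
```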
{
"id": "50162836-27be-4a64-91a7-b0439fa66300",
"metadata": {},
"source": [
"## Defining the Data\n",
"In addition to the feature map, we need to prepare our data.\n",
"\n",
"The `train_input` and `test_input` datasets consist of data points and their labels. The labels are a 1D array where each value corresponds to a data point and can be basically anything, such as (0, 1), (3, 5), or ('A', 'B').\n",
"The `predict_input` consists only of data points (without labels).\n"
]
},
{
"id": "1a8410eb-61f8-4934-afd6-64b1d341593f",
"metadata": {},
"source": [
"## Constructing a Model\n",
"We can now construct the QSVM model using the `bloch_feature_map` function, and its inverse:"
]
},
"id": "b29d8f67",
"metadata": {},
"source": [
"## Synthesizing the Model and Exploring the Generated Quantum Circuit\n",
"Once we have constructed our QSVM model, we synthesize and view the quantum circuit that encodes our data using the Classiq built-in `synthesize` and `show` functions:"
]
},
{
"id": "9106df09",
"metadata": {},
"source": [
"## Executing QSVM\n",
"Steps in QSVM execution:\n",
"1. Training.\n",
"2. Testing the training process.\n",
"3. Predicting, by taking unlabeled data and returning its predicted labels. This may be applied multiple times on different datasets.\n",
"\n",
"Next, we define classical functions for applying the three parts of the execution process, where the third uses `ExecutionSession` and `batch_sample`."
]
},
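As a classical stand-in for this three-phase flow (not the Classiq `ExecutionSession` API), one can estimate the fidelity kernel directly from the encoded statevectors and feed it to scikit-learn. The `encode` convention, `fidelity_kernel` helper, and toy datasets are all hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

def encode(x):
    # Assumed single-qubit dense angle encoding: RZ(x[1]) RX(x[0]/2) |0>
    c, s = np.cos(x[0] / 4), np.sin(x[0] / 4)
    return np.array([c * np.exp(-1j * x[1] / 2), -1j * s * np.exp(1j * x[1] / 2)])

def fidelity_kernel(A, B):
    # K_ij = |<phi(a_i)|phi(b_j)>|^2 -- the entries a quantum device would estimate
    PA = np.array([encode(a) for a in A])
    PB = np.array([encode(b) for b in B])
    return np.abs(PA.conj() @ PB.T) ** 2

rng = np.random.default_rng(1)
train_data = np.vstack([rng.uniform(0.0, 1.0, (10, 2)),
                        rng.uniform(3.0, 4.0, (10, 2))])
train_labels = np.array([0] * 10 + [1] * 10)
test_data, test_labels = train_data, train_labels  # toy reuse for illustration
predict_input = np.array([[0.5, 0.5], [3.5, 3.5]])

clf = SVC(kernel="precomputed")
clf.fit(fidelity_kernel(train_data, train_data), train_labels)          # 1. train
score = clf.score(fidelity_kernel(test_data, train_data), test_labels)  # 2. test
pred = clf.predict(fidelity_kernel(predict_input, train_data))          # 3. predict
print(score, pred)
```

The test and predict phases only need the kernel block between the new points and the training points, which is why prediction can be repeated on fresh datasets without retraining.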
{
"id": "3e2b345c-648d-4fee-b990-c01509a48446",
"metadata": {},
"source": [
"We can view the classification accuracy through `test_score`:"
]
},
{
"id": "87b7a490-1d80-4c5f-8a59-d227fe8be03c",
"metadata": {},
"source": [
"Since this data was previously generated, we also know the real labels and can print them for comparison."
]
},
{
"plt.ylim(plot_range)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "87ed87a8-427f-45ea-83f0-b7d6677b9137",
"metadata": {},
"source": [
"## Reference\n",
"\n",
"<a id='learning'>[1]</a> Havlíček, V., Córcoles, A.D., Temme, K. et al. Supervised learning with quantum-enhanced feature spaces. Nature 567, 209-212 (2019). https://doi.org/10.1038/s41586-019-0980-2"
]
}
],
"metadata": {