Binary file added challenges/xanadu challenge/Adagrad Optimizer.png
386 changes: 386 additions & 0 deletions challenges/xanadu challenge/QForce.ipynb
@@ -0,0 +1,386 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# QForce - Xanadu Challenge\n",
"\n",
"- Agustín Nelson Medina Colmenero\n",
"- Joselyn Guerrero Cabrera\n",
"- Oswaldo Emmanuel Fonseca Uribe\n",
"- Saúl Ruano Sánchez\n",
"\n",
"## Challenge Statement\n",
"\n",
"In this challenge, we were provided with a varitational quantum circuit in PenyLane that depends on a set of trainable parameters. The circuit outputs a single number as the expectation value of a fixed measurement.\n",
"\n",
"### Objective\n",
"\n",
"Find the minimum expectation value this circuit can produce by optimizing its parameters. This will require converting the circuit into a QNode.\n",
"\n",
"## The knowledge we needed to understand\n",
"\n",
"PennyLane offers seamless integration between classical and quantum computations. Code up quantum circuits in PennyLane, compute gradients of quantum circuits, and connect them easily to the top scientific computing and machine learning libraries; currently, four libraries are supported: NumPy, PyTorch, JAX, and TensorFlow; we used the first one, NumPy, and the optimizers that PennyLane framework offers.\n",
"\n",
"### What is an Optimizer?\n",
"\n",
"Optimizers are objects which can be used to automatically update the parameters of a quantum or hybrid machine learning model. \n",
"\n",
"As a team, we used optimizers from NumPy because of its facility and our familiarity as a team with this library."
]
},
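{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below is a minimal sketch (added for illustration, not part of the original template) of the optimizer workflow: an optimizer object is created with a `stepsize`, and `opt.step` is called repeatedly on a cost QNode to update the trainable parameters. The toy circuit `demo_cost` and the names in it are hypothetical, made up just for this example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal, hypothetical example of the PennyLane optimizer workflow\n",
"import pennylane as qml\n",
"from pennylane import numpy as np\n",
"\n",
"dev_demo = qml.device(\"default.qubit\", wires=1)\n",
"\n",
"@qml.qnode(dev_demo)\n",
"def demo_cost(theta):\n",
"    # A one-parameter circuit whose expectation value we want to minimize\n",
"    qml.RX(theta, wires=0)\n",
"    return qml.expval(qml.PauliZ(0))\n",
"\n",
"opt_demo = qml.GradientDescentOptimizer(stepsize=0.4)\n",
"theta = np.array(0.1, requires_grad=True)\n",
"\n",
"for _ in range(50):\n",
"    # Each call computes the gradient of demo_cost and updates theta\n",
"    theta = opt_demo.step(demo_cost, theta)\n",
"\n",
"print(demo_cost(theta))  # approaches -1, the minimum of <Z> for an RX rotation"
]
},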
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Abording the challenge\n",
"\n",
"We were provided with an challenge templte, with the variational quantum circuit as is showed next:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import pennylane as qml\n",
"from pennylane import numpy as np\n",
"\n",
"WIRES = 2\n",
"LAYERS = 5\n",
"NUM_PARAMETERS = LAYERS * WIRES * 3\n",
"\n",
"def variational_circuit(params,hamiltonian):\n",
" parameters = params.reshape((LAYERS, WIRES, 3))\n",
" qml.templates.StronglyEntanglingLayers(parameters, wires=range(WIRES))\n",
" return qml.expval(qml.Hermitian(hamiltonian, wires = [0,1]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are different hyperparameters depending on the optimizer, to look for the minimum expectation value of the provided variational quantum circuit we use 4 different optimizers and we train them with a series of increasing parameters to find the optimal ones. How we handled each is described below, along with a brief explanation of the optimizer and expectation values and their efficiency."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A little bit of libraries here..."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"import scienceplots\n",
"plt.style.use(['science', 'no-latex'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AdagradOptimizer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
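{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adagrad adapts the stepsize of every parameter individually: the accumulated squared gradient of each parameter rescales its learning rate, so parameters with consistently large gradients take smaller steps over time,\n",
"\n",
"$$\\theta_i^{(t+1)} = \\theta_i^{(t)} - \\frac{\\eta}{\\sqrt{a_i^{(t)} + \\epsilon}}\\, \\partial_{\\theta_i} f(\\theta^{(t)}), \\qquad a_i^{(t)} = \\sum_{k=1}^{t} \\big(\\partial_{\\theta_i} f(\\theta^{(k)})\\big)^2,$$\n",
"\n",
"where $\\eta$ is the `stepsize` hyperparameter and $\\epsilon$ is a small constant that prevents division by zero."
]
},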
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def optimize_circuit(hamiltonian, stepsize,min):\n",
" hamiltonian = np.array(hamiltonian, requires_grad = False)\n",
" hamiltonian = np.array(hamiltonian,float).reshape((2 ** WIRES), (2 ** WIRES))\n",
"\n",
" dev = qml.device(\"default.qubit\", wires = WIRES)\n",
" w = qml.numpy.random.rand(NUM_PARAMETERS)\n",
"\n",
" @qml.qnode(dev)\n",
" def cost(wei):\n",
" return variational_circuit(wei,hamiltonian)\n",
" \n",
" # Adagrad Optimizer\n",
"\n",
" qml.AdagradOptimizer(stepsize=stepsize)\n",
" \n",
" steps = 400\n",
"\n",
" params = w\n",
"\n",
" for i in range(steps):\n",
" params = opt.step(cost, params)\n",
"\n",
" if (cost(params) - min) < 5*1e-9:\n",
" return i\n",
"\n",
"in1=np.array([0.863327072347624,0.0167108057202516,0.07991447085492759,0.0854049026262154,\n",
" 0.0167108057202516,0.8237963773906136,-0.07695947154193797,0.03131548733285282,\n",
" 0.07991447085492759,-0.07695947154193795,0.8355417021014687,-0.11345916130631205,\n",
" 0.08540490262621539,0.03131548733285283,-0.11345916130631205,0.758156886827099])\n",
"#Expected output: 0.61745341\n",
"in2=np.array([0.32158897156285354,-0.20689268438270836,0.12366748295758379,-0.11737425017261123,\n",
" -0.20689268438270836,0.7747346055276305,-0.05159966365446514,0.08215539696259792,\n",
" 0.12366748295758379,-0.05159966365446514,0.5769050487087416,0.3853362904758938,\n",
" -0.11737425017261123,0.08215539696259792,0.3853362904758938,0.3986256655167206])\n",
"#Expected output: 0.00246488"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = np.linspace(0.1, 0.15, 20)\n",
"\n",
"iterations_1 = []\n",
"iterations_2 = []\n",
"\n",
"for step in steps:\n",
" i_1 = optimize_circuit(in1, step,0,\"Adagrad\",0.61745341)\n",
" i_2 = optimize_circuit(in2, step,0,\"Adagrad\",0.00246488)\n",
" iterations_1.append(i_1)\n",
" iterations_2.append(i_2)\n",
"\n",
"fig = plt.figure(figsize=(6,5))\n",
"plt.plot(steps, iterations_1, 'b.',label='Test Input 1')\n",
"plt.plot(steps, iterations_2, 'g.',label='Test Input 2')\n",
"plt.legend(loc='upper left')\n",
"plt.xlabel('Step Size', fontsize = 12)\n",
"plt.ylabel('Steps', fontsize = 12)\n",
"plt.title('Steps to Converge vs Stepsize AdagradOptimizer');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"![Adagrad Optimizer Stepsize Comparison](Adagrad%20Optimizer.png)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### AdamOptimizer\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
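{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adam combines per-parameter adaptive stepsizes with momentum-like running averages of the gradient and of its square. The cell below is an illustrative sketch of the same convergence experiment using `qml.AdamOptimizer`, following the pattern of the other optimizers in this notebook; the helper name `optimize_circuit_adam` is hypothetical and not part of the original submission."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch: same experiment pattern, but with the Adam optimizer\n",
"def optimize_circuit_adam(hamiltonian, stepsize, min_value):\n",
"    hamiltonian = np.array(hamiltonian, requires_grad=False)\n",
"    hamiltonian = np.array(hamiltonian, float).reshape((2 ** WIRES), (2 ** WIRES))\n",
"\n",
"    dev = qml.device(\"default.qubit\", wires=WIRES)\n",
"    w = qml.numpy.random.rand(NUM_PARAMETERS)\n",
"\n",
"    @qml.qnode(dev)\n",
"    def cost(wei):\n",
"        return variational_circuit(wei, hamiltonian)\n",
"\n",
"    # Adam Optimizer\n",
"    opt = qml.AdamOptimizer(stepsize=stepsize)\n",
"\n",
"    steps = 400\n",
"    params = w\n",
"\n",
"    for i in range(steps):\n",
"        params = opt.step(cost, params)\n",
"\n",
"        # Stop once the cost is within 5e-9 of the known minimum\n",
"        if (cost(params) - min_value) < 5e-9:\n",
"            return i\n",
"\n",
"    return steps\n",
"\n",
"# Example usage (hypothetical values): steps needed for test input 1\n",
"# optimize_circuit_adam(in1, 0.1, 0.61745341)"
]
},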
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### GradientDescentOptimizer\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
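{
"cell_type": "markdown",
"metadata": {},
"source": [
"Vanilla gradient descent updates all parameters with the same fixed stepsize $\\eta$,\n",
"\n",
"$$\\theta^{(t+1)} = \\theta^{(t)} - \\eta\\, \\nabla f(\\theta^{(t)}),$$\n",
"\n",
"so the only hyperparameter to sweep here is the stepsize."
]
},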
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def optimize_circuit(hamiltonian, stepsize,min):\n",
" hamiltonian = np.array(hamiltonian, requires_grad = False)\n",
" hamiltonian = np.array(hamiltonian,float).reshape((2 ** WIRES), (2 ** WIRES))\n",
"\n",
" dev = qml.device(\"default.qubit\", wires = WIRES)\n",
" w = qml.numpy.random.rand(NUM_PARAMETERS)\n",
"\n",
" @qml.qnode(dev)\n",
" def cost(wei):\n",
" return variational_circuit(wei,hamiltonian)\n",
" \n",
" # Gradient Descent Optimizer\n",
"\n",
" qml.GradientDescentOptimizer(stepsize=stepsize)\n",
" \n",
" steps = 400\n",
"\n",
" params = w\n",
"\n",
" for i in range(steps):\n",
" params = opt.step(cost, params)\n",
"\n",
" if (cost(params) - min) < 5*1e-9:\n",
" return i\n",
"\n",
"in1=np.array([0.863327072347624,0.0167108057202516,0.07991447085492759,0.0854049026262154,\n",
" 0.0167108057202516,0.8237963773906136,-0.07695947154193797,0.03131548733285282,\n",
" 0.07991447085492759,-0.07695947154193795,0.8355417021014687,-0.11345916130631205,\n",
" 0.08540490262621539,0.03131548733285283,-0.11345916130631205,0.758156886827099])\n",
"#Expected output: 0.61745341\n",
"in2=np.array([0.32158897156285354,-0.20689268438270836,0.12366748295758379,-0.11737425017261123,\n",
" -0.20689268438270836,0.7747346055276305,-0.05159966365446514,0.08215539696259792,\n",
" 0.12366748295758379,-0.05159966365446514,0.5769050487087416,0.3853362904758938,\n",
" -0.11737425017261123,0.08215539696259792,0.3853362904758938,0.3986256655167206])\n",
"#Expected output: 0.00246488"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = np.linspace(0.1, 1, 20)\n",
"iterations = []\n",
"\n",
"for step in steps:\n",
" i = optimize_circuit(in1, step,0,\"Grad\",0.61745341)\n",
" iterations.append(i)\n",
"\n",
"fig = plt.figure(figsize=(5,4))\n",
"plt.plot(steps, iterations, 'k.')\n",
"plt.xlabel('Step Size', fontsize = 12)\n",
"plt.ylabel('Steps', fontsize = 12)\n",
"plt.title('Steps to Converge vs Stepsize Gradient Descendent');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### MomentumOptimizer"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
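{
"cell_type": "markdown",
"metadata": {},
"source": [
"The momentum optimizer keeps a running accumulator of past gradients, so the update gains inertia and can move faster along consistent descent directions,\n",
"\n",
"$$a^{(t+1)} = m\\, a^{(t)} + \\eta\\, \\nabla f(\\theta^{(t)}), \\qquad \\theta^{(t+1)} = \\theta^{(t)} - a^{(t+1)},$$\n",
"\n",
"where $\\eta$ is the stepsize and $m$ is the momentum hyperparameter, which is why this section sweeps both."
]
},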
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def optimize_circuit(hamiltonian, stepsize, momentum, min):\n",
" hamiltonian = np.array(hamiltonian, requires_grad = False)\n",
" hamiltonian = np.array(hamiltonian,float).reshape((2 ** WIRES), (2 ** WIRES))\n",
"\n",
" dev = qml.device(\"default.qubit\", wires = WIRES)\n",
" w = qml.numpy.random.rand(NUM_PARAMETERS)\n",
"\n",
" @qml.qnode(dev)\n",
" def cost(wei):\n",
" return variational_circuit(wei,hamiltonian)\n",
" \n",
" # Momentum Optimizer\n",
"\n",
" qml.MomentumOptimizer(stepsize=stepsize)\n",
" \n",
" steps = 400\n",
"\n",
" params = w\n",
"\n",
" for i in range(steps):\n",
" params = opt.step(cost, params)\n",
"\n",
" if (cost(params) - min) < 5*1e-9:\n",
" return i\n",
"\n",
"in1=np.array([0.863327072347624,0.0167108057202516,0.07991447085492759,0.0854049026262154,\n",
" 0.0167108057202516,0.8237963773906136,-0.07695947154193797,0.03131548733285282,\n",
" 0.07991447085492759,-0.07695947154193795,0.8355417021014687,-0.11345916130631205,\n",
" 0.08540490262621539,0.03131548733285283,-0.11345916130631205,0.758156886827099])\n",
"#Expected output: 0.61745341\n",
"in2=np.array([0.32158897156285354,-0.20689268438270836,0.12366748295758379,-0.11737425017261123,\n",
" -0.20689268438270836,0.7747346055276305,-0.05159966365446514,0.08215539696259792,\n",
" 0.12366748295758379,-0.05159966365446514,0.5769050487087416,0.3853362904758938,\n",
" -0.11737425017261123,0.08215539696259792,0.3853362904758938,0.3986256655167206])\n",
"#Expected output: 0.00246488"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"steps = np.linspace(5e-1, 1, 20)\n",
"momentum = np.linspace(0.1, 1, 20)\n",
"iterations = []\n",
"aux = []\n",
"\n",
"for step in steps:\n",
" for mom in momentum:\n",
" i = optimize_circuit(in1, step, mom,\"Momentum\", 0.61745341)\n",
" aux.append(i)\n",
" iterations.append(aux)\n",
" aux = []\n",
"\n",
"args_min = []\n",
"\n",
"for i in range(len(filtered_iterations)):\n",
" j = np.argmin(filtered_iterations[:][i])\n",
" args_min.append(j)\n",
"\n",
"mins = []\n",
"\n",
"for i in args_min:\n",
" mins.append(filtered_iterations[i][args_min[i]])\n",
"\n",
"min_iterations = np.min(filtered_iterations, axis=1)\n",
"\n",
"fig = plt.figure(figsize=(8, 6))\n",
"\n",
"for i in range(len(steps)):\n",
" plt.plot(steps[i], mins[i], 'k.')\n",
" plt.text(steps[i], mins[i] - 0.12, f'{momentum[args_min[i]]:.2f}', fontsize=10, ha='center', va='top')\n",
"\n",
"\n",
"plt.xlabel('Step Size', fontsize=12)\n",
"plt.ylabel('Steps', fontsize=12)\n",
"plt.title('Steps to Converge vs MomentumOptimizer')\n",
"plt.plot(0.975, 22.2, 'r.')\n",
"plt.text(0.94, 22, '(Momentum)', fontsize=10, ha='left', va='top')\n",
"plt.legend(loc='lower right')\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclutions\n",
"\n",
"Graficas individuales y cual fue el mejor optimizador y sus parametros, y el valor de espectación obtenido en él."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}