
[Feature Request]: Make Unitary Calculations Differentiable #605

Open
@lockwo

Description


In the TFQ sync today there was talk of potential new features, and I wanted to suggest something that I think would be useful (to me and the larger community): making the unitary op differentiable. Currently, trying to differentiate through the unitary op (either via `tfq.get_unitary_op()` or `tfq.layers.Unitary()`) raises `LookupError: gradient registry has no entry for: TfqCalculateUnitary`. I don't know enough to say whether the unitary op is differentiable in closed form or whether adjoint differentiation can be applied to it. However, I do know that, given the nature of sampling-based differentiators (e.g. parameter shift), one could use those techniques to calculate the gradient of a function that involves the unitary matrix.
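As background on the parameter-shift claim above, here is a minimal NumPy sketch (not TFQ API) verifying that, for a gate generated by a single Pauli such as Rx, the shift rule with offsets of ±π/2 recovers the exact derivative of an expectation value:

```python
import numpy as np

def rx(theta):
    # Single-qubit Rx(theta) = exp(-i * theta * X / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

Z = np.diag([1.0, -1.0])

def expectation(theta):
    # <0| Rx(theta)^dag Z Rx(theta) |0> = cos(theta)
    psi = rx(theta) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

def param_shift_grad(theta):
    # Exact for gates generated by a single Pauli: evaluate at theta +/- pi/2.
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))
```

Since `expectation(theta) == cos(theta)`, `param_shift_grad(theta)` equals `-sin(theta)` exactly, which is what makes shift-based differentiators attractive even when no gradient is registered for an op.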

See below for a toy example which errors out currently:

import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq
import numpy as np

params = sympy.symbols('q0:6')
qubits = [cirq.GridQubit(0, i) for i in range(2)]
circuit = cirq.Circuit()
for i in range(2):
    for j in range(3):
        r = np.random.uniform()
        p = params[i * 3 + j]
        if r < 1/3:
            circuit += cirq.rx(p).on(qubits[i])
        elif r < 2/3:
            circuit += cirq.ry(p).on(qubits[i])
        else:
            circuit += cirq.rz(p).on(qubits[i])

opt = tf.keras.optimizers.Adam(learning_rate=0.1)  # 'lr' alias is deprecated
target_unitary = tf.convert_to_tensor(np.identity(4), dtype=tf.complex64)
tensor_circuit = tfq.convert_to_tensor([circuit])
names = [s.name for s in params]
init = tf.Variable(initial_value=np.random.uniform(0, 2 * np.pi, (1, 6)), dtype="float32", trainable=True)

# Unitary Op

unitary_op = tfq.get_unitary_op()

with tf.GradientTape() as tape:
    tape.watch(init)
    unitary = unitary_op(tensor_circuit, names, init).to_tensor()
    cost = tf.math.abs(tf.reduce_mean(target_unitary - unitary))

# Raises LookupError: gradient registry has no entry for: TfqCalculateUnitary
grads = tape.gradient(cost, init)
opt.apply_gradients(zip([grads], [init]))

# Unitary Layer

inputs = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
unitary_layer = tfq.layers.Unitary()(inputs, symbol_names=names, symbol_values=init)
output = unitary_layer.to_tensor()
model = tf.keras.models.Model(inputs=inputs, outputs=output)

with tf.GradientTape() as tape:
    cost = tf.math.abs(tf.reduce_mean(target_unitary - model(tensor_circuit)))

# Same LookupError as above: no gradient registered for TfqCalculateUnitary
grads = tape.gradient(cost, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
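In the meantime, a cost built from the unitary can be differentiated without any gradient registry at all. Below is a minimal central-finite-difference sketch in plain NumPy (hand-built Rx/Rz matrices, a hypothetical two-parameter circuit, not the TFQ op), approximating the gradient of the same kind of target-unitary distance used above:

```python
import numpy as np

def rx(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

TARGET = np.identity(2)

def cost(values):
    # Toy circuit unitary: rz(v1) applied after rx(v0) on one qubit.
    u = rz(values[1]) @ rx(values[0])
    return np.abs(np.mean(TARGET - u))

def finite_diff_grad(values, eps=1e-5):
    # Central differences: d cost / d v_i ~ (cost(v + e_i) - cost(v - e_i)) / (2 eps)
    values = np.asarray(values, dtype=np.float64)
    grad = np.zeros_like(values)
    for i in range(values.size):
        shift = np.zeros_like(values)
        shift[i] = eps
        grad[i] = (cost(values + shift) - cost(values - shift)) / (2 * eps)
    return grad
```

This is only a workaround sketch; a registered gradient (whether shift-based or adjoint) inside TFQ would avoid the extra circuit evaluations and compose properly with `tf.GradientTape`.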
