Does tfq currently support circuit simulation with float64/complex128 precision?
At least, the code demo below returns a float32 tensor.
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import numpy as np
import sympy

nwires = 10
nlayer = 6
qubits = [cirq.GridQubit(0, i) for i in range(nwires)]
symbols = sympy.symbols('params_0:' + str(nwires * nlayer))
symbol_values = [np.ones([nlayer * nwires], dtype=np.float64)]

# Hadamard layer followed by nlayer layers of parameterized rx rotations.
circuit = cirq.Circuit()
for i in range(nwires):
    circuit.append(cirq.H(qubits[i]))
for j in range(nlayer):
    for i in range(nwires):
        circuit.append(cirq.rx(symbols[j * nwires + i])(qubits[i]))

# Ring of nearest-neighbor ZZ terms.
oprs = [sum(cirq.Z(qubits[i]) * cirq.Z(qubits[(i + 1) % nwires]) for i in range(nwires))]

ep = tfq.layers.Expectation(dtype=tf.float64)
ep(inputs=[circuit], symbol_names=symbols, symbol_values=symbol_values, operators=oprs)
I wonder if there is a way to enable complex128 simulation in tfq.
I believe float64 support is vital for variational quantum algorithms, especially for quantum simulations such as VQE. This is different from common machine learning setups where float32 is more than enough.
For example, the above program should return zero, while it actually gives
<tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[2.9802322e-07]], dtype=float32)>.
The error, around 3e-7, is on the order of float32 machine epsilon (about 1.2e-7), i.e. typical single-precision round-off, and it cannot simply be overlooked for physics problems.
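
For comparison, here is a minimal sketch (not part of the original report) that evaluates the same expectation value directly with Cirq's state-vector simulator at complex128 precision. It assumes a Cirq version recent enough that cirq.Simulator takes a dtype argument and PauliSum exposes expectation_from_state_vector; the circuit, parameters, and operator mirror the demo above.

import cirq
import numpy as np
import sympy

nwires = 10
nlayer = 6
qubits = [cirq.GridQubit(0, i) for i in range(nwires)]
symbols = sympy.symbols('params_0:' + str(nwires * nlayer))

# Same circuit as in the tfq demo: a Hadamard layer plus nlayer layers of rx rotations.
circuit = cirq.Circuit()
for i in range(nwires):
    circuit.append(cirq.H(qubits[i]))
for j in range(nlayer):
    for i in range(nwires):
        circuit.append(cirq.rx(symbols[j * nwires + i])(qubits[i]))

# Same ring of nearest-neighbor ZZ terms.
opr = sum(cirq.Z(qubits[i]) * cirq.Z(qubits[(i + 1) % nwires]) for i in range(nwires))

# Resolve all parameters to 1.0 and simulate the state vector in complex128.
resolver = cirq.ParamResolver({s: 1.0 for s in symbols})
state = cirq.Simulator(dtype=np.complex128).simulate(circuit, resolver).final_state_vector
qubit_map = {q: i for i, q in enumerate(qubits)}

# Exact value is zero; at double precision the residual should be only round-off.
print(np.real(opr.expectation_from_state_vector(state, qubit_map)))

With complex128, the deviation from zero is limited only by double-precision round-off, which is the accuracy level this issue is asking tfq's expectation layers to expose.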