
Fix TRP #1

Closed
wants to merge 51 commits into from
Conversation

Gabriel-Bottrill
Collaborator

  • [~] Ensure that the test suite passes by running `make test`.
    (Some tests are failing on both `master` and this branch; I cannot get an environment to work.)

Context:
Currently the only method for finding gradients of qutrit circuits is parameter-shift, which for qutrits takes four evaluations per parameter and is therefore very expensive. The TRP (TRX/TRY/TRZ) gates are not compatible with backpropagation. With a `default.qutrit.mixed` device planned, it is important for these gates to be backprop-differentiable to speed up what will already be a very slow process.
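
For concreteness, a minimal sketch of the status quo this PR improves on, using the existing `default.qutrit` device and `qml.GellMann` observable (illustrative only, not taken from the PR):

```
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qutrit", wires=1)

# parameter-shift is currently the only gradient method for qutrit circuits;
# for TRX/TRY/TRZ it costs four circuit evaluations per trainable parameter
@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.TRX(x, wires=0)
    return qml.expval(qml.GellMann(0, index=3))

x = np.array(0.3, requires_grad=True)
print(qml.grad(circuit)(x))
```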

Description of the Change:
Matrix construction for the qutrit parametric ops TRX, TRY, and TRZ is now done solely using `qml.math`, allowing inputs to be backprop-differentiable variables from TensorFlow, Torch, Autograd, and JAX. `default.qutrit` has also been changed to use `qml.math`, so that qutrit circuits support backpropagation.
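
As an illustration of the intended behaviour (a sketch, not taken from the PR; it assumes `TRX.compute_matrix` keeps its `(theta, subspace)` signature), a Torch tensor should now flow through the matrix construction and stay attached to the autograd graph:

```
import torch
import pennylane as qml

# with qml.math-based construction, the matrix entries are built directly
# from the input tensor, so gradients can flow back through them
phi = torch.tensor(0.4, dtype=torch.float64, requires_grad=True)
mat = qml.TRX.compute_matrix(phi, subspace=(0, 1))

# mat[0, 0] is cos(phi / 2) on the (0, 1) subspace
torch.real(mat[0, 0]).backward()
print(phi.grad)  # expected: -sin(phi / 2) / 2
```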

Benefits:
Implementing backprop allows for much faster differentiation of qutrit circuits, which will be very helpful for differentiating larger circuits in a reasonable amount of time. The changes to the TRP gates will also be useful for the planned `default.qutrit.mixed`, as the increased resource requirements of noisy simulation make parameter-shift differentiation much less appealing.
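
A sketch of the end-to-end workflow this enables, assuming `default.qutrit` now advertises `backprop` for the TensorFlow interface:

```
import tensorflow as tf
import pennylane as qml

dev = qml.device("default.qutrit", wires=1)

# backprop obtains the gradient from a single forward/backward pass rather
# than four extra circuit executions per parameter
@qml.qnode(dev, interface="tf", diff_method="backprop")
def circuit(x):
    qml.TRY(x, wires=0)
    return qml.expval(qml.GellMann(0, index=3))

x = tf.Variable(0.3, dtype=tf.float64)
with tf.GradientTape() as tape:
    y = circuit(x)
print(tape.gradient(y, x))
```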

Possible Drawbacks:
Two small static methods are copied from `default_mixed`, which increases duplicated code (a code smell).
The interfaces made usable for `default.qutrit` have fewer tests than on other devices.

Related GitHub Issues:
N/A

Gabriel-Bottrill and others added 27 commits October 17, 2023 16:02
Co-authored-by: Tom Bromley <[email protected]>
Co-authored-by: Olivia Di Matteo <[email protected]>
Gabriel-Bottrill pushed a commit that referenced this pull request Apr 9, 2024
…5446)

**Context:**

When testing the new lightning device, we discovered that TensorFlow
would error if the parameters were float64, the device returned float32,
and classical operations modified the parameter:

```
import numpy as np
import tensorflow as tf
import pennylane as qml

# device returns float32 results because of the complex64 c_dtype
dev = qml.device('lightning.qubit', wires=2, c_dtype=np.complex64)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(tf.cos(x), wires=0)
    return qml.expval(qml.Z(0))

x = tf.Variable(0.1, dtype=tf.float64)

with tf.GradientTape() as tape:
    y = circuit(x)

tape.gradient(y, x)
```
```
InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a float tensor but is a double tensor [Op:Mul] name: 
```

The problem is that the results were `float32` precision, so
the vjp would be `float32`. But a `float32` vjp cannot be combined with
`tf.cos`'s `float64` vjp.

The reverse problem (`float64` results but `float32` data) seems to be
fine.

**Description of the Change:**

If the input data are `float64` or `complex128`, we promote the results
to the type of the parameters.
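
A minimal sketch of the promotion rule (the helper name `promote_results` is hypothetical; the actual change lives in PennyLane's TensorFlow interface code):

```
import tensorflow as tf

def promote_results(results, params):
    # hypothetical helper illustrating the fix: if any input parameter is
    # float64/complex128 while the device returned float32 results, cast the
    # results up so their vjp can be combined with tf.cos's float64 vjp
    if any(p.dtype in (tf.float64, tf.complex128) for p in params):
        return [tf.cast(r, tf.float64) for r in results]
    return results
```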

[sc-58966]

---------

Co-authored-by: Mudit Pandey <[email protected]>
Gabriel-Bottrill deleted the fix_TRP branch May 15, 2024 22:24