forked from PennyLaneAI/pennylane
Fix TRP #1
Closed
Conversation
Co-authored-by: Tom Bromley <[email protected]>
Co-authored-by: Olivia Di Matteo <[email protected]>
Gabriel-Bottrill force-pushed the fix_TRP branch from fd6ff62 to 1d71453 on November 9, 2023 00:17
Gabriel-Bottrill pushed a commit that referenced this pull request on Apr 9, 2024:
…5446)

**Context:** When testing the new lightning device, we discovered that TensorFlow would error if the parameters were float64, the device returned float32, and classical operations modified the parameter:

```python
dev = qml.device('lightning.qubit', wires=2, c_dtype=np.complex64)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(x):
    qml.RX(tf.cos(x), wires=0)
    return qml.expval(qml.Z(0))

x = tf.Variable(0.1, dtype=tf.float64)
with tf.GradientTape() as tape:
    y = circuit(x)
tape.gradient(y, x)
```

```
InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a float tensor but is a double tensor [Op:Mul] name:
```

The problem is that the results were `float32` precision, so the VJP would be `float32`. But a `float32` VJP cannot be combined with `tf.cos`'s `float64` VJP. The reverse situation (`float64` results but `float32` data) seems to be fine.

**Description of the Change:** If the input data are `float64` or `complex128`, we promote the results to the type of the parameters.

[sc-58966]

Co-authored-by: Mudit Pandey <[email protected]>
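The promotion logic described in that commit can be sketched framework-agnostically. Here `promote_results` is a hypothetical helper for illustration, not the actual PennyLane function, and numpy stands in for the ML framework:

```python
import numpy as np

def promote_results(params, results):
    # Hypothetical sketch of the fix: if the trainable parameters are double
    # precision but the device returned single precision, cast the results up
    # so downstream VJPs combine without a dtype mismatch.
    if params.dtype in (np.float64, np.complex128):
        target = np.promote_types(params.dtype, results.dtype)
        return results.astype(target)
    return results
```

With `float64` parameters and `float32` results, the results are cast to `float64`; with `float32` parameters, the results are passed through untouched, matching the one-directional behaviour described above.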
make test
(some tests are failing in both master and this branch; I could not get an environment to work)
Context:
Currently, the only method for computing gradients of qutrit circuits is the parameter-shift rule, which for qutrit rotations requires four circuit evaluations per parameter and is therefore very expensive. The TRP gates (TRX/TRY/TRZ) are not compatible with backpropagation. With a default.qutrit.mixed device planned, it is important for these gates to be backprop-differentiable to speed up what will already be a very slow process.
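The four-evaluation cost comes from qutrit rotations producing expectation values that contain two frequencies, so the generalized parameter-shift rule needs 2R = 4 shifted evaluations for R = 2 frequencies. A minimal numerical sketch under that assumption (the two-frequency signal `f` is illustrative, not PennyLane's implementation):

```python
import numpy as np

def f(x):
    # Illustrative two-frequency signal, as produced by expectation values
    # of qutrit rotations (TRX/TRY/TRZ).
    return 0.3 * np.cos(x) + 0.2 * np.sin(x) - 0.5 * np.cos(2 * x) + 0.7

def df_exact(x):
    # Analytic derivative of f, for comparison.
    return -0.3 * np.sin(x) + 0.2 * np.cos(x) + np.sin(2 * x)

def general_parameter_shift(f, x, R=2):
    # Generalized equidistant parameter-shift rule: 2R evaluations of f at
    # shifted arguments recover the exact derivative of an R-frequency signal.
    shifts = (2 * np.arange(1, 2 * R + 1) - 1) * np.pi / (2 * R)
    coeffs = (-1.0) ** np.arange(2 * R) / (4 * R * np.sin(shifts / 2) ** 2)
    return sum(c * f(x + s) for c, s in zip(coeffs, shifts))
```

Each gradient entry costs four circuit executions this way, whereas backprop differentiates the whole circuit in roughly one forward plus one backward pass.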
Description of the Change:
The matrix construction of the qutrit parametric ops TRX, TRY, and TRZ is now done solely using qml.math, allowing inputs from TensorFlow, Torch, Autograd, and JAX backprop-differentiable variables. default.qutrit has also been changed to use qml.math so that qutrit circuits can be differentiated via backpropagation.
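The idea behind the backprop-friendly construction can be sketched as follows: build the 3x3 TRX matrix from constant basis arrays combined only through differentiable elementary operations, rather than by in-place item assignment, which autodiff frameworks cannot trace. This is an illustrative numpy stand-in, not the actual qml.math implementation:

```python
import numpy as np

def trx_matrix(theta, subspace=(0, 1)):
    # Sketch: a qutrit X-rotation acting on a 2-level subspace, assembled
    # from fixed arrays scaled by functions of theta. Because theta only
    # enters through cos/sin and scalar multiplication, a framework's
    # autodiff can backpropagate through the construction.
    i, j = subspace
    proj = np.zeros((3, 3))
    proj[i, i] = proj[j, j] = 1.0      # projector onto the rotated subspace
    x_sub = np.zeros((3, 3))
    x_sub[i, j] = x_sub[j, i] = 1.0    # Pauli-X embedded in the subspace
    rest = np.eye(3) - proj            # identity on the untouched level
    return np.cos(theta / 2) * proj - 1j * np.sin(theta / 2) * x_sub + rest
```

Within the chosen subspace this reduces to the familiar qubit RX(theta) block, while the third level is left untouched; the real implementation swaps numpy for qml.math dispatch so the same code runs on TensorFlow, Torch, Autograd, and JAX tensors.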
Benefits:
Implementing backprop allows for much faster differentiation of qutrit circuits, which will be very helpful for differentiating larger circuits in a reasonable amount of time. The changes to the TRP gates will also be useful for the planned default.qutrit.mixed device, since the increased resource requirements of noisy simulation make parameter-shift differentiation much less appealing.
Possible Drawbacks:
Two small static methods copied from default_mixed increase code duplication (a code smell).
The interfaces made usable for default_qutrit have fewer tests than on other devices.
Related GitHub Issues:
N/A