
Define a set_interface function to replace the concept of Qibo's set_backend #24

Open
MatteoRobbiati opened this issue Jun 14, 2024 · 3 comments

Comments

@MatteoRobbiati
Contributor

As the title says, if needed.

@alecandido
Member

alecandido commented Jun 14, 2024

What if we start phasing out global objects?

In this case in particular, since the gradient will be computed independently of the interface, and the interface will just be used downstream to consume it, maybe you don't need to specify the interface at all until you actually use it.
I.e. just specify it in the expectation() call :)

(In principle, you don't even need a backend until you execute, but that would require passing one in every circuit call - unless you cache a backend in the Circuit object itself)
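A rough sketch of what the per-call idea could look like (the `expectation` name and signature here are only illustrative, not the actual qiboml API; the point is that the interface is resolved at the point of use rather than via a global setter):

```python
# Hypothetical sketch: resolve the interface inside the call that needs it,
# instead of relying on a process-wide set_backend / set_interface.
import numpy as np

def expectation(observable, state, interface="numpy"):
    """Compute <state|observable|state> and return it in the requested interface."""
    value = np.real(np.vdot(state, observable @ state))
    if interface == "numpy":
        return value
    if interface == "torch":
        import torch  # imported lazily, only when actually requested
        return torch.tensor(value)
    if interface == "jax":
        import jax.numpy as jnp
        return jnp.asarray(value)
    raise ValueError(f"Unknown interface: {interface}")

# Usage: nothing global to configure, the caller picks the interface per call.
obs = np.diag([1.0, -1.0])
state = np.array([1.0, 0.0], dtype=complex)
print(expectation(obs, state, interface="numpy"))
```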

@MatteoRobbiati
Contributor Author

> (In principle, you don't even need a backend until you execute, but that would require passing one in every circuit call - unless you cache a backend in the Circuit object itself)

This is probably true. I'll try to drop the interface definition and test all the interfaces by implementing algorithms directly.

@MatteoRobbiati
Contributor Author

> In this case in particular, since the gradient will be computed independently of the interface, and the interface will just be used downstream to consume it, maybe you don't need to specify the interface at all until you actually use it. I.e. just specify it in the expectation() call :)

Step back: the Hamiltonian object right now is backend (interface 🙈) dependent. E.g. ham.matrix is a torch.Tensor if the Hamiltonian is initialized after setting qibo.set_backend("pytorch"). This is required if you want to run symbolical_with_torch. The same holds if you want to run symbolical_with_jax: the interface has to be jax-friendly.
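A minimal sketch of the behaviour described above, assuming qibo with the pytorch backend installed and using the built-in TFIM hamiltonian purely as an example:

```python
import qibo
from qibo import hamiltonians

# Selecting the pytorch backend makes the Hamiltonian data backend-dependent.
qibo.set_backend("pytorch")
ham = hamiltonians.TFIM(nqubits=3, h=1.0)

# ham.matrix is now a torch tensor, which is what a torch-based
# execution (e.g. symbolical_with_torch) expects to consume.
print(type(ham.matrix))
```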
