
Migrating Pytorch backend to Qiboml #1510

Open · MatteoRobbiati wants to merge 19 commits into base: master

Conversation

@MatteoRobbiati (Contributor) commented Oct 30, 2024

Checklist:

  • Reviewers confirm new code works as expected.
  • Tests are passing.
  • Coverage does not decrease.
  • Documentation is updated.

Some things still to be discussed, which are not yet touched in this PR:

  1. I think we should fix the procedure based on this function, otherwise we will always need to depend on Torch;
  2. In examples like this one we define a torch tensor which is then provided to our circuit. I would like to drop any dependence on pytorch in Qibo, so I propose deleting this kind of advanced example and pointing to the Qiboml documentation instead (where we will rewrite dedicated examples). Otherwise, we would have to build the torch tensor by doing something like the following, which is very confusing and we must avoid (a sketch of the kind of advanced example in question is given after the snippet below):
from qibo.backends import construct_backend

backend = construct_backend(backend="qiboml", platform="pytorch")

# create a torch tensor through the backend, where torch is exposed as `np`
tensor = backend.np.ones( .... )
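
For concreteness, the kind of advanced example mentioned in point 2 looks roughly like the sketch below: a torch tensor is built directly and then passed to a Qibo circuit as its parameters. This is only a hypothetical sketch (the backend call and parameter names are assumptions, not the exact docs example), but it shows the explicit torch dependence I would like to drop:

import torch

import qibo
from qibo import Circuit, gates

# assumption: the torch-based backend is active, with the naming used in this PR
qibo.set_backend(backend="qiboml", platform="pytorch")

# parameters defined directly as a torch tensor
angles = torch.ones(2, requires_grad=True)

circuit = Circuit(1)
circuit.add(gates.RX(0, theta=0.0))
circuit.add(gates.RY(0, theta=0.0))
circuit.set_parameters(angles)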

codecov bot commented Oct 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 99.69%. Comparing base (ea6984d) to head (832d43c).

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1510      +/-   ##
==========================================
- Coverage   99.72%   99.69%   -0.03%     
==========================================
  Files          80       79       -1     
  Lines       11784    11631     -153     
==========================================
- Hits        11752    11596     -156     
- Misses         32       35       +3     
Flag        Coverage Δ
unittests   99.69% <100.00%> (-0.03%) ⬇️

Comment on lines +10 to +18
+------------------+------+---------+------------+---------+
| Operating System | qibo | qibojit | Tensorflow | Pytorch |
+==================+======+=========+============+=========+
| Linux x86        | Yes  | Yes     | Yes        | Yes     |
+------------------+------+---------+------------+---------+
| MacOS >= 10.15   | Yes  | Yes     | Yes        | Yes     |
+------------------+------+---------+------------+---------+
| Windows          | Yes  | Yes     | Yes        | Yes     |
+------------------+------+---------+------------+---------+
Contributor

Should this be updated to account for the CPU / GPU subdivision? Both qibojit and qiboml have backends that work on both.

Contributor Author

I don't know, since this is already shown in the backends diagram.

@renatomello self-requested a review on November 4, 2024 at 08:17
@MatteoRobbiati added this to the Qibo 0.2.14 milestone on Nov 6, 2024
@renatomello (Contributor) commented Nov 6, 2024

  1. I think we should fix the procedure based on this function, otherwise we will always need to depend on Torch;

That function is needed for some methods of gates.Unitary to work properly with all the possible input types the matrix can have (i.e. np.ndarray, tf.Tensor, torch.Tensor, cp.ndarray). I'm not sure there is a way out of this dependency without creating a gate dependency on backends. One immediate thing we could do is to rename np to platform in all backends, since that is the name we give in set_backend anyway.
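
As a rough illustration of why such a cast is needed (a hypothetical sketch, not qibo's actual implementation), the matrix handed to gates.Unitary may arrive as any of those array types and has to be normalised before the backend can work with it:

import numpy as np


def matrix_to_numpy(matrix):
    """Best-effort conversion of an arbitrary array-like matrix to numpy.

    Hypothetical helper for illustration only; the real casting logic
    lives inside the backends.
    """
    if isinstance(matrix, np.ndarray):
        return matrix
    if hasattr(matrix, "detach"):  # torch.Tensor
        return matrix.detach().cpu().numpy()
    if hasattr(matrix, "get"):  # cupy.ndarray
        return matrix.get()
    if hasattr(matrix, "numpy"):  # eager tf.Tensor
        return matrix.numpy()
    return np.asarray(matrix)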

preferred package is installed following `Tensorflow's <https://www.tensorflow.org/install>`_
or `Pytorch's <https://pytorch.org/get-started/locally/>`_ installation instructions.

To switch to Tensorflow or Pytorch backend please do:
Contributor

Not sure whether it is worth having details about the ML backends here. I would probably place this in the qiboml docs and link it from here.

Contributor Author

I agree, I would remove these lines. It is probably better to wait until the Qiboml docs are ready, so that we can just link to the proper section.
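
For reference, the backend-switching snippet that these doc lines introduce presumably boils down to something like the following (a sketch assuming the qiboml platform naming used elsewhere in this PR, not necessarily the exact docs content):

import qibo

# switch to the torch-based backend provided through qiboml
qibo.set_backend(backend="qiboml", platform="pytorch")

# or to the tensorflow-based one
qibo.set_backend(backend="qiboml", platform="tensorflow")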

Comment on lines +72 to +75
if backend.platform in ["tensorflow", "pytorch"]:
    pytest.skip(
        "Tensorflow and Pytorch do not support operations with sparse matrices."
    )
Contributor

This is not really for this PR, but I think that both of them support sparse matrices to some extent:
https://pytorch.org/docs/stable/generated/torch.sparse.mm.html
https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/sparse-mat-mul

I am not sure what was causing problems here, but I guess it is more a matter of how the operation is called rather than it being impossible to perform. Anyway, this is probably out of the scope of this PR.
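
A minimal sketch of the torch-side support mentioned above (assuming a reasonably recent torch version; this is not how qibo calls it, just an illustration that sparse-dense products are available):

import torch

# build a small sparse matrix in COO format
indices = torch.tensor([[0, 1], [1, 0]])
values = torch.tensor([1.0, 2.0])
sparse = torch.sparse_coo_tensor(indices, values, size=(2, 2))

dense = torch.eye(2)

# sparse @ dense matrix multiplication
result = torch.sparse.mm(sparse, dense)
print(result)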

Contributor Author

I just opened an issue with this comment.

(backend.name != "pytorch") or (backend.platform != "tensorflow")
backend.platform not in ["tensorflow", "pytorch"]
Contributor

This could become a property of the backend, e.g.:

@property
def support_differentiation(self):
    return False

to avoid checking explicitly which backends this works for. But again, this might be out of scope here.
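
Hypothetical usage in the tests, with the property name as suggested above (not an existing qibo API, and the test name is made up):

import pytest


def test_variational_model(backend):
    if not backend.support_differentiation:
        pytest.skip("Backend does not support differentiation.")
    ...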

Contributor Author

I think the best thing for Qibo would be to remove all these tests about variational models.

@MatteoRobbiati (Contributor Author)

On my side, I think we can stop the modifications here for this PR. We still need to remove a lot of examples and subsections from the docs, but I would prefer to do that after we populate the Qiboml documentation; otherwise we will have gaps in the documentation. What do you think, @renatomello, @BrunoLiegiBastonLiegi?

@BrunoLiegiBastonLiegi (Contributor)

We need to update this https://github.com/qiboteam/qiboml/blob/800e800d54f4962f4890716c4b1539014c69c1b5/src/qiboml/backends/__init__.py#L3

BTW, is the PytorchBackend of qiboml updated, or are we still waiting for qiboteam/qiboml#30?

@MatteoRobbiati (Contributor Author)

We need to update this https://github.com/qiboteam/qiboml/blob/800e800d54f4962f4890716c4b1539014c69c1b5/src/qiboml/backends/__init__.py#L3

Done in qiboml, thanks!

BTW, is the PytorchBackend of qiboml updated, or are we still waiting for qiboteam/qiboml#30?

That was outdated. Just closed!

@BrunoLiegiBastonLiegi (Contributor)

Okay, it should just be a matter of regenerating the lock file here now.

@renatomello (Contributor)

qibo is importing the torch backend from qiboml and qiboml is importing the torch backend from qibo, i.e. a circular import:

https://github.com/qiboteam/qibo/actions/runs/11888475398/job/33209011064?pr=1510

@MatteoRobbiati (Contributor Author)

qibo is importing the torch backend from qiboml and qiboml is importing the torch backend from qibo

This should have been solved by merging qiboteam/qiboml#52 (comment) yesterday. Rerunning the tests to double-check :)

@MatteoRobbiati (Contributor Author)

I just removed a gradient test from here and opened a dedicated issue in qiboml.
Also, the test_list_available_backends should be fixed here, so after merging the master into your branch, @renatomello, the problem mentioned in #1520 should be solved.

@renatomello (Contributor)

I just removed a gradient test from here and opened a dedicated issue in qiboml. Also, the test_list_available_backends should be fixed here, so after merging the master into your branch, @renatomello, the problem mentioned in #1520 should be solved.

You mean after this PR is merged, right?

@MatteoRobbiati (Contributor Author)

You mean after this PR is merged, right?

Yes, I mean this PR should solve that problem too.
