Distilled ZairaChem models in ONNX format #32
We will start by testing Olinda again @JHlozek (see: ersilia-os/olinda#3)
I've been working on this and currently I have Olinda installed in the ZairaChem environment (which required some dependency conflict resolution, as usual). There are still several improvements to work on next.
Thanks @JHlozek, this is great.
Olinda updates: As a test, I trained a model on H3D data up to June 2023, including 1k pre-calculated reference descriptors, and then predicted prospective compounds from the next 6 months. Here is a scatter plot comparing the distilled and ZairaChem model predictions, and a ROC curve for the distilled model on the prospective data. To facilitate testing, I have written code that prepares a folder of pre-calculated descriptors for 1k molecules, which can be run in the first cell of the demo notebook.
I suggest testing this and then closing #3 to keep the conversation centralized here.
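As a rough illustration of the prospective evaluation described above, the sketch below computes the agreement between the ZairaChem (teacher) and distilled (student) predictions, and the distilled model's ROC AUC on prospective labels. The function name and input arrays are illustrative and not part of the Olinda codebase.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_distilled(y_true, zairachem_scores, distilled_scores):
    """Compare distilled (ONNX) predictions against the original
    ZairaChem scores and evaluate the distilled model prospectively.
    All three arrays are hypothetical inputs aligned by compound."""
    # Agreement between teacher and student predictions (Pearson r,
    # corresponding to the scatter-plot comparison)
    agreement = np.corrcoef(zairachem_scores, distilled_scores)[0, 1]
    # Prospective classification performance of the distilled model
    auc = roc_auc_score(y_true, distilled_scores)
    return agreement, auc
```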
Next steps:
This is very interesting and promising, @JHlozek!
Summary of the weekly meeting: the distilled models look good but there seems to be a bit of underfitting as we add external data, so we need to make the ONNX model a bit more complex.
Hi @JHlozek, I have a dataset that contains IC50 data for P. falciparum: over 17K molecules with Active (1) and Inactive (0) defined at two cut-offs (hc = high cut-off, 2.5 uM; lc = low cut-off, 10 uM). They are curated from ChEMBL; all public data.
This looks pretty good @GemmaTuron - many thanks.
Some updates for Olinda that we spoke about yesterday. I have been working on improving the speed of the Olinda pipeline by addressing the list of steps above that need to be run at runtime. I am concurrently writing the script that converts a given reference list of SMILES into the expected directory structure.
Overall, the pipeline has gone from >10 hours for 50k reference molecules to ~45 minutes. Half an hour of this is still due to the TabPFN step, which we may want to discuss addressing further in future. Next, I am working on implementing the sample weights to weight the original model's training set higher than the general reference SMILES.
Fantastic @JHlozek, thanks for the updates. Let's address TabPFN in our meeting. About the weighting scheme: does it seem difficult?
Thanks @miquelduranfrigola. Some more updates: The weighting is now implemented and wasn't too difficult - the generators just need to return a third value, which KerasTuner automatically treats as the sample weight. At the moment I compute the proportion of training compounds relative to the reference library and use the inverse as the weight. I'm exploring extending this weighting scheme to account for the large difference between low-scoring and high-scoring compounds. I now have 200k compounds pre-calculated. We should maybe start thinking about how we store and serve these (e.g. from an S3 bucket).
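A minimal sketch of the weighting scheme described above, assuming a plain Python generator (the actual Olinda generators may differ): Keras interprets the third element of each yielded batch as per-sample weights, and the training-set weight is the inverse of the training set's proportion relative to the reference library. All function names here are illustrative.

```python
import numpy as np

def inverse_proportion_weight(n_train, n_reference):
    """Weight for original training compounds: the inverse of their
    proportion relative to the reference library, so a small training
    set is not swamped by the general reference SMILES."""
    return n_reference / n_train

def weighted_batches(x_train, y_train, x_ref, y_ref, batch_size=32):
    """Yield (x, y, sample_weight) triples; Keras/KerasTuner treats the
    third element of a generator's output as per-sample weights."""
    x = np.concatenate([x_train, x_ref])
    y = np.concatenate([y_train, y_ref])
    w_train = inverse_proportion_weight(len(x_train), len(x_ref))
    w = np.concatenate([
        np.full(len(x_train), w_train),  # up-weight original training set
        np.ones(len(x_ref)),             # reference compounds get weight 1
    ])
    idx = np.random.permutation(len(x))
    for start in range(0, len(x), batch_size):
        b = idx[start:start + batch_size]
        yield x[b], y[b], w[b]
```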
Very interesting. Thanks @JHlozek 100% agree that we need to have a data store for ZairaChem descriptors, and the right place to put this is S3. In principle, it should not be too difficult - they are in HDF5 format, correct? Tagging @DhanshreeA so she is in the loop.
Most of the descriptors are .h5 files. The two bidd-molmap files are NumPy array files, and there are some txt files in the formats that ZairaChem expects. We might want to zip each 50k fold of data to a single file. I'm going to remove the duplication of the Grover embedding by pointing the manifolds to /grover-embedding/raw.h5 instead of the separate reference.h5 file.
Fantastic. Definitely, we need to keep these as zip files in S3 and perhaps write a short script to fetch those files easily?
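A short fetch script along the lines suggested above could look like the sketch below. The bucket URL, zip layout, and function names are hypothetical, and it assumes the descriptor folds are served as public zip archives over HTTPS.

```python
import io
import urllib.request
import zipfile
from pathlib import Path

def extract_fold(zip_source, dest):
    """Extract all members of a zipped descriptor fold into dest."""
    with zipfile.ZipFile(zip_source) as zf:
        zf.extractall(dest)

def fetch_fold(url, dest_dir):
    """Download a zipped descriptor fold from S3 (public HTTPS URL)
    and extract it into dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    extract_fold(io.BytesIO(data), dest)
    return dest
```

Usage would then be a one-liner, with a placeholder bucket name: `fetch_fold("https://<bucket>.s3.amazonaws.com/zairachem/fold_000.zip", "descriptors/fold_000")`.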
Hi @JHlozek, as we near the completion of Olinda for ZairaChem, can you summarise in this issue the current status and performance of Olinda, so we keep all information up to date and can close the issue once the tasks are completed? Related to this issue, we are working on #46 and also on Olinda issues 3 and 7.
The core of the Olinda pipeline is now complete and has been tested under various training configurations to identify a good initial setup for a v1 of the package (#46). Currently, the pipeline focuses on distilling ZairaChem models but can be extended to Ersilia Model Hub models once an adapter is developed to process the variable model output. The resulting ONNX surrogate models are lightweight (<5 MB) and very fast while maintaining the majority of the ZairaChem model performance. A simple wrapper API is available for the ONNX models (https://github.com/JHlozek/olinda_api) to facilitate programmatic use and will also be incorporated into ZairaChem CLI commands. Next steps:
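For programmatic use, running a distilled ONNX model is straightforward with onnxruntime. The sketch below is illustrative only: the olinda_api wrapper's actual interface may differ, and the model path and single-input assumption are placeholders.

```python
import numpy as np

def to_batch(descriptors):
    """Coerce a list of per-compound descriptor vectors into the
    float32 2-D array that an ONNX session expects."""
    x = np.asarray(descriptors, dtype=np.float32)
    return x.reshape(1, -1) if x.ndim == 1 else x

def predict(model_path, descriptors):
    """Run a distilled ONNX surrogate model with onnxruntime,
    assuming the model takes a single tensor input."""
    import onnxruntime as ort
    sess = ort.InferenceSession(model_path)
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: to_batch(descriptors)})[0]
```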
Thanks @JHlozek
I think this issue is ready to be closed once @JHlozek's forks have been merged?
I see that the Olinda README has a small merge conflict, but I don't want to mess with the PR now. I'll get back to any questions/comments/suggestions that you raise when I am back in the office on Tuesday, or whenever you get to it (no pressure from my side).
Motivation
ZairaChem models are large and will always be large, since ZairaChem uses an ensemble-based approach. Nonetheless, we would like to offer the opportunity to distill ZairaChem models for easier deployment, especially in online inference. We'd like to do it in an interoperable format such as ONNX.
The Olinda package
Our colleague @leoank already contributed a fantastic package named Olinda that we could, in principle, use for this purpose. Olinda takes an arbitrary model (in this case, a ZairaChem model) and produces a much simpler model, stored in ONNX format. Olinda uses a reference library to do the teacher/student training and is nicely coupled with other tools that @leoank developed, such as ChemXOR for privacy-preserving AI and the Ersilia Compound Embedding, which provides dense 1024-dimensional embeddings.
Roadmap