
updating the notebook after running it at DLMBL2024 (#149)
* updating the notebook after running it at DLMBL2024

Co-authored-by: Diane Adjavon <[email protected]>
Co-authored-by: AlbertDominguez <[email protected]>
Co-authored-by: Anna Foix <[email protected]>

* adding template for the phase contrast demo

* adding custom viscy["examples"] installation

* updating the DLMBL2024 exercise and solution ipynb

* updating image2image example

* adding nbconvert and updating ipynb img2img example

* fixing the img2img outline

* update structure and links of main README

* updating readme links for the pngs

* - removing the 'redundant' readme.md
- adding the phase contrast example
- saving the PhC notebook with images

* update setup.sh to handle 0.2 versions only

* collapse citations, ignore slurm

* clean up citations

* addressing Ziwen's comments

* cleanup the img2img notebook after testing

* updating the inference demo scripts

* - using absolute paths in the root README
- adding descriptions to the docstrings of the inference demo scripts

---------

Co-authored-by: Diane Adjavon <[email protected]>
Co-authored-by: AlbertDominguez <[email protected]>
Co-authored-by: Anna Foix <[email protected]>
Co-authored-by: Shalin Mehta <[email protected]>
Co-authored-by: Shalin Mehta <[email protected]>
6 people authored Sep 6, 2024
1 parent c5b9ab6 commit ac437af
Showing 24 changed files with 11,991 additions and 1,394 deletions.
5 changes: 4 additions & 1 deletion .gitignore
@@ -42,5 +42,8 @@ coverage.xml
.hypothesis/
.pytest_cache/

# SLURM
slurm*.out

#lightning_logs directory
lightning_logs/
lightning_logs/
126 changes: 66 additions & 60 deletions README.md
@@ -1,87 +1,94 @@
# VisCy

VisCy is a deep learning pipeline for training and deploying computer vision models for image-based phenotyping at single-cell resolution.

The following methods are being developed:
VisCy (an abbreviation of `vision` and `cyto`) is a deep learning pipeline for training and deploying computer vision models for image-based phenotyping at single-cell resolution.

This repository provides a pipeline for the following:
- Image translation
  - Robust virtual staining of landmark organelles
- Image classification
  - Supervised learning of cell state (e.g. state of infection)
- Image representation learning
  - Self-supervised learning of the cell state and organelle phenotypes

<div style="border: 2px solid orange; padding: 10px; border-radius: 5px; background-color: #fff8e1;">
<strong>Note:</strong><br>
VisCy is currently considered alpha software and is under active development. Frequent breaking changes are expected.
</div>
> **Note:**
> VisCy has been extensively tested for the image translation task. The code for the other tasks is under active development. Frequent breaking changes are expected in the main branch as we unify the codebase for the above tasks. If you are looking for a well-tested version for virtual staining, please use release `0.2.1` from PyPI.
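
For reference, installing that pinned release might look like the following (a sketch; the `examples` extra is assumed from the repository's packaging and may not be available for every release):

```sh
# Install the well-tested virtual staining release from PyPI
pip install viscy==0.2.1

# Optionally pull in the dependencies used by the example notebooks
# (the "examples" extra is an assumption; check pyproject.toml for the extras actually defined)
pip install "viscy[examples]==0.2.1"
```
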
## Virtual staining
### Pipeline
A full illustration of the virtual staining pipeline can be found [here](docs/virtual_staining.md).

### Library of virtual staining (VS) models
The robust virtual staining models (i.e., *VSCyto2D*, *VSCyto3D*, *VSNeuromast*) and fine-tuned models can be found [here](https://github.com/mehta-lab/VisCy/wiki/Library-of-virtual-staining-(VS)-Models).
## Virtual staining

### Demos
#### Image-to-Image translation using VisCy
- [Guide for Virtual Staining Models](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D* and *VSNeuromast*)
- [Virtual staining exercise](https://github.com/mehta-lab/VisCy/blob/46beba4ecc8c4f312fda0b04d5229631a41b6cb5/examples/virtual_staining/dlmbl_exercise/solution.ipynb):
Notebook illustrating how to use VisCy to train, predict and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course and uses the UNeXt2 architecture.

- [Image translation Exercise](./dlmbl_exercise/solution.py):
Example showing how to use VisCy to train, predict and evaluate the VSCyto2D model. This notebook was developed for the [DL@MBL2024](https://github.com/dlmbl/DL-MBL-2024) course.
- [Image translation demo](https://github.com/mehta-lab/VisCy/blob/92215bc1387316f3af49c83c321b9d134d871116/examples/virtual_staining/img2img_translation/solution.ipynb): Fluorescence images can be predicted from label-free images. Can we predict label-free images from fluorescence? Find out using this notebook.

- [Virtual staining exercise](./img2img_translation/solution.py): Explore the label-free-to-fluorescence virtual staining and fluorescence-to-label-free image translation tasks using VisCy's UNeXt2.
More usage examples and demos can be found [here](https://github.com/mehta-lab/VisCy/blob/b7af9687c6409c738731ea47f66b74db2434443c/examples/virtual_staining/README.md).
- [Training Virtual Staining Models via CLI](https://github.com/mehta-lab/VisCy/wiki/virtual-staining-instructions):
Instructions for how to train and run inference on VisCy's virtual staining models (*VSCyto3D*, *VSCyto2D* and *VSNeuromast*), as sketched below.
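
A rough sketch of the CLI workflow covered by that guide (the subcommands follow the standard Lightning CLI layout that `viscy --help` lists; the configuration file names are hypothetical):

```sh
# List the available subcommands (fit, validate, test, predict)
viscy --help

# Train a virtual staining model from a YAML configuration (hypothetical file name)
viscy fit -c train_config.yml

# Run inference with a trained checkpoint referenced in the config (hypothetical file name)
viscy predict -c predict_config.yml
```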

### Gallery
Below are some examples of virtually stained images (click to play videos).
See the full gallery [here](https://github.com/mehta-lab/VisCy/wiki/Gallery).

| VSCyto3D | VSNeuromast | VSCyto2D |
|:---:|:---:|:---:|
| [![HEK293T](docs/figures/svideo_1.png)](https://github.com/mehta-lab/VisCy/assets/67518483/d53a81eb-eb37-44f3-b522-8bd7bddc7755) | [![Neuromast](docs/figures/svideo_3.png)](https://github.com/mehta-lab/VisCy/assets/67518483/4cef8333-895c-486c-b260-167debb7fd64) | [![A549](docs/figures/svideo_5.png)](https://github.com/mehta-lab/VisCy/assets/67518483/287737dd-6b74-4ce3-8ee5-25fbf8be0018) |
| [![HEK293T](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_1.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/d53a81eb-eb37-44f3-b522-8bd7bddc7755) | [![Neuromast](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_3.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/4cef8333-895c-486c-b260-167debb7fd64) | [![A549](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/figures/svideo_5.png?raw=true)](https://github.com/mehta-lab/VisCy/assets/67518483/287737dd-6b74-4ce3-8ee5-25fbf8be0018) |

### Reference

The virtual staining models and training protocols are reported in our recent [preprint on robust virtual staining](https://www.biorxiv.org/content/10.1101/2024.05.31.596901):

```bibtex
@article {Liu2024.05.31.596901,
author = {Liu, Ziwen and Hirata-Miyasaki, Eduardo and Pradeep, Soorya and Rahm, Johanna and Foley, Christian and Chandler, Talon and Ivanov, Ivan and Woosley, Hunter and Lao, Tiger and Balasubramanian, Akilandeswari and Liu, Chad and Leonetti, Manu and Arias, Carolina and Jacobo, Adrian and Mehta, Shalin B.},
title = {Robust virtual staining of landmark organelles},
elocation-id = {2024.05.31.596901},
year = {2024},
doi = {10.1101/2024.05.31.596901},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901},
eprint = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901.full.pdf},
journal = {bioRxiv}
}
```

This package evolved from the [TensorFlow version of virtual staining pipeline](https://github.com/mehta-lab/microDL), which we reported in [this paper in 2020](https://elifesciences.org/articles/55502):

```bibtex
@article {10.7554/eLife.55502,
article_type = {journal},
title = {Revealing architectural order with quantitative label-free imaging and deep learning},
author = {Guo, Syuan-Ming and Yeh, Li-Hao and Folkesson, Jenny and Ivanov, Ivan E and Krishnan, Anitha P and Keefe, Matthew G and Hashemi, Ezzat and Shin, David and Chhun, Bryant B and Cho, Nathan H and Leonetti, Manuel D and Han, May H and Nowakowski, Tomasz J and Mehta, Shalin B},
editor = {Forstmann, Birte and Malhotra, Vivek and Van Valen, David},
volume = 9,
year = 2020,
month = {jul},
pub_date = {2020-07-27},
pages = {e55502},
citation = {eLife 2020;9:e55502},
doi = {10.7554/eLife.55502},
url = {https://doi.org/10.7554/eLife.55502},
keywords = {label-free imaging, inverse algorithms, deep learning, human tissue, polarization, phase},
journal = {eLife},
issn = {2050-084X},
publisher = {eLife Sciences Publications, Ltd},
}
```
The virtual staining models and training protocols are reported in our recent [preprint on robust virtual staining](https://www.biorxiv.org/content/10.1101/2024.05.31.596901).


This package evolved from the [TensorFlow version of virtual staining pipeline](https://github.com/mehta-lab/microDL), which we reported in [this paper in 2020](https://elifesciences.org/articles/55502).

<details>
<summary>Liu, Hirata-Miyasaki et al., 2024</summary>

<pre><code>
@article {Liu2024.05.31.596901,
author = {Liu, Ziwen and Hirata-Miyasaki, Eduardo and Pradeep, Soorya and Rahm, Johanna and Foley, Christian and Chandler, Talon and Ivanov, Ivan and Woosley, Hunter and Lao, Tiger and Balasubramanian, Akilandeswari and Liu, Chad and Leonetti, Manu and Arias, Carolina and Jacobo, Adrian and Mehta, Shalin B.},
title = {Robust virtual staining of landmark organelles},
elocation-id = {2024.05.31.596901},
year = {2024},
doi = {10.1101/2024.05.31.596901},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901},
eprint = {https://www.biorxiv.org/content/early/2024/06/03/2024.05.31.596901.full.pdf},
journal = {bioRxiv}
}
</code></pre>
</details>

<details>
<summary>Guo, Yeh, Folkesson et al., 2020</summary>

<pre><code>
@article {10.7554/eLife.55502,
article_type = {journal},
title = {Revealing architectural order with quantitative label-free imaging and deep learning},
author = {Guo, Syuan-Ming and Yeh, Li-Hao and Folkesson, Jenny and Ivanov, Ivan E and Krishnan, Anitha P and Keefe, Matthew G and Hashemi, Ezzat and Shin, David and Chhun, Bryant B and Cho, Nathan H and Leonetti, Manuel D and Han, May H and Nowakowski, Tomasz J and Mehta, Shalin B},
editor = {Forstmann, Birte and Malhotra, Vivek and Van Valen, David},
volume = 9,
year = 2020,
month = {jul},
pub_date = {2020-07-27},
pages = {e55502},
citation = {eLife 2020;9:e55502},
doi = {10.7554/eLife.55502},
url = {https://doi.org/10.7554/eLife.55502},
keywords = {label-free imaging, inverse algorithms, deep learning, human tissue, polarization, phase},
journal = {eLife},
issn = {2050-084X},
publisher = {eLife Sciences Publications, Ltd},
}
</code></pre>
</details>

### Library of virtual staining (VS) models
The robust virtual staining models (i.e., *VSCyto2D*, *VSCyto3D*, *VSNeuromast*) and fine-tuned models can be found [here](https://github.com/mehta-lab/VisCy/wiki/Library-of-virtual-staining-(VS)-Models).

### Pipeline
A full illustration of the virtual staining pipeline can be found [here](https://github.com/mehta-lab/VisCy/blob/dde3e27482e58a30f7c202e56d89378031180c75/docs/virtual_staining.md).


## Installation

@@ -118,8 +125,7 @@ publisher = {eLife Sciences Publications, Ltd},
viscy --help
```

## Contributing
For development installation, see [the contributing guide](CONTRIBUTING.md).
For development installation, see [the contributing guide](https://github.com/mehta-lab/VisCy/blob/main/CONTRIBUTING.md).

## Additional Notes
The pipeline is built using the [PyTorch Lightning](https://www.pytorchlightning.ai/index.html) framework.
18 changes: 0 additions & 18 deletions examples/virtual_staining/README.md

This file was deleted.

32 changes: 20 additions & 12 deletions examples/virtual_staining/VS_model_inference/demo_vscyto2d.py
@@ -11,31 +11,39 @@

from iohub import open_ome_zarr
from plot import plot_vs_n_fluor

# Viscy classes for the trainer and model
from viscy.data.hcs import HCSDataModule
from viscy.light.engine import FcmaeUNet
from viscy.light.predict_writer import HCSPredictionWriter
from viscy.light.trainer import VSTrainer
from viscy.transforms import NormalizeSampled

# %% [markdown]
"""
## Data and Model Paths
The dataset and model checkpoint files need to be downloaded before running this example.
"""
# %% [markdown] tags=[]
#
# <div class="alert alert-block alert-info">
#
# # Download the dataset and checkpoints for the VSCyto2D model
#
# - Download the VSCyto2D test dataset and model checkpoint from here: <br>
# https://public.czbiohub.org/comp.micro/viscy
# - Update the `input_data_path` and `model_ckpt_path` variables with the path to the downloaded files.
# - Select a FOV (e.g. 0/0/0).
# - Set an output path for the predictions.
#
# </div>

# %%
# Set download paths
# TODO: Set download paths
root_dir = Path("")
# Download from
# https://public.czbiohub.org/comp.micro/viscy/VSCyto2D/test/a549_hoechst_cellmask_test.zarr/
# TODO: modify the path to the downloaded dataset
input_data_path = root_dir / "VSCyto2D/test/a549_hoechst_cellmask_test.zarr"
# Download from GitHub release page of v0.1.0
model_ckpt_path = root_dir / "VisCy-0.1.0-VS-models/VSCyto2D/epoch=399-step=23200.ckpt"
# TODO: modify the path to the downloaded checkpoint
model_ckpt_path = "/epoch=399-step=23200.ckpt"
# TODO: modify the path
# Zarr store to save the predictions
output_path = root_dir / "./a549_prediction.zarr"
# FOV of interest
# TODO: Choose an FOV
fov = "0/0/0"

input_data_path = input_data_path / fov
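
# %% [markdown]
# The cell below is an illustrative sketch (not part of the original script) of how the
# paths defined above could be used to run inference with the classes imported at the top.
# The exact `HCSDataModule` arguments depend on the dataset, so they are left out here;
# the trainer and prediction-writer usage follows the standard Lightning pattern.

# %%
# Load the pretrained 2D virtual staining model from the downloaded checkpoint
# (assumes the hyperparameters are stored in the checkpoint file).
model = FcmaeUNet.load_from_checkpoint(model_ckpt_path)

# Write predictions to the output Zarr store while running inference.
trainer = VSTrainer(
    accelerator="gpu",
    devices=1,
    callbacks=[HCSPredictionWriter(output_path)],
)

# `data_module` should be an HCSDataModule configured for `input_data_path`;
# see the remainder of this script for the exact configuration.
# trainer.predict(model, datamodule=data_module, return_predictions=False)
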
30 changes: 23 additions & 7 deletions examples/virtual_staining/VS_model_inference/demo_vscyto3d.py
@@ -12,6 +12,7 @@
from iohub import open_ome_zarr
from plot import plot_vs_n_fluor
from viscy.data.hcs import HCSDataModule

# Viscy classes for the trainer and model
from viscy.light.engine import VSUNet
from viscy.light.predict_writer import HCSPredictionWriter
@@ -25,16 +26,31 @@
The dataset and model checkpoint files need to be downloaded before running this example.
"""

# %% [markdown] tags=[]
#
# <div class="alert alert-block alert-info">
#
# # Download the dataset and checkpoints for the VSCyto3D model
#
# - Download the VSCyto3D test dataset and model checkpoint from here: <br>
# https://public.czbiohub.org/comp.micro/viscy
# - Update the `input_data_path` and `model_ckpt_path` variables with the path to the downloaded files.
# - Select a FOV (e.g. plate/0/0).
# - Set an output path for the predictions.
#
# </div>
# %%
# Download from
# https://public.czbiohub.org/comp.micro/viscy/VSCyto3D/test/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr/
input_data_path = (
"VSCyto3D/test/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr"
)
# Download from GitHub release page of v0.1.0
model_ckpt_path = "VisCy-0.1.0-VS-models/VSCyto3D/epoch=48-step=18130.ckpt"
# TODO: modify the path to the downloaded dataset
input_data_path = "/no_pertubation_Phase1e-3_Denconv_Nuc8e-4_Mem8e-4_pad15_bg50.zarr"

# TODO: modify the path to the downloaded checkpoint
model_ckpt_path = "/epoch=48-step=18130.ckpt"

# TODO: modify the path
# Zarr store to save the predictions
output_path = "./hek_prediction_3d.zarr"

# TODO: Choose an FOV
# FOV of interest
fov = "plate/0/0"

30 changes: 23 additions & 7 deletions examples/virtual_staining/VS_model_inference/demo_vsneuromast.py
@@ -12,6 +12,7 @@
from iohub import open_ome_zarr
from plot import plot_vs_n_fluor
from viscy.data.hcs import HCSDataModule

# Viscy classes for the trainer and model
from viscy.light.engine import VSUNet
from viscy.light.predict_writer import HCSPredictionWriter
@@ -25,16 +26,31 @@
The dataset and model checkpoint files need to be downloaded before running this example.
"""

# %% [markdown] tags=[]
#
# <div class="alert alert-block alert-info">
#
# # Download the dataset and checkpoints
#
# - Download the neuromast test dataset and model checkpoint from here: <br>
# https://public.czbiohub.org/comp.micro/viscy
# - Update the `input_data_path` and `model_ckpt_path` variables with the path to the downloaded files.
# - Select a FOV (e.g. 0/3/0).
# - Set an output path for the predictions.
#
# </div>
# %%
# Download from
# https://public.czbiohub.org/comp.micro/viscy/VSNeuromast/test/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr/
input_data_path = (
"VSNeuromast/test/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr"
)
# Download from GitHub release page of v0.1.0
model_ckpt_path = "VisCy-0.1.0-VS-models/VSNeuromast/timelapse_finetine_1hr_dT_downsample_lr1e-4_45epoch_clahe_v5/epoch=44-step=1215.ckpt"
# TODO: modify the path to the downloaded dataset
input_data_path = "/20230803_fish2_60x_1_cropped_zyx_resampled_clipped_2.zarr"

# TODO: modify the path to the downloaded checkpoint
model_ckpt_path = "/epoch=44-step=1215.ckpt"

# TODO: modify the path
# Zarr store to save the predictions
output_path = "./test_neuromast_demo.zarr"

# TODO: Choose an FOV
# FOV of interest
fov = "0/3/0"

42 changes: 0 additions & 42 deletions examples/virtual_staining/dlmbl_exercise/convert-solution.py

This file was deleted.

