
Visualize feature maps from the contrastive encoder #162

Draft · wants to merge 21 commits into base: main
Conversation

@ziw-liu (Collaborator) commented on Sep 14, 2024

Added a script to perform PCA along the channel dimension to visualize multi-scale feature maps in the contrastive model's encoder stages.
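For context, a minimal sketch of channel-wise PCA on a single encoder stage is shown below. The array shape, function name, and the mapping of the first three components to RGB are assumptions for illustration and are not taken from the script in this PR.

```python
# Hedged sketch: project a (C, H, W) feature map onto its first 3 principal
# components along the channel axis and rescale to [0, 1] for display.
import numpy as np
from sklearn.decomposition import PCA


def pca_rgb(feature_map: np.ndarray) -> np.ndarray:
    """Channel-wise PCA of one encoder stage, returned as an (H, W, 3) image."""
    c, h, w = feature_map.shape
    # Treat each spatial location as a sample with C channel features.
    flat = feature_map.reshape(c, h * w).T  # (H*W, C)
    components = PCA(n_components=3).fit_transform(flat)  # (H*W, 3)
    rgb = components.reshape(h, w, 3)
    # Min-max normalize each component independently for rendering.
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= np.ptp(rgb, axis=(0, 1), keepdims=True) + 1e-8
    return rgb
```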

mattersoflight and others added 15 commits August 28, 2024 10:59
* notes on standard report

* Add code for generating figures

---------

Co-authored-by: Alishba Imran <[email protected]>
…y and features learned by embeddings (#140)

* notes on standard report

* add lib of computed features

* correlates PCA with computed features (see the sketch after this commit list)

* compute for all timepoints

* compute correlation

* remove cv library usage

* remove edge detection

* convert to dataframe

* for entire well

* add std_dev feature

* fix patch size

---------

Co-authored-by: Soorya Pradeep <[email protected]>
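A hedged sketch of the correlation step referenced in the "correlates PCA with computed features" item above. The variable names, shapes, and choice of Pearson correlation via pandas are assumptions for illustration, not the actual script.

```python
# Hedged sketch: correlate principal components of learned embeddings with
# hand-crafted (numeric) features computed per patch.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA


def correlate_pcs_with_features(
    embeddings: np.ndarray, features: pd.DataFrame, n_components: int = 8
) -> pd.DataFrame:
    """Pearson correlation between each principal component and each feature."""
    pcs = PCA(n_components=n_components).fit_transform(embeddings)
    pc_df = pd.DataFrame(pcs, columns=[f"PC{i + 1}" for i in range(n_components)])
    # Concatenate by position and take the PC-vs-feature block of the
    # full correlation matrix.
    joint = pd.concat([pc_df, features.reset_index(drop=True)], axis=1)
    return joint.corr().loc[pc_df.columns, features.columns]
```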
* remove obsolete training and prediction scripts

* lint contrastive scripts
* draft projection head per "Update the projection head (normalization and size)" (#139)

* reorganize comments in example fit config

* configurable stem stride and projection dimensions

* update type hint and docstring for ContrastiveEncoder

* clarify embedding_dim

* use the forward method directly for projected

* normalize projections only when fitting
the projected features saved during prediction are now *not* normalized (see the sketch after this commit list)

* remove unused logger

* refactor training code into translation and representation modules

* extract image logging functions

* use AdamW instead of Adam for contrastive learning

* inline single-use argument

* fix normalization

* fix MLP layer order

* fix output dimensions

* remove L2 normalization before computing loss

* compute rank of features and projections

* documentation

---------

Co-authored-by: Shalin Mehta <[email protected]>
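An illustrative sketch of the "normalize projections only when fitting" behavior noted in the commit list above. The `ContrastiveHead` wrapper and its method names are hypothetical and not the actual viscy API.

```python
# Hedged sketch: L2-normalize projections for the contrastive loss during
# fitting, but save unnormalized projections at prediction time.
import torch
import torch.nn.functional as F
from torch import nn


class ContrastiveHead(nn.Module):
    def __init__(self, encoder: nn.Module, projection: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.projection = projection

    def training_projection(self, x: torch.Tensor) -> torch.Tensor:
        # The loss operates on the unit hypersphere.
        return F.normalize(self.projection(self.encoder(x)), p=2, dim=-1)

    def predict_projection(self, x: torch.Tensor) -> torch.Tensor:
        # Saved as-is: *not* normalized, matching the change described above.
        return self.projection(self.encoder(x))
```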
* docstring

* move scripts from contrastive_scripts to viscy/scripts

* organize files in applications/contrastive_phenotyping

* delete unused evaluation code

* more cleanup

* refactor evaluation metrics for translation task

* refactor viscy.evaluation -> viscy.translation.evaluation_metrics and viscy.representation.evaluation

* WIP: representation evaluation module

* WIP: representation eval - docstrings in numpy format

* WIP: more documentation

* refactor: feature_extractor moved to viscy.representation.evaluation

* lint

* bug fix

* refactored common computations and dataset

* add imbalanced-learn dependency to metrics

* refactor classification of embeddings

* organize viscy.representation.evaluation

* ruff

* Soorya's plotting script

* WIP: combine two versions of plot_embeddings.py

* simplify viscy.representation.evaluation - move LCA to its own module

* refactor of viscy.representation.evaluation

* refactored and tested PCA and UMAP plots

---------

Co-authored-by: Soorya Pradeep <[email protected]>
…et contrastive task (#154)

* wip: sample positive and negative samples from another time point

* configure time interval in triplet data module

* vectorized anchor filtering

* conditional augmentation for anchor
anchor is augmented if the positive is from another time point (see the sketch after this commit list)

* example training script for the CTC dataset
this is optimized to run on MPS

* add example CTC prediction config for MPS
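A hedged sketch of the conditional anchor augmentation described in the commit list above. The function signature and argument names are hypothetical, not the triplet data module's real interface.

```python
# Hedged sketch: augment the anchor only when the positive comes from
# another time point; otherwise the positive is an augmented copy of the anchor.
from typing import Callable, Optional

import torch


def make_pair(
    anchor: torch.Tensor,
    positive_other_time: Optional[torch.Tensor],
    transform: Callable[[torch.Tensor], torch.Tensor],
) -> tuple[torch.Tensor, torch.Tensor]:
    if positive_other_time is not None:
        # Positive is the same track at a later time point: augment both views.
        return transform(anchor), transform(positive_other_time)
    # Classic self-supervised pairing: positive is an augmented copy of the anchor.
    return anchor, transform(anchor)
```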
@ziw-liu changed the base branch from main to representation on September 14, 2024 at 03:52
@ziw-liu added the representation (Representation learning (SSL)) label on Sep 21, 2024
@ziw-liu (Collaborator, Author) commented on Sep 27, 2024

Not urgent to merge at this moment.

@ziw-liu added the wontfix (This will not be worked on) label on Sep 27, 2024
Base automatically changed from representation to main on October 17, 2024 at 23:06
@mattersoflight added this to the v0.4.0 milestone on Oct 18, 2024
@mattersoflight (Member) commented:

@ziw-liu, feature visualization over time didn't serve its purpose in early iterations. Did you also try it after we started training time-aware models? Temporal smoothness in the output layer does not enforce temporal smoothness in the intermediate layers, which is interesting.
