Visualize feature maps from the contrastive encoder #162
Draft
ziw-liu wants to merge 21 commits into main from ssl-feature-maps
Conversation
* notes on standard report
* Add code for generating figures
---------
Co-authored-by: Alishba Imran <[email protected]>
…y and features learned by embeddings (#140)
* notes on standard report
* add lib of computed features
* correlates PCA with computed features
* compute for all timepoints
* compute correlation
* remove cv library usage
* remove edge detection
* convert to dataframe
* for entire well
* add std_dev feature
* fix patch size
---------
Co-authored-by: Soorya Pradeep <[email protected]>
* remove obsolete training and prediction scripts
* lint contrastive scripts
* draft projection head per "Update the projection head (normalization and size)" (#139)
* reorganize comments in example fit config
* configurable stem stride and projection dimensions
* update type hint and docstring for ContrastiveEncoder
* clarify embedding_dim
* use the forward method directly for projected
* normalize projections only when fitting; the projected features saved during prediction are now *not* normalized
* remove unused logger
* refactor training code into translation and representation modules
* extract image logging functions
* use AdamW instead of Adam for contrastive learning
* inline single-use argument
* fix normalization
* fix MLP layer order
* fix output dimensions
* remove L2 normalization before computing loss
* compute rank of features and projections
* documentation
---------
Co-authored-by: Shalin Mehta <[email protected]>
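As a rough sketch of the projection-head shape these commits describe, a two-layer MLP with batch normalization between the linear layers could look like the following (layer sizes are illustrative assumptions, not the dimensions used by ContrastiveEncoder):

```python
import torch.nn as nn

# Illustrative two-layer projection head; hidden and output sizes are assumptions.
projection_head = nn.Sequential(
    nn.Linear(768, 2048),
    nn.BatchNorm1d(2048),
    nn.ReLU(inplace=True),
    nn.Linear(2048, 128),  # projected features; not L2-normalized before the loss
)
```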
* docstring
* move scripts from contrastive_scripts to viscy/scripts
* organize files in applications/contrastive_phenotyping
* delete unused evaluation code
* more cleanup
* refactor evaluation metrics for translation task
* refactor viscy.evaluation -> viscy.translation.evaluation_metrics and viscy.representation.evaluation
* WIP: representation evaluation module
* WIP: representation eval - docstrings in numpy format
* WIP: more documentation
* refactor: feature_extractor moved to viscy.representation.evaluation
* lint
* bug fix
* refactored common computations and dataset
* add imbalance-learn dependency to metrics
* refactor classification of embeddings
* organize viscy.representation.evaluation
* ruff
* Soorya's plotting script
* WIP: combine two versions of plot_embeddings.py
* simplify representation.viscy.evaluation - move LCA to its own module
* refactor of viscy.representation.evaluation
* refactored and tested PCA and UMAP plots
---------
Co-authored-by: Soorya Pradeep <[email protected]>
…et contrastive task (#154)
* wip: sample positive and negative samples from another time point
* configure time interval in triplet data module
* vectorized anchor filtering
* conditional augmentation for anchor; the anchor is augmented if the positive is from another time point
* example training script for the CTC dataset, optimized to run on MPS
* add example CTC prediction config for MPS
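A minimal sketch of sampling a positive from another time point, as the triplet data module commits above describe (the DataFrame columns `track_id` and `t` and the fallback behavior are assumptions, not the module's actual API):

```python
import pandas as pd


def sample_positive(tracks: pd.DataFrame, anchor: pd.Series, interval: int) -> pd.Series:
    """Return the patch of the same track `interval` time points after the anchor,
    falling back to the anchor itself when the track does not extend that far."""
    later = tracks[
        (tracks["track_id"] == anchor["track_id"])
        & (tracks["t"] == anchor["t"] + interval)
    ]
    return later.iloc[0] if not later.empty else anchor
```

With a zero interval this reduces to the augmentation-only positive pair; with a nonzero interval the anchor itself is also augmented, per the commit above.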
Not urgent to merge at this moment.
@ziw-liu, feature visualization vs. time didn't serve the purpose in early iterations. Did you also try it after we started training time-aware models? Temporal smoothness in the output layer does not enforce temporal smoothness in the intermediate layers, which is interesting.
Added a script to perform PCA along the channel dimension to visualize multi-scale feature maps in the contrastive model's encoder stages.
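A minimal sketch of the channel-wise PCA idea (the function name and array layout are illustrative assumptions; the actual script in this PR may differ): reshape a (C, H, W) feature map so that each pixel is a C-dimensional sample, fit PCA across pixels, and map the first three components to an RGB image.

```python
import numpy as np
from sklearn.decomposition import PCA


def pca_to_rgb(feature_map: np.ndarray) -> np.ndarray:
    """Project a (C, H, W) feature map onto its first 3 principal components
    along the channel axis and rescale to [0, 1] for display as RGB."""
    c, h, w = feature_map.shape
    pixels = feature_map.reshape(c, h * w).T  # one C-dimensional sample per pixel
    components = PCA(n_components=3).fit_transform(pixels)  # (H*W, 3)
    components -= components.min(axis=0)
    components /= components.max(axis=0) + 1e-8
    return components.reshape(h, w, 3)
```

Applying the same projection to the output of each encoder stage gives the multi-scale view described above.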