This ticket is being written as part of the effort to refine #374.
We want a wrapper around tensorflow, owned by us, that can be used to train denoising models for the segmentation pipeline.
The module should conform to Pika's coding standards.
The module should be well tested.
The module should use our typical argschema-driven CLI.
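A minimal sketch of what the argschema-driven entry point might look like. All field names and defaults here are illustrative assumptions, not the actual schema:

```python
from argschema import ArgSchema, ArgSchemaParser
from argschema.fields import InputFile, Int, List, OutputDir


class TrainingInputSchema(ArgSchema):
    """Hypothetical input schema; field names are illustrative only."""
    movie_paths = List(
        InputFile, required=True,
        description="movie (or movies) to draw training data from")
    seed = Int(
        default=1234,
        description="seed for all random number generators")
    min_spacing = Int(
        default=61,
        description=("minimum spacing (in frames) between training "
                     "and validation frames"))
    output_dir = OutputDir(
        required=True,
        description="directory in which to write the trained model")


if __name__ == "__main__":
    parser = ArgSchemaParser(schema_type=TrainingInputSchema)
    # parser.args now holds the validated parameters
```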
We can use the training implementation in deepinterpolation as a guide. For reference:
- this JSON script
  /allen/programs/mindscope/workgroups/surround/denoising_labeling_2022/slurm_jsons/ensemble_input_even_smaller.json
  was used to configure the job that fine-tuned the ensemble model (the denoising model based on all 160 SSF experiments) used in the 2022 labeling effort.
- the JSON scripts here
  /allen/programs/mindscope/workgroups/surround/denoising_labeling_2022/bespoke_models/slurm_jsons
  were used to configure the jobs that fine-tuned the 160 individual models used in the 2022 labeling effort.
The module should accept as input a movie or a set of movies. It should select training and validation frames randomly from within the movie in a repeatable way (i.e. all random number generators should be deterministically seeded).
Given that the denoising model uses chunks of frames (a "frame of interest" plus the 30 frames before and the 30 frames after it), users should be able to specify a minimum spacing between training and validation frames (e.g., if they do not want the 30-frame windows to overlap).
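The deterministic, spacing-aware train/validation split could be sketched as below. This is an illustration, not the module's actual implementation; the function name and signature are assumptions:

```python
import numpy as np


def split_frames(n_frames, n_val, min_spacing, pre_post=30, seed=1234):
    """Deterministically split frame indices into training and validation.

    Frames within `pre_post` of the movie edges are excluded so every
    selected frame of interest has a full window of frames on each side.
    Training frames that fall within `min_spacing` of any validation
    frame are discarded; min_spacing >= 2 * pre_post + 1 guarantees the
    training and validation windows never overlap.
    """
    rng = np.random.default_rng(seed)
    candidates = np.arange(pre_post, n_frames - pre_post)
    rng.shuffle(candidates)

    # first n_val shuffled candidates become validation frames
    val = np.sort(candidates[:n_val])

    # keep only training frames far enough from every validation frame
    rest = candidates[n_val:]
    dist_to_val = np.abs(rest[:, None] - val[None, :]).min(axis=1)
    train = np.sort(rest[dist_to_val >= min_spacing])
    return train, val
```

Because the generator is seeded explicitly, repeated runs with the same arguments reproduce the same split.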
Components required (many of which can be taken from the inference implementation in #469):
- Data loader
- Validation
The module should be able to train a model either from scratch or in fine-tuning mode. The resulting model should be runnable through the inference module written in #469 to produce a denoised movie.
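Actual model construction and loading would go through tf.keras (e.g. tf.keras.models.load_model when fine-tuning), but the mode-selection logic can be sketched independently of TensorFlow. The class and field names below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainingMode:
    """Sketch of the two supported training modes (names are illustrative).

    - "scratch": build a fresh model with randomly initialized weights.
    - "fine_tune": start from an existing trained model (e.g. one loadable
      with tf.keras.models.load_model) and continue training on new movies.
    """
    mode: str
    base_model_path: Optional[str] = None  # required when mode == "fine_tune"

    def __post_init__(self):
        if self.mode not in ("scratch", "fine_tune"):
            raise ValueError(f"unknown training mode: {self.mode!r}")
        if self.mode == "fine_tune" and self.base_model_path is None:
            raise ValueError("fine_tune mode requires base_model_path")
```

Validating the mode up front means a misconfigured fine-tuning job fails before any data loading or training begins.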