Inpainting with CFM #123
Cool! I have not experimented with this. I'm curious whether you've tried the same strategy for flow matching? My feeling is that the same trick may work.
@atong01 I think it should, but with the caveat that you might have to integrate the clean target through the vector field (network) up to the current noisy timestep. There is also another possible approach: use the mask and clean target to zero out the vector field (network output), i.e. indicating that the unmasked regions have no derivative and don't change at any timestep, and then at every single network forward pass force the input to be the clean target under the mask. But with this approach the network input would be noisy in some regions and clean in others, which is a training/inference mismatch if the network was not trained with only some regions being corrupted.
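A minimal sketch of the two ideas above in one Euler sampling loop, assuming the linear interpolant x_t = (1 - t) * x0 + t * x1 used in CFM. Everything here is hypothetical: `model`, `inpaint_euler`, and the argument names are illustrative, not part of any library's API.

```python
import torch

def inpaint_euler(model, x_known, mask, n_steps=100):
    """Training-free inpainting sketch for CFM sampling (Euler integration).

    model(x, t): hypothetical vector-field network.
    x_known:     the clean target (only its masked pixels are trusted).
    mask:        1 on known (clean) regions, 0 on regions to generate.
    Assumes the linear interpolant x_t = (1 - t) * x0 + t * x1, x0 ~ N(0, I).
    """
    x0 = torch.randn_like(x_known)
    x = x0.clone()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        # Idea (a): overwrite known regions with the clean target,
        # "re-noised" to the current time along the interpolation path.
        x_t_known = (1 - t) * x0 + t * x_known
        x = mask * x_t_known + (1 - mask) * x
        v = model(x, torch.full(x.shape[:1], t))
        # Idea (b): zero the derivative on known regions so the ODE
        # step never moves them.
        v = (1 - mask) * v
        x = x + dt * v
    # Final overwrite so known pixels exactly match the target.
    return mask * x_known + (1 - mask) * x
```

Note that idea (b) on its own feeds the network inputs that are clean in some regions and noisy in others, which is exactly the train/inference mismatch discussed above; combining it with the re-noising of idea (a) keeps the known regions on the interpolation path the network saw during training.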
I think the training-free inpainting method would work for both SGM and FM; the sampling process is no different. Would you like to say more? RePaint is no longer the best training-free inpainting method; you could check out this flow-based RePaint method: https://arxiv.org/pdf/2310.04432
Been having great success using CFM over diffusion methods for audio tasks so far, kudos for the great library!
One thing I'm having trouble wrapping my head around is the most correct way to formulate the inpainting task.
With denoising diffusion, the RePaint method is extremely intuitive and works well in practice, but I think it's more complicated for flow?
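For reference, the "intuitive" RePaint stitching step in the DDPM setting can be sketched as follows. This is my own minimal sketch, not code from RePaint or any library; `repaint_combine_step` and its arguments are hypothetical names, and the noise schedule value is assumed given.

```python
import torch

def repaint_combine_step(x_prev_unknown, x0_known, mask, alpha_bar_prev):
    """One RePaint-style combine at step t-1 (DDPM setting, sketch only).

    x_prev_unknown: reverse-process sample for the unknown region.
    x0_known:       the clean image (only masked pixels are trusted).
    mask:           1 on known pixels, 0 on pixels to inpaint.
    alpha_bar_prev: cumulative schedule value at t-1 (0-dim tensor).
    """
    # Forward-noise the known pixels directly to level t-1:
    # x_{t-1}^known ~ N(sqrt(abar_{t-1}) * x0, (1 - abar_{t-1}) * I)
    noise = torch.randn_like(x0_known)
    x_prev_known = (alpha_bar_prev.sqrt() * x0_known
                    + (1 - alpha_bar_prev).sqrt() * noise)
    # Stitch: known pixels from the forward process, the rest from the
    # reverse process.
    return mask * x_prev_known + (1 - mask) * x_prev_unknown
```

The reason this is so intuitive for diffusion is that the forward process gives a closed-form marginal at every t, so the known region can be re-noised independently at each step; with an ODE-based flow sampler there is no stochastic forward process to sample from, which is what the interpolant-overwriting discussion above is working around.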