How about applying this conditional flow matching to tasks without paired training data? #127

Open
Luciennnnnnn opened this issue Jul 26, 2024 · 4 comments

Comments

@Luciennnnnnn

Luciennnnnnn commented Jul 26, 2024

Hi, very interesting work and a nice presentation in the paper. I'm interested in applying conditional flow matching to tasks without paired training data. For example, for many image restoration tasks it is difficult to collect large-scale real-world paired images.

Do you have any advice or best practices for these problems?

Best regards
Xin

@Luciennnnnnn Luciennnnnnn changed the title Hi, how about applying this conditional flow matching to tasks without paired training data? How about applying this conditional flow matching to tasks without paired training data? Jul 26, 2024
@atong01
Owner

atong01 commented Jul 26, 2024

Hi Xin,

I don't quite understand your question. Do you have paired data, or unpaired data? Both can work. Stochastic dynamics seem to help more with image-to-image tasks empirically.
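
To illustrate what I mean by paired vs. unpaired at the minibatch level, here is a minimal sketch with hypothetical tensors in plain PyTorch/SciPy (not our library's API):

```python
import torch
from scipy.optimize import linear_sum_assignment

B, D = 64, 32  # hypothetical batch size and dimensionality

# Paired data: (x0, x1) arrive pre-matched, e.g. (degraded, clean) image pairs.
x0 = torch.randn(B, D)
x1 = x0 + 0.1 * torch.randn(B, D)

# Unpaired data: draw x0 and x1 independently from the two datasets, so the
# coupling is just the product of the minibatch marginals.
x0 = torch.randn(B, D)
x1 = torch.randn(B, D) + 2.0

# Optional OT-style re-pairing of the unpaired minibatch: for equal-size
# uniform batches an optimal transport plan is a permutation of the batch.
cost = torch.cdist(x0, x1).pow(2)
_, perm = linear_sum_assignment(cost.numpy())
x1 = x1[torch.as_tensor(perm)]
```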

@Luciennnnnnn
Author

Sorry, I have rephrased my statement now. I'm talking mostly about unpaired data.
In addition, I have seen some papers note that stochasticity is beneficial in high-dimensional generation; how can we extend your work to an SDE?

@atong01
Owner

atong01 commented Jul 27, 2024

Hi Xin,

Yes, flow matching can be (and in fact usually is) applied to unpaired data. This is the setting considered in our paper and example notebooks. I would also point you towards the notebooks on stochastic versions in our repo, as well as related work such as diffusion Schrödinger bridge matching, Light Schrödinger bridge matching, and stochastic interpolants.

In image-to-image models, stochastic dynamics seem to provide better performance.
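
As a rough sketch of what the unpaired (independent-coupling) objective looks like in plain PyTorch, with a toy velocity network and hypothetical shapes (illustrative only, not our library's API):

```python
import torch
import torch.nn as nn

# Hypothetical velocity-field network v_theta(t, x); any architecture would do.
class VelocityNet(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, x):
        # Concatenate the scalar time onto each sample.
        return self.net(torch.cat([x, t[:, None]], dim=-1))

dim, sigma = 32, 0.0              # sigma > 0 gives the smoothed/stochastic variants
v_theta = VelocityNet(dim)
opt = torch.optim.Adam(v_theta.parameters(), lr=1e-4)

for step in range(1000):
    x0 = torch.randn(64, dim)          # minibatch from the source distribution
    x1 = torch.randn(64, dim) + 2.0    # independent minibatch from the target
    t = torch.rand(64)
    # Straight-line conditional path between the independently coupled endpoints.
    xt = (1 - t[:, None]) * x0 + t[:, None] * x1 + sigma * torch.randn_like(x0)
    ut = x1 - x0                       # conditional target velocity
    loss = ((v_theta(t, xt) - ut) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling then just integrates dx/dt = v_theta(t, x) from a source sample with any ODE solver; the Schrödinger bridge / stochastic interpolant variants change the conditional path and add a diffusion term, which is what the stochastic notebooks cover.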

@Luciennnnnnn
Author

Thank you very much for your advice; I'll take a look at it! These flow and Schrödinger bridge techniques are extremely interesting; however, I find it very difficult to clarify their relationship with diffusion. For instance, flow matching comes from efficiently learning neural ODEs, whereas diffusion is typically understood as learning the score function in SDEs. But I have noticed that for a given ODE/SDE, we can always find a corresponding SDE/ODE with the same marginal distribution [1], [2]. What is the difference between them?
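
For reference, the result I mean is made explicit in [1]: the forward SDE

$$\mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w$$

and its probability flow ODE

$$\frac{\mathrm{d}x}{\mathrm{d}t} = f(x, t) - \frac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x)$$

share the same time-marginals $p_t$, even though the per-sample trajectories (and hence the induced couplings between endpoints) differ.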

[1] https://arxiv.org/abs/2011.13456
[2] https://arxiv.org/abs/2401.08740
